http://arxiv.org/abs/2307.04169v1
20230709133123
Heavy Higgs Searches at the LHC in the light of a Left-Right Symmetric Model
[ "Sanchari Bhattacharyya" ]
hep-ph
[ "hep-ph" ]
Sanchari Bhattacharyya ([email protected]), University of Calcutta, 92 Acharya Prafulla Chandra Road, Kolkata 700009. We investigate a Left-Right symmetric model respecting the SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R local gauge symmetry. We study the interactions of the heavy neutral and charged scalars of this model, along with their production at a hadron collider and their subsequent decays. We analyze the collider searches for two heavy scalars, one charge neutral and the other singly charged. In both cases we consider their associated production at the Large Hadron Collider (LHC) and finally concentrate only on the leptonic final states. We perform both a cut-based and a multivariate analysis using a Boosted Decision Tree algorithm for the 14 TeV as well as the 27 TeV LHC run with 3000 fb^-1 integrated luminosity. As expected, the multivariate analysis shows a better signal-background discrimination than the cut-based analysis. In this article, we show that a charged Higgs of mass 750 GeV and 1.2 TeV can be probed with 2.77 σ (4.58 σ) and 1.38 σ (3.66 σ) significance, respectively, at the 14 (27) TeV run of the LHC. § INTRODUCTION It is well known that the Standard Model (SM) of particle physics has been extremely successful in describing the interactions of the elementary particles. The discovery of the Higgs boson at the Large Hadron Collider (LHC), CERN <cit.> has added another feather to its cap. Despite being so successful, it is still unable to explain some natural phenomena which are already experimentally established, for example Dark Matter (DM) or the tiny neutrino mass. It is also unknown whether the discovered Higgs boson is the only scalar in nature or whether there are other, heavier scalars that are similarly responsible for Electroweak Symmetry Breaking (EWSB). All of these unexplained facts motivate physicists to look beyond the SM. In the existing literature there are several studies which deal with the phenomenology of an extended Higgs sector <cit.>. Many of them argue that the picture with a single Higgs boson is not complete and that there may be other representations which give rise to additional Higgs bosons, heavier or lighter than the SM Higgs boson. We are hopeful that, with the advancement of technology, a detailed study of the properties of the SM Higgs boson, for example its decays, branching ratios (BR), couplings and precision measurements <cit.>, will become possible and will make the picture of the scalar sector clearer. An extended Higgs sector may also have some bearing on the dark matter sector, the Higgs mass hierarchy or the neutrino mass issue. In some models, a singlet scalar has been considered as a suitable DM candidate <cit.>. The presence of a charged Higgs may contribute to radiative neutrino masses <cit.>. In Left-Right symmetric models (LRSM), the mass generation of neutrinos has been studied with the help of extended triplet or singlet Higgs bosons <cit.>, <cit.>. Additional Higgs bosons can also play a crucial role in dealing with flavor problems <cit.>. The direct searches at the LHC have not confirmed the existence of such a scalar, which pushes the exclusion limits on the masses of such scalars to higher and higher scales.
In quest of such a complete theory, we investigate a model which respects the SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R (32121) <cit.> local gauge symmetry. This can be obtained via a two-step symmetry breaking from the E_6 <cit.> Grand Unified group with [ SU(3)_C ⊗ SU(3)_L ⊗ SU(3)_R ] as the intermediate step. We shall only be interested in the Left-Right (LR) symmetric gauge group, 32121, and in the phenomenology of its scalar sector. This model contains the fermions of the full 27-plet of E_6, among which 11 are heavy exotic fermions. Two of these heavy fermions, one Dirac-like and one Majorana-like, are suitable Dark Matter (DM) candidates <cit.>. This model gives rise to a two-component DM scenario. One of the DM candidates has a larger interaction rate than the other. The relic particle with the larger interaction rate satisfies the constraints from direct detection experiments when a dimension-6 effective four-fermion interaction is introduced with a new coupling strength. The other DM candidate, with a smaller interaction rate, is able to satisfy the relic density constraints only when the coannihilation channels between these two relic candidates open up. Together they thus present a promising DM scenario, and one can constrain the parameter space using the recent results of the direct detection of Dark Matter and the relic density measurements from the PLANCK collaboration. The detailed analysis regarding the Dark Matter aspects of this model has been discussed in <cit.>. Apart from the SM gauge bosons, the gauge sector comprises three heavy BSM gauge bosons. The scalars in 32121 arise from the (1, 3, 3̅) representation of SU(3)_C ⊗ SU(3)_L ⊗ SU(3)_R. They are color-singlet, heavy scalars. One of them must have properties similar to those of the SM Higgs boson. Some of the BSM Higgs bosons show interesting signatures at the High Luminosity LHC (HL-LHC). In this article we mainly analyse the properties of some of the heavy Higgs bosons and their signatures at the LHC 14 and 27 TeV high-luminosity runs. We describe the model briefly in section <ref>, where we mainly discuss the particle content of this model with special emphasis on the scalar sector of our interest. We also discuss the properties and production mechanisms of some exotic scalars, including the heavy neutral and singly charged scalars, at the LHC. In section <ref> we perform the signal-background phenomenology of these two BSM Higgs bosons considering a cut-based as well as a multivariate analysis. We shall see that the signal-background discrimination is much better in the case of the multivariate analysis. Finally we conclude in section <ref>. § DESCRIPTION OF THE 32121 MODEL We start with a Left-Right (LR) symmetric gauge group SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R, namely 32121. A two-step symmetry breaking of E_6 can lead to 32121, though we will not be interested in this specific symmetry breaking pattern. This model is rich in particles, which are listed in Table <ref> with their corresponding gauge quantum numbers. In this article, among all the particles, we will mainly study the interactions of some of the scalars which may generate interesting signatures at a hadron collider. The gauge bosons along with the matter fields present in this model are listed in Table <ref> with their corresponding gauge quantum numbers.
The Higgs multiplets present in this table are instrumental in breaking SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R down to SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y and then to SU(3)_C ⊗ U(1)_EM. L and R denote Left and Right respectively. One can calculate the electric charge Q as Q = T_3L + T_3R + Y_L/2 + Y_R/2, where Y_L/2 and Y_R/2 are noted in the last two columns of Table <ref> respectively. §.§ Gauge sector The gauge sector of the 32121 model has two charged gauge bosons and four neutral gauge bosons. In the charged sector, one has been identified with the SM W boson and the other field is the heavy W' boson. In the neutral gauge sector two fields have been identified with the SM Z and the photon. The remaining two fields are denoted Z' and A'. The masses and mixings, along with the interactions in the electroweak gauge sector, are controlled by the four gauge coupling constants g_2L, g_2R, g_1L and g_1R together with the vacuum expectation values (vevs) of the scalar fields. If one follows the symmetry breaking pattern of SU(2)_R ⊗ U(1)_L ⊗ U(1)_R to U(1)_Y, one obtains the relation 1/g_Y^2 = 1/g_2R^2 + 1/g_1L^2 + 1/g_1R^2, where g_Y denotes the U(1)_Y gauge coupling constant. g_2L is identified with the SU(2)_L gauge coupling constant of the SM, g. We have chosen g_2L = g_2R = g and g_1L = g_1R to keep our Lagrangian Left-Right symmetric. With these choices one can fix the gauge parameters of the 32121 model. On the other hand, the lower limits on the vevs of the Higgs fields can be fixed from the experimental lower limits on the heavy gauge boson masses. A detailed study of the gauge sector of the 32121 model can be found in <cit.>. §.§ Fermion sector As already mentioned, in the 32121 model we have 27 fermions. Their chiral components are as follows: L_L = [ ν_L; e_L ], L_R = [ ν_R; e_R ], Q_L = [ u_L; d_L ], Q_R = [ u_R; d_R ], Q_LS = q_SL, Q_RS = q_SR, l_S, L_B = [ N_1 E_1; E_2 N_2 ], L̃_B = [ N_2^c E_2^c; E_1^c N_1^c ]. L_L,R and Q_L,R contain the SM leptons and quarks respectively, along with a right-handed neutrino. The rest of the fields are exotic fermions. Q_LS and Q_RS form a four-component Dirac-like color-triplet quark, whereas N_1, N_2^c and E_1, E_2^c construct the neutral and singly charged Dirac-like leptons N and E respectively. l_S and l_S^c form a Majorana-like neutral fermion L_S. The interactions between the Higgs fields and the fermions are responsible for the fermion masses. The relevant Yukawa Lagrangian is as follows: ℒ_Y = y_qij Q̅_iL Φ_B Q_jR + ỹ_qij Q̅_iR Φ̃_B Q_jL + y_lij L̅_iL Φ_B L_jR + ỹ_lij L̅_iR Φ̃_B L_jL + y_sij Q̅_iLS Φ_S Q_jRS + y_LBij Tr[ L̅_iB L̃_jB ] Φ_S^c + (y_LSij/Λ) l̅_iS l_jS^c Φ_S Φ_S + y_BBij Tr[ L̅_iB Φ̃_B ] l_jS^c + y_ijBR L̅_iL L_jB Φ_R + y_ijBL L̅_iR L_jB^† Φ̃_L + y_ijLRS Q̅_iL Q_jRS^* Φ̃_L + y_ijRLS Q̅_iR Q_jLS^* Φ̃_R + h.c., where i,j = 1,2,3 are generation indices and the y's are Yukawa coupling constants. Φ_S^* is the complex conjugate of Φ_S, Φ̃_B = σ_2 Φ_B^* σ_2 and L̃_B = σ_2 L_B^* σ_2. The first line of Eq. <ref> shows the terms generating the masses of the SM fermions. The terms in the second line of Eq. <ref> are responsible for giving masses to the heavy exotic fermions. It is to be noted that we have written a dimension-5 term to generate the Majorana mass of l_S. The rest of the terms represent the mixings among the exotic and SM fermions. Here we note that we can only write a Dirac-like mass term for the neutrino in our model. In <cit.> the fermion sector of this model is discussed in more detail. §.§ Scalar sector of 32121 There are several scalar fields in this model.
The Higgs fields mainly responsible for the symmetry breaking 32121 ⟶ SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y ⟶ SU(3)_C ⊗ U(1)_EM are one Higgs bi-doublet (Φ_B), one left-handed (Φ_L) and one right-handed (Φ_R) weak doublet, and a singlet Higgs boson (Φ_S). Φ_S is an SU(2) singlet but carries U(1) hypercharge. These color-singlet scalars arise from the (1, 3, 3̅) representation of the Trinification gauge group ([ SU(3)_C ⊗ SU(3)_L ⊗ SU(3)_R ]). Among these fields, Φ_R is instrumental in breaking the LR symmetry. The alignment of the Higgs fields is as follows: Φ_B = [ 1/√(2)(k_1 + h_1^0 + i ξ_1^0) h_1^+; h_2^- 1/√(2)(k_2 + h_2^0 + i ξ_2^0) ], Φ_L = [ h_L^+; 1/√(2)(v_L + h_L^0 + i ξ_L^0) ], Φ_R = [ 1/√(2)(v_R + h_R^0 + i ξ_R^0); h_R^- ], Φ_S = 1/√(2)(v_S + h_S^0 + i ξ_S^0). The Higgs potential of the 32121 model, 𝒱, is composed of two parts, 𝒱_1 and 𝒱_2. It is given by 𝒱_1 = - μ_1^2 Tr(Φ_B^† Φ_B) - μ_3^2 (Φ_L^† Φ_L + Φ_R^† Φ_R) - μ_4^2 Φ_S^† Φ_S + λ_1 Tr[(Φ_B^† Φ_B)^2] + λ_3 ( Tr[Φ_B^† Φ̃_B] Tr[Φ̃_B^† Φ_B] ) + α_1 (Φ_S^† Φ_S)^2 + β_1 Tr[Φ_B^† Φ_B](Φ_S^† Φ_S) + γ_1 [(Φ_L^† Φ_L) + (Φ_R^† Φ_R)](Φ_S^† Φ_S) + ρ_1 [(Φ_L^† Φ_L)^2 + (Φ_R^† Φ_R)^2] + ρ_3 [(Φ_L^† Φ_L)(Φ_R^† Φ_R)] + c_1 Tr[Φ_B^† Φ_B][(Φ_L^† Φ_L) + (Φ_R^† Φ_R)] + c_3 [(Φ_L^† Φ_B Φ_B^† Φ_L) + (Φ_R^† Φ_B^† Φ_B Φ_R)] + c_4 [(Φ_L^† Φ̃_B Φ̃_B^† Φ_L) + (Φ_R^† Φ̃_B^† Φ̃_B Φ_R)] and 𝒱_2 = μ_BS Tr[Φ_B^† Φ̃_B] Φ_S^* + h.c. The parameters in 𝒱 are taken to be real. 𝒱 is also LR symmetric and obeys the gauge symmetry of the 32121 model. In the above, Φ̃_B ≡ σ_2 Φ_B^* σ_2. Apart from the above symmetries, 𝒱_1 is also symmetric under global phase transformations of the form Φ_B → e^(i θ_B) Φ_B, Φ_L → e^(i θ_L) Φ_L, Φ_R → e^(i θ_R) Φ_R and Φ_S → e^(i θ_S) Φ_S, whereas the terms present in 𝒱_2 explicitly break this symmetry. Now, if we choose both k_1 and k_2 to be non-zero, the terms proportional to λ_3 in 𝒱_1 give rise to bilinear terms like h_1^0 h_2^0 and h_1^+ h_2^-, which make 𝒱_1 break the aforementioned global symmetry spontaneously. This causes an extra, undesirable massless Goldstone mode. This problem of an unwanted Goldstone mode can be avoided in two ways. One simple option is to choose either k_1 or k_2 to be zero, which makes such bilinear terms (like h_1^0 h_2^0, h_1^+ h_2^-) vanish and renders the potential 𝒱_1 invariant under this global symmetry. Another way is to consider 𝒱_2 in addition to 𝒱_1 as the scalar potential. As 𝒱_2 breaks the global symmetry explicitly, we can get rid of the extra massless mode in this way. In <cit.> it is discussed in detail that the presence of 𝒱_2 does not affect the masses and mixings in the scalar sector in a significant way. Hence, we choose k_2 to be zero. A non-zero value of v_R is necessary to drive the Left-Right symmetry breaking, while v_S also needs to be non-zero as it is responsible for the U(1) symmetry breaking. A non-zero value of v_L, together with a non-zero v_R, would again spontaneously break the global symmetry mentioned in Eq. <ref> and give rise to an extra unwanted Goldstone mode. In order to avoid such a problem, we choose v_L = 0 <cit.>. There are 10 real parameters in the scalar potential of this model: λ_1, λ_3, ρ_1, ρ_3, c_1, c_3, c_4, α_1, β_1 and γ_1. We accept only those values of the quartic parameters which make the scalar potential bounded from below and which are allowed by the SM-Higgs signal strengths <cit.>. Among all the scalar fields, there are five neutral CP-even scalar fields, h^0, h_2^0, h_L^0, H_R^0 and H_S^0.
h^0 has been identified with the SM Higgs. The neutral CP-odd scalar sector contains two physical fields, ξ_2^0 and ξ_L^0. In addition to these scalars, there are two charged Higgs fields, H_1^± and H_L^±. h_2^0 and ξ_2^0 are mass degenerate at tree level. In a similar fashion, h_L^0 and ξ_L^0 also have the same mass. In this article, we will mainly concentrate on the scalars which belong to the Higgs bi-doublet Φ_B and discuss their properties. ∙ Scalars from the bi-doublet Higgs field: Apart from the SM-like Higgs, the bi-doublet Higgs field Φ_B comprises some exotic scalar fields, including a neutral CP-even (h_2^0) and a CP-odd (ξ_2^0) scalar and a singly charged Higgs H_1^±. At tree level the scalar (h_2^0) and the pseudoscalar (ξ_2^0) have equal masses. With k_2 = 0, m_h_2^0^2 = m_ξ_2^0^2 = 1/2 [ 4 λ_3 k_1^2 + (c_4 - c_3) v_R^2 ]. The zero value of k_2 prevents h_2^0 (ξ_2^0) from coupling to a pair of other scalars or gauge bosons, but they can interact with a pair of SM fermions (see Eq. <ref>). From Eq. <ref> it is evident that the coupling of h_2^0 (ξ_2^0) to the up-quark sector is proportional to the bottom-quark Yukawa coupling and vice versa. This implies that the coupling of h_2^0 (ξ_2^0) to a pair of bottom quarks is proportional to the top Yukawa coupling. To find the limit on the mass of h_2^0 (ξ_2^0), we have produced these heavy scalars in association with a pair of b-quarks, with a further decay into a b-quark pair. ATLAS and CMS have already performed searches for a heavy neutral scalar produced in association with a pair of b quarks at √(s) = 13 TeV <cit.>. Using this result, we compare the σ × BR obtained in the 32121 model with the rate measured by the ATLAS Collaboration and find a lower limit on m_h_2^0 (m_ξ_2^0). We find that m_h_2^0 (m_ξ_2^0) must be greater than 800 GeV <cit.>. At the LHC, one of the dominant ways of producing h_2^0 (ξ_2^0) is via gluon-gluon fusion. Unlike for the SM Higgs, here a triangle loop of bottom quarks mainly controls the production cross-section <cit.>. Another dominant way to produce h_2^0 (ξ_2^0) at the hadron collider is associated Higgs production, as discussed previously. One can produce h_2^0 (ξ_2^0) in association with two bottom quarks. This large production cross-section depends sensitively on the top Yukawa coupling, which in turn leads us to consider the associated production mechanism while generating the heavy scalars at the collider. We present the associated production cross-section and decay branching ratios of h_2^0 (ξ_2^0) in Fig. <ref>. We note that h_2^0 (ξ_2^0) has a dominant decay mode to bb̅ until the decay to H_1^± W^∓ becomes kinematically allowed. In this plot the mass of H_1^± has been set to 750 GeV. In order to generate such events, we have first implemented our model in <cit.> and then generated these processes using <cit.>. We have also taken into account the QCD K-factor (∼ 1.1) following ref. <cit.>. Coming now to the singly charged Higgs boson, H_1^±, it is the other scalar field of interest to us. H_1^± has a mass m_H_1^±^2 = 1/2 (c_4 - c_3)(k_1^2 + v_R^2). It can couple to SM fermions via Yukawa couplings (see Eq. <ref>) and also interacts with the SM W boson and the heavy neutral scalar h_2^0 (ξ_2^0). One dominant process for producing this charged scalar at the LHC is production in association with a top and a bottom quark. Other mechanisms include the Drell-Yan process or vector boson fusion.
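To illustrate how the two tree-level mass relations above tie the bi-doublet scalars together, the short sketch below evaluates m_h_2^0 (= m_ξ_2^0) and m_H_1^± for a set of inputs; the numerical values chosen for λ_3, c_3, c_4 and v_R are hypothetical placeholders used only to show the scaling, not benchmark values from this work.

```python
import math

def bidoublet_masses(lam3, c3, c4, k1, vR):
    """Tree-level bi-doublet scalar masses for k2 = 0 (all inputs in GeV or dimensionless).

    m_{h2}^2 = 0.5 * [4*lam3*k1^2 + (c4 - c3)*vR^2]
    m_{H1}^2 = 0.5 * (c4 - c3) * (k1^2 + vR^2)
    """
    m_h2_sq = 0.5 * (4.0 * lam3 * k1**2 + (c4 - c3) * vR**2)
    m_H1_sq = 0.5 * (c4 - c3) * (k1**2 + vR**2)
    return math.sqrt(m_h2_sq), math.sqrt(m_H1_sq)

# Hypothetical inputs: k1 fixed to the electroweak scale, vR far above it.
m_h2, m_H1 = bidoublet_masses(lam3=0.1, c3=0.0, c4=0.01, k1=246.0, vR=15000.0)
print(f"m_h2 = m_xi2 ~ {m_h2:.0f} GeV, m_H1 ~ {m_H1:.0f} GeV")
# For these inputs both masses are dominated by the (c4 - c3)*vR^2 term, and the
# squared-mass splitting is controlled only by the k1^2 pieces, so the two states
# come out close in mass.
```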
The ATLAS and CMS collaborations have both searched for a heavy charged Higgs boson at the 13 TeV run, with a decay to a top and a bottom quark <cit.>. In our analysis, we have also produced H_1^± in association with a top and a bottom quark, with a further decay of H_1^± again to a top and a bottom quark. We compare the event rates obtained in the 32121 model with the result provided by the ATLAS collaboration for this H_1^± search channel. We find m_H_1^± > 720 GeV <cit.>. While performing our analysis, we have considered producing H_1^± at the collider in association with t b. The leading contribution comes from g g → t̅ b H_1^+. In Fig. <ref>, we present the production cross-section of H_1^± at centre-of-mass energies of 14 TeV and 27 TeV along with the branching ratios of H_1^± to different final states. We observe that H_1^± mainly decays to a top and a bottom quark until the decay to h_2^0 (ξ_2^0) W^± becomes kinematically allowed. The H^± tb production cross-section varies from 0.15 (1) pb for m_H_1^± = 720 GeV to 0.005 (0.06) pb for m_H_1^± = 1500 GeV at the 14 (27) TeV LHC run. In the next section, we present the signal-background study of h_2^0 (ξ_2^0) and H_1^± production at the LHC at the 14 and 27 TeV runs with 3000 fb^-1 integrated luminosity. § COLLIDER PHENOMENOLOGY In the previous section we discussed the production mechanisms and subsequent decays of the two scalars, h_2^0 (ξ_2^0) and H_1^±. The heavy neutral and charged scalars have some exotic decay channels. In this section we concentrate on the signal-background analysis of these two scalars at the LHC, where we consider such exotic decay channels of the heavy scalars. One of the interesting channels to probe h_2^0 (ξ_2^0) at the LHC is the following (see Fig. <ref>): p p → h_2^0 (ξ_2^0) b b̅ → (H_1^± W^∓) b b̅ → (t b̅ l^- ν̅_l) b b̅ → b b̅ b b̅ l^+ l^- ν_l ν̅_l. Similarly, to look for H_1^± at the hadron collider, one may consider (see Fig. <ref>) p p → H_1^± t b → (t b̅) t̅ b → (W^- b̅ b) W^+ b b̅ → b b̅ b b̅ l^+ l^- ν_l ν̅_l. We shall now briefly discuss these two channels, with leptonic decays of the W bosons and not-too-large backgrounds, in the context of the HL-LHC at 14 TeV and 27 TeV centre-of-mass energy. Fig. <ref> shows the dominant leading-order Feynman diagrams for the production of the heavy neutral scalar h_2^0 (ξ_2^0) and the singly charged scalar H_1^± at the LHC. We denote the production of h_2^0 (ξ_2^0) and H_1^± as signal 1 (S_1) and signal 2 (S_2) respectively. Both signals discussed above have similar final states, with four b jets, two oppositely charged leptons and missing transverse energy. This specific combination in the final state makes these signals distinctive, since the probability of obtaining similar states from the Standard Model is quite low. Among all the background processes, t t̅ + jets production is the most dominant. Other significant backgrounds include b b̅ t t̅ production, h t t̅ production, Z t t̅ production, multijet processes, etc. Initially we set the transverse momenta of b-tagged jets, light jets and leptons to p_T^b > 40 GeV, p_T^j > 30 GeV and p_T^l > 10 GeV respectively. We also put an initial cut on the missing transverse energy, E_T^miss > 20 GeV. We perform this analysis for four chosen benchmark points, corresponding to four different sets of masses of the scalars and their decay properties. S_1 depends on the branching ratio of h_2^0 into the H_1^± W^∓ channel, which is non-zero only above a certain mass of h_2^0 (see Fig. <ref>).
S_2, on the other hand, depends on the H_1 → t b branching ratio, which is non-zero throughout the mass range of H_1^± (see Fig. <ref>) but is reduced once the H_1 → h_2^0 (ξ_2^0) W decay channel opens up. In Table <ref>, the four choices of benchmark points are presented. For BP1 and BP4, the masses of h_2^0 (ξ_2^0) and H_1^± are such that both signals, S_1 and S_2, are active, as BR(h_2^0 → H_1^± W^∓) and BR(H_1 → t b) are non-zero, whereas for BP2 and BP3 the BR(h_2^0 → H_1^± W^∓) is zero, turning signal 1 off. Furthermore, in the case of BP1, for S_1 and S_2, h_2^0 (ξ_2^0) and H_1 dominantly decay to H_1^± W^∓ and t b̅ respectively, with the highest BR in the corresponding channels. For BP2, BR(H_1 → t b) remains the same as in BP1, keeping signal 2 unchanged. In the case of BP3, however, H_1 dominantly decays to h_2^0 W^±, making BR(H_1 → t b) small. In a similar fashion, for BP4 both signals are on, as for BP1, but with reduced branching ratios to the corresponding channels and with reduced cross-sections. It is important to note here that BP2 and BP3 practically imply the production of the charged Higgs (H_1^±) only, whereas BP1 and BP4 involve the production of both scalars. §.§ Cut-based Analysis In this section we present the signal-background analysis of h_2^0 (ξ_2^0) and H_1^± production at the LHC using a cut-based approach. We have implemented our model in <cit.> and generated the signal and background events with <cit.> using the parton distribution functions of <cit.>. To take showering and hadronization into account, we have passed the events through <cit.>, already built into MadGraph, and used <cit.> for detector simulation. We demand at least three b-tagged jets and at least one charged lepton (e or μ) in the final state. Such a choice effectively suppresses the number of multijet events, which allows us to ignore this background. With these demands, we plot the distributions of two important variables, the transverse momentum of the leading b-tagged jet, p_T^b, and the scalar sum of the p_T of all visible jets, H_T, at the 14 and 27 TeV HL-LHC runs with 3000 fb^-1 integrated luminosity. In Figs. <ref> and <ref>, we present these distributions for BP1 only. In Figs. <ref> and <ref>, the distributions of the p_T of the leading b-tagged jet and of H_T are shown for each signal and background process for BP1 at 14 and 27 TeV centre-of-mass energy respectively, with an integrated luminosity of 3000 fb^-1. The processes corresponding to the different color codes are indicated inside the plots. It is clear from the plots that an appropriate cut on the p_T of the leading b-tagged jet and on H_T can effectively reduce the background events relative to the signal events. We note that, for the other benchmark points, the distributions of the signals are not significantly different from those shown for BP1. We have optimized the cuts in such a way that the significance of the signal does not vary significantly across the benchmark points. ∙ Event Selection As already mentioned, we keep only those events with at least three b-tagged jets in the final state. Using the information from the distribution plots in Figs. <ref> and <ref>, we apply and optimize cuts on the variables we have considered, i.e., p_T^b and H_T, so that we reduce a substantial amount of background while keeping as many signal events as possible.
In other words, we apply our cuts on the suitable variables in such a way that we obtain the maximum significance 𝒮, where 𝒮 is given by 𝒮 = √(2[(S+B) log((S+B)/B) - S]), and S and B stand for the number of signal and background events respectively. In Table <ref>, we show the optimized cut flows providing the maximum significance (see Eq. <ref>) for all the benchmark points at the 14 TeV HL-LHC run. Here we select only those events in which the transverse momentum of the leading b-jet satisfies p_T^b > 240 GeV and H_T > 990 GeV. As both signals have similar final states, in the case of BP1 and BP4 S is effectively the sum of the two signal event numbers, S = S_1 + S_2. In Table <ref>, we present the case of the 27 TeV LHC run with 3000 fb^-1 integrated luminosity. Here we find that we obtain the maximum significance when we select only those events that pass the criteria p_T^b > 230 GeV and H_T > 680 GeV. From Tables <ref> and <ref> we observe that the significance for the 14 TeV HL-LHC run is rather small and improves considerably for the 27 TeV HL-LHC run. BP1 provides a significance (𝒮) of 0.7 for the 14 TeV run, which increases to 5.1 for the 27 TeV run at the HL-LHC (Table <ref>). The results obtained using the cut-based approach, however, do not make this method very useful. This motivates us to explore our results using a multivariate analysis, which we discuss in the following. §.§ Multivariate Analysis In this section, we concentrate on the results obtained using the Boosted Decision Tree (BDT) algorithm. This part of the analysis has been performed in the framework of <cit.>. Decision trees are classifiers which separate signal-like from background-like events. A suitable variable is chosen, and the application of a proper cut on this variable separates the signal from the background as well as it can. One can choose a number of variables and train on the signal and background sample events. Modification of the weights corresponding to the sample events creates new boosted decision trees. After training and testing on the signal- and background-like events, this method outperforms the generic cut-based analysis by providing a much better discrimination between signal and background events. To perform the BDT analysis we have considered 11 variables, providing the best possible signal significance, which are the following. * The transverse momentum of the leading b-tagged jet, p_T^b1. * The missing transverse energy, E_T^miss. * The transverse momentum of the leading lepton, p_T^l1. * Δη^bibj between the three leading b-tagged jets. * Δϕ^bibj between the three leading b-tagged jets. * Δη^l1l2 between the leading and sub-leading lepton. * Δϕ^l1l2 between the leading and sub-leading lepton. Table <ref> shows the ranks of the above variables according to their relevance for both the 14 and 27 TeV signal-background studies. The rank of a variable is determined by how many times it is used to split decision tree nodes. In both the 14 and 27 TeV runs, E_T^miss is the most important variable. In our study, the important parameters of the BDT analysis have been set as follows: we use 850 trees with a maximum depth of 3 and AdaBoost as the boosting algorithm. The normalized distributions of the above variables are shown in Fig. <ref>. The blue-shaded (red-dashed) distributions are for the signal (background).
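For readers who want to reproduce the spirit of this classification outside TMVA, the sketch below trains a boosted-decision-tree classifier with the same nominal settings quoted above (850 trees, maximum depth 3, AdaBoost) and evaluates the counting significance defined earlier in this section. It uses scikit-learn rather than TMVA, the feature array and per-event weights are hypothetical placeholders, and it is a minimal illustration under those assumptions, not the analysis code used in this work.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def significance(s, b):
    """S = sqrt(2*[(S+B)*ln((S+B)/B) - S]) for expected signal s and background b."""
    return np.sqrt(2.0 * ((s + b) * np.log((s + b) / b) - s))

# Hypothetical inputs: X holds the 11 BDT variables per event, y the truth label,
# w per-event weights normalised to the expected yields at 3000 fb^-1.
rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 11))
y = rng.integers(0, 2, size=20000)
w = np.full(20000, 0.05)

X_tr, X_te, y_tr, y_te, w_tr, w_te = train_test_split(X, y, w, test_size=0.5, random_state=1)

# 850 trees of depth 3 with AdaBoost, mirroring the configuration quoted above
# (the keyword is named base_estimator in older scikit-learn releases).
bdt = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=850,
    random_state=1,
)
bdt.fit(X_tr, y_tr, sample_weight=w_tr)

# Count weighted signal and background events passing a cut on the BDT score.
scores = bdt.decision_function(X_te)
sel = scores > 0.0
s_yield = w_te[sel & (y_te == 1)].sum()
b_yield = w_te[sel & (y_te == 0)].sum()
print(f"S = {s_yield:.1f}, B = {b_yield:.1f}, significance = {significance(s_yield, b_yield):.2f}")
```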
It should be mentioned that, while doing this analysis, all four backgrounds have been taken into consideration, despite the fact that t t̅ + jets production is the most dominant one. The linear correlation matrix for the variables of our choice is shown in Fig. <ref> for the benchmark point BP1 only. The correlations between any two variables are given in % in this figure. One can see that, in most cases, the variables are not significantly correlated. The signal and background events have been trained for each of the four benchmark points. A partial overtraining is quite possible for a boosted decision tree algorithm and must be avoided. It can be tested by comparing the performance on the training and testing samples. We have ensured that the effect of overtraining of signal and background is minimal in our case using the Kolmogorov-Smirnov (KS) test. In general, the KS score should be ∼ 0.1; values above 0.01 may be acceptable if the score remains stable under changes in the statistics of the signal and background events. In Fig. <ref>, one can see that the value of the KS probability is ∼ 0.187 (0.428) and ∼ 0.195 (0.184) for signal (background) for BP1 at the 14 TeV and 27 TeV HL-LHC runs. Half of the signal and background events have been used for training and the other half of the same sample for testing. After successful training and testing of the signal and background samples, the BDT algorithm improves the results for both the 14 and 27 TeV HL-LHC runs compared to the cut-based analysis. The TMVA response of the classification shows a good discrimination between signal and background, which is displayed in Fig. <ref> for the BP1 benchmark point at both the 14 and 27 TeV HL-LHC runs. The significance, estimated using the expression in Eq. <ref>, improves significantly compared to the cut-based scenario, as shown in Table <ref> for both the 14 and 27 TeV LHC runs for all the benchmark points. In Fig. <ref> the signal efficiency, background efficiency and signal significance are presented for the two benchmark points BP1 and BP2 at the 14 and 27 TeV HL-LHC runs, where the results for BP2 correspond solely to probing a charged Higgs at the hadron collider. The significances obtained from the BDT analysis for each case are collected in Table <ref>. The significance obtained for BP1 is ∼ 3.87, which is much better than the significance achieved in the cut-based analysis, as one would expect. Similar improvements are observed for the other benchmark points BP2, BP3 and BP4 at the 14 TeV run. The results obtained for the 27 TeV HL-LHC run are even more encouraging. From the results for BP2 and BP3, one can hope to probe, in the 32121 model, a charged Higgs of mass 750 GeV and 1.2 TeV with 2.77 σ (4.58 σ) and 1.38 σ (3.66 σ) significance respectively at the 14 (27) TeV HL-LHC run (see Figs. <ref> (b), <ref> (d) for BP2). § CONCLUSIONS We start with an E_6 GUT inspired gauge theory SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R, namely 32121. This gauge group can arise after a two-step symmetry breaking of E_6. We have mainly concentrated on the Left-Right symmetry breaking from SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R down to SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y. The fermions in this model belong to the full 27-plet of E_6. The Higgs bosons of this model arise from the (1, 3, 3̅) representation of SU(3)^3.
The vevs (k_1 = 246 GeV, v_R > 14.7 TeV, v_S > 12.61 TeV) of the scalar fields have been constrained from the masses of the W, W' and A' gauge bosons respectively. The gauge sector of the 32121 model contains five gauge couplings, whose values have been fixed following the pattern of Left-Right symmetry breaking. Apart from the SM gauge bosons this model contains the W', Z' and A' gauge bosons, where A' is the hallmark of the extra U(1) gauge symmetry. In the fermionic sector, among all the fermions of the 27-plet of E_6, two color-singlet, charge-neutral fermions are suitable DM candidates. The scalar sector of the 32121 model contains a number of Higgs bosons, among which one is the SM-like Higgs. In this article we have mainly focused on two exotic heavy Higgs bosons: a charge-neutral CP-even Higgs field, h_2^0, together with its CP-odd partner ξ_2^0, which have similar masses and couplings, and a singly charged Higgs, H_1^±. Both h_2^0 (ξ_2^0) and H_1^± arise from the Higgs bi-doublet Φ_B. h_2^0 (ξ_2^0) dominantly decays to bb̅ until the decay channel to H_1^± W^∓ becomes kinematically accessible. H_1^+ dominantly decays to tb̅ until the decay channel to h_2^0 (ξ_2^0) W^+ becomes kinematically allowed. We have used this information on the exotic decay channels while discussing the signatures of these scalars at the LHC. For h_2^0 (ξ_2^0) we have chosen the dominant production mechanism of this scalar, which in our case is associated Higgs production. The production cross-section of h_2^0 (ξ_2^0) in association with bb̅ is 0.3 (3) pb at the 14 (27) TeV LHC run for a 1 TeV mass. H_1^±, on the other hand, has been produced in association with tb. The production cross-section of H_1^± in association with tb is 0.04 (0.35) pb at the 14 (27) TeV LHC run for a 1 TeV scalar mass. We have then performed a detailed signal-background analysis of the two heavy Higgs bosons, h_2^0 (ξ_2^0) and H_1^±. The associated production of both of them gives rise to similar final states with three or more b-tagged jets, more than one charged lepton and missing transverse energy. The dominant background arises from tt̅ production with jets. The other backgrounds arise from bb̅tt̅, htt̅ and Ztt̅ production. Depending on the masses and decay properties of the heavy neutral and charged scalars in our model, we choose four benchmark points (BP) to perform our analysis. To begin with, we have presented our results using the cut-based analysis for the four benchmark points. We have applied a series of cuts on suitable variables, such as the transverse momentum (p_T) of the leading b-tagged jet and the scalar sum of the p_T of all jets (H_T). For BP1, at 14 (27) TeV the significance is 0.7 (5.1), whereas for the other benchmark points it is somewhat lower, except for BP4 at the 14 TeV run (𝒮 ∼ 1.1). In order to distinguish signal events from background-like events more accurately, we have used a better algorithm within a multivariate analysis, choosing the BDT method. With this method, as expected, a better significance is achieved for all the benchmark points. For BP1, at 14 (27) TeV the significance is 3.87 (10.45), which clearly shows a better signal-background discrimination. With the results we obtained, one can hope to probe a heavy charged Higgs of mass 750 GeV in the 32121 model with 2.77 σ (4.58 σ) significance at the 14 (27) TeV LHC run with 3000 fb^-1 integrated luminosity.
Acknowledgement: SB acknowledges financial support from DST, Ministry of Science and Technology, Government of India in the form of an INSPIRE-Senior Research Fellowship. SB acknowledges Prof. Anindya Datta for his valuable suggestions throughout the analysis. SB also acknowledges Gourab Saha and Nivedita Ghosh for their help in dealing with some technical issues. SB is thankful to Prof. Partha Konar for the insightful discussions. widestlabel higgs_atlas ATLAS collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, https://doi.org/10.1016/j.physletb.2012.08.020Phys. Lett. B716 (2012) 1-29, arXiv: [https://arxiv.org/pdf/1207.7214.pdf1207.7214]. higgs_cms CMS collaboration, S. Chatrchyan et al., Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC, https://doi.org/ 10.1016/j.physletb.2012.08.021Phys. Lett. B 716 (2012) 30, arXiv: [https://arxiv.org/pdf/1207.7235.pdf1207.7235]. Higgs-review M. Mühlleitner, M. O. P. Sampaio, R. Santos and J. wittbrodt, Phenomenological comparison of models with extended Higgs sectors, https://doi.org/10.1007/JHEP08(2017)132JHEP08 (2017) 132, arXiv: [https://arxiv.org/pdf/1703.07750.pdf1703.07750]; J. Steggemann, Extended Scalar Sectors, https://doi.org/10.1146/annurev-nucl-032620-043846Annu. Rev. Nucl. Part. Sci. 2020. 70:197–223 and references therein. higgs-precision J. Alison et al., Higgs boson potential at colliders: status and perspectives, https://doi.org/10.1016/j.revip.2020.100045Review in Physics (2020) 100045, arXiv: [https://arxiv.org/pdf/1910.00012.pdf1910.00012]; G. Heinrich, Collider Physics at the Precision Frontier, https://doi.org/10.1016/j.physrep.2021.03.006Physics Reports, Volume 922, 2021, Pages 1-69, arXiv: [https://arxiv.org/pdf/2009.00516.pdf2009.00516]. singlet-DM C. E. Yaguna, The singlet scalar as FIMP dark matter, https://doi.org/10.1007/JHEP08(2011)060 JHEP08(2011)060, arXiv: [https://arxiv.org/pdf/1105.1654.pdf1105.1654]; R. Campbell, S. Godfrey, H. E. Logan and A. Poulin, Real singlet scalar dark matter extension of the Georgi-Machacek model, [https://doi.org/10.1103/PhysRevD.95.016005Phys. Rev. D 95, 016005]; The GAMBIT Collaboration, Status of the scalar singlet dark matter model, [https://doi.org/10.1140/epjc/s10052-017-5113-1Eur. Phys. J. C (2017) 77:568]; P. Das, M. K. Das and N. Khan, A new feasible dark matter region in the singlet scalar scotogenic model, [https://doi.org/10.1016/j.nuclphysb.2021.115307Nuclear Physics B, Vol. 964, 115307]; numass E. Ma and O. Popov, Pathways to naturally small Dirac neutrino masses, https://doi.org/10.1016/j.physletb.2016.11.027Phys. Lett. B.2016.11.027, arXiv: [https://arxiv.org/pdf/1609.02538.pdf1609.02538]. triplet-neutrinomass R. N. Mohapatra and P. B. Pal, Massive neutrinos in physics and astrophysics, https://doi.org/10.1142/5024World Sci. Lect. Notes Phys.72, 1 (2004); N. G. Deshpande, J. F. Gunion, B. Kayser, and F. Olness, Left-right-symmetric electroweak models with triplet Higgs field, https://doi.org/10.1103/PhysRevD.44.837Phys. Rev. D 44, 837; E. Ma and U. Sarkar, Neutrino Masses and Leptogenesis with Heavy Higgs Triplets, https://doi.org/10.1103/PhysRevLett.80.5716Phys. Rev. Lett. 80 (1998) 5716-5719, arXiv: [https://arxiv.org/pdf/hep-ph/9802445.pdfhep-ph/9802445]. Nu_mass1 C. Hati, S. Patra, P. Pritimita and U. 
Sarkar, Neutrino Masses and Leptogenesis in Left-Right Symmetric Models: A Review From a Model Building Perspective, https://doi.org/10.3389/fphy.2018.00019Front. Phys., 06 March 2018. 2hdm A. Vicente, Higgs Lepton Flavor Violating Decays in Two Higgs Doublet Models, https://doi.org/10.3389/fphy.2019.00174Front. Phys., fphy.2019.00174; D. Das, P M. Ferreira, A. P. Morais, I. Padilla-Gay, R. Pasechnik and J. P. Rodrigues, A three Higgs doublet model with symmetry-suppressed flavour changing neutral currents, arXiv: [https://arxiv.org/pdf/2106.06425.pdf2106.06425]; S. Iguro, Y. Muramatsu, Y. Omura and Y. Shigekami, Flavor physics in the multi-Higgs doublet models induced by the left-right symmetry, https://doi.org/10.1007/JHEP11(2018)046JHEP11 (2018) 046, arXiv: [https://arxiv.org/pdf/1804.07478.pdf1804.07478]. 32121 S. Bhattacharyya and A. Datta, Phenomenology of an E_6 inspired extension of Standard Model: Higgs sector, https://doi.org/10.1103/PhysRevD.105.075021Phys. Rev. D 105, 075021, arXiv: [https://arxiv.org/pdf/2109.08524.pdf2109.08524]. E6 Y. Achiman and B. Stech, Quark-Lepton Symmetry and mass scales in an E6 unified gauge model, https://doi.org/10.1016/0370-2693(78)90584-1Physics Letters B, 77(4-5), 389-393; Q. Shafi, E6 as a unifying gauge symmetry, https://doi.org/10.1016/0370-2693(78)90248-4Physics Letters B, 79(3), 301-303; F. Gursey, P. Ramond and P. Sikivie, A universal gauge theory model based on E6, https://doi.org/10.1016/0370-2693(76)90417-2Physics Letters B, 60(2), 177-180; R. Barbieri, D. V. Nanopoulos and A. Masiero, Hierarchial fermion masses in E6, https://doi.org/10.1016/0370-2693(81)90589-XPhysics Letters B, 104(3), 194-198; G. Dvali and Q. Shafi, On proton stability and the gauge hierarchy problem, https://doi.org/10.1016/s0370-2693(97)00395Physics Letters B, 403(1-2), 65-69. 32121DM S. Bhattacharyya and A. Datta, Dark Matter perspective of Left-Right symmetric gauge model, https://doi.org/10.1016/j.nuclphysb.2023.116197Nucl. Phys. B, 991, 116197 (2023), arXiv: [https://arxiv.org/pdf/2206.13105.pdf2206.13105]. Atlas_h2 ATLAS Collaboration, Search for heavy neutral Higgs bosons produced in association with b-quarks and decaying to b-quarks at √(s) = 13 TeV with the ATLAS detector, https://doi.org/10.1103/PhysRevD.102.032004Phys. Rev. D 102, 032004 (2020), arXiv: [https://arxiv.org/pdf/1907.02749.pdf1907.02749]. Cms_h2 CMS Collaboration, Search for beyond the standard model Higgs bosons decaying into a bb̅ pair in pp collisions at √(s) = 13 TeV, https://doi.org/10.1007/JHEP08(2018)113JHEP 08 (2018) 113, arXiv: [https://arxiv.org/pdf/1805.12191.pdf1805.12191]. feynrules A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, FeynRules 2.0 - A complete toolbox for tree-level phenomenology, https://doi.org/10.1016/j.cpc.2014.04.012Comput.Phys.Commun. 185 (2014) 2250-2300, arXiv: [https://arxiv.org/pdf/1310.1921.pdf1310.1921]. madgraph J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer et al., The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, https://doi.org/10.1007/JHEP07(2014)079JHEP07 (2014) 079, arXiv: [https://arxiv.org/pdf/1405.0301.pdf1405.0301]. h2-qcd-k-factor S. Dawson, C. B. Jackson, L. Reina and D. Wackeroth, Higgs Production in Association With Bottom Quarks at Hadron Colliders, https://doi.org/10.1142/S0217732306019256Mod.Phys.Lett. A21 (2006) 89-110, arXiv: [https://arxiv.org/pdf/hep-ph/0508293.pdfhep-ph/0508293]. b_running A. V. 
Bednyakov, B. A. Kniehl, A. F. Pikelner and O. L. Veretin, On the b-quark running mass in QCD and the SM, https://doi.org/10.1016/j.nuclphysb.2017.01.004Nucl.Phys. B916 (2017) 463-483, arXiv: [https://arxiv.org/pdf/1612.00660.pdf1612.00660]. AtlasCharged1 ATLAS Collaboration, Search for charged Higgs bosons decaying into a top quark and a bottom quark at √(s) = 13 TeV with the ATLAS detector, https://doi.org/10.1007/JHEP06(2021)145JHEP 06 (2021) 145, arXiv: [https://arxiv.org/pdf/2102.10076.pdf2102.10076]. AtlasCharged2 ATLAS Collaboration, Search for charged Higgs bosons decaying into top and bottom quarks at √(s) = 13 TeV with the ATLAS detector, https://doi.org/10.1007/JHEP11(2018)085JHEP 11 (2018) 085, arXiv: [https://arxiv.org/pdf/1808.03599.pdf1808.03599]. CmsCharged CMS Collaboration, Search for charged Higgs bosons decaying into a top and a bottom quark in the all-jet final state of pp collisions at √(s) = 13 TeV, https://doi.org/10.1007/JHEP07(2020)126JHEP 07 (2020) 126, arXiv: [https://arxiv.org/pdf/2001.07763.pdf2001.07763]. parton-dist R. D. Ball et al., Parton Distributions with LHC data, https://doi.org/10.1016/j.nuclphysb.2012.10.003Nucl. Phys. B867, 244 (2013), arXiv: [https://arxiv.org/pdf/1207.1303.pdf1207.1303]. pythia8 T. Sjostrand, S. Mrenna and P. Z. Skands, PYTHIA 6.4 Physics and Manual, https://doi.org/10.1088/1126-6708/2006/05/026JHEP 0605:026,2006, arXiv: [https://arxiv.org/pdf/hep-ph/0603175.pdfhep-ph/0603175]. delphes DELPHES 3 Collaboration, J. de Favereau et al., A modular framework for fastsimulation of a generic collider experiment, https://doi.org/10.1007/JHEP02(2014)057J. High Energ. Phys. 2014, 57 (2014), arXiv: [https://arxiv.org/pdf/1307.6346.pdf1307.6346]. tmva A. Hoecker et al., TMVA - Toolkit for Multivariate Data Analysis, arXiv: https://doi.org/10.48550/arXiv.physics/0703039physics/0703039.
http://arxiv.org/abs/2307.07434v1
20230714155919
Combining multitemporal optical and SAR data for LAI imputation with BiLSTM network
[ "W. Zhao", "F. Yin", "H. Ma", "Q. Wu", "J. Gomez-Dans", "P. Lewis" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Leaf area index (LAI) is an important biophysical parameter and plays a significant role in winter wheat yield prediction. Persistent clouds dramatically affect the acquisition of crop conditions with Sentinel-2 remote sensing images, which may cause unreliable yield predictions. Synthetic Aperture Radar (SAR) can provide all-weather imagery, and the ratio between the cross- and co-polarized channels (C-band) has a high correlation with the time series LAI over winter wheat areas. Here, the time series of Sentinel-1 VH/VV is evaluated for imputing LAI, so as to increase its spatial-temporal density. We propose using a bidirectional LSTM (BiLSTM) network to impute the time series LAI. The half mean squared error of the predicted time series at each time step is used as the loss function. During training, the mean loss over the observations in the mini-batch is calculated. Two test regions are selected, one in the south of Germany and another in the North China Plain. Only the Sentinel-1 VH/VV and the Sentinel-2 generated LAI acquired over the growing season are used during training. The Sentinel-2 generated LAI, which has been calibrated by in-situ measurements, is treated as the true value during training. Extensive experimental results show that the LAI imputation results provided by the BiLSTM method are much better than those of traditional regression methods, such as exponential and polynomial functions. It can capture the nonlinear dynamics between multiple time series. BiLSTM is robust for imputing the time series LAI in different winter wheat fields with different growing conditions. It can even provide satisfactory results when fewer Sentinel-2 images are available. Since the LSTM only sees information from the past, it cannot provide results as good as the BiLSTM, especially over the senescence period. In conclusion, the results indicate that the BiLSTM network can be used to impute LAI with time-series Sentinel-1 VH/VV data and Sentinel-2 generated LAI data. This method can also be extended to solve other time series missing value imputation problems. § INTRODUCTION Leaf area index (LAI) characterizes crop canopies with a dimensionless quantity. Timely, continuous and accurate monitoring of crop LAI is critical for winter wheat yield forecasting. Optical sensors with high spatial and spectral resolutions have been widely used for generating LAI. However, the passive acquisition mode is easily affected by cloudy weather in the crop growing period between seedling and flowering, which leads to many missing values <cit.>. Currently, more and more available multisensor, multitemporal and multispectral data allow the estimation of missing values and fill the temporal monitoring information gaps. SAR sensors are ideal for this task because of their all-time and all-weather capabilities, the very good accuracy of their acquisition geometry, and the absence of atmospheric effects on the amplitude data. Thanks to the Copernicus programme, more and more Sentinel-1 and Sentinel-2 data are acquired and freely accessible. In this paper, we mainly pay attention to the integration of the multitemporal Sentinel-1 Ground Range Detected (GRD) VH and VV ratio (VH/VV) and the limited available Sentinel-2 generated LAI to impute more LAI values. There are three popularly used approaches for dealing with missing values: regression <cit.>, interpolation and matrix completion <cit.>.
Interpolation methods can provide a smooth result, but they need points that describe the whole temporal behaviour. The time series of Sentinel-2 generated LAI and of Sentinel-1 both have the capacity to describe the whole growth of winter wheat, with fast increasing values in the spring or rainy season and a decrease in the senescence period <cit.>. Multiple papers have shown the temporal correlation between time series vegetation descriptors (LAI and NDVI) and multitemporal Sentinel-1 data <cit.>. Even though the winter wheat condition may vary in different fields, the multisensor time series have a high feature correlation at the same geolocation. With these temporally correlated time series, we turn the LAI imputation problem into a multivariate time series regression task. We apply the BiLSTM network to this task, as it can take into account both the forward and the backward information. The following section briefly introduces the BiLSTM theory. The preprocessing of the multisensor time series data is introduced in section 3. In section 4, we evaluate the spatial and temporal imputation performance of the trained BiLSTM network and compare it with 3 other methods. The conclusion is drawn in the final section. § METHODOLOGY BiLSTM is a popularly used non-parametric modelling technique. In the following, we briefly introduce the general BiLSTM formulation and its application to the time series LAI imputation problem. The architecture of the BiLSTM network [Neural Networks, Types, and Functional Programming: http://colah.github.io/posts/2015-09-NN-Types-FP/] is shown in Fig. <ref>. A BiLSTM layer learns bidirectional long-term dependencies between time steps of a time series. These dependencies can be useful when learning from the complete time series at each time step. Relative insensitivity to gap length is an advantage of the LSTM over RNNs and other sequence learning methods in numerous applications. A large variance of the training data values may generate an unstable model, which causes a large generalization error. Thus, given a time series vector X, we normalize the training predictors to have zero mean and unit variance: x = (X - 𝐄(X)) / √(Var(X)). For each element in the input sequence, each layer computes the following functions: i_t = σ(W_ii x_t + b_ii + W_hi h_(t-1) + b_hi); f_t = σ(W_if x_t + b_if + W_hf h_(t-1) + b_hf); g_t = tanh(W_ig x_t + b_ig + W_hg h_(t-1) + b_hg); o_t = σ(W_io x_t + b_io + W_ho h_(t-1) + b_ho); c_t = f_t ∘ c_(t-1) + i_t ∘ g_t; h_t = o_t ∘ tanh(c_t); where i_t, f_t, g_t, o_t are the input gate, forget gate, cell gate and output gate, respectively, c_t is the cell state at time t, h_t is the hidden state at time t, x_t is the input at time t, σ is the sigmoid function, and ∘ denotes the Hadamard product. The network is then followed by a fully connected layer ỹ_t = σ(W h_t + b), a dropout layer to avoid overfitting, and a fully connected layer with the same length as the input features, so as to capture all the information of the entire sequence at each prediction. The half mean squared error of the predicted responses at each time step is used as the imputation error: ℒ = 1/(2M) ∑_i=1^M ∑_j=1^R (ŷ_ij - y_ij)^2, where M is the sequence length and R is the number of responses. This acts as the regression layer and computes the loss over the training data in the mini-batch. Finally, we obtain the imputed ŷ by minimizing ℒ(ŷ, y).
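A compact PyTorch sketch of the architecture described above (a BiLSTM layer, a fully connected layer, dropout, and an output layer over the time steps, trained with a half mean squared error over observed values) is given below; the layer sizes, the mask handling and the variable names are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class BiLSTMImputer(nn.Module):
    """Bidirectional LSTM mapping a multivariate time series (e.g. S1 VH/VV plus
    sparse S2 LAI) to an imputed LAI sequence of the same length."""

    def __init__(self, n_features=2, hidden=60, fc=50, dropout=0.5):
        super().__init__()
        self.bilstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.fc1 = nn.Linear(2 * hidden, fc)
        self.drop = nn.Dropout(dropout)
        self.fc2 = nn.Linear(fc, 1)  # one response (LAI) per time step

    def forward(self, x):            # x: (batch, time, n_features)
        h, _ = self.bilstm(x)        # (batch, time, 2*hidden)
        return self.fc2(self.drop(torch.sigmoid(self.fc1(h)))).squeeze(-1)

def half_mse(pred, target, mask):
    """Half mean squared error over observed time steps only (mask = 1 where an
    LAI observation exists); missing steps do not contribute to the loss."""
    diff = (pred - target) * mask
    return 0.5 * (diff ** 2).sum() / mask.sum().clamp(min=1)

# Hypothetical mini-batch: 8 pixels, 69 time steps, standardized inputs.
x = torch.randn(8, 69, 2)
lai = torch.randn(8, 69)
mask = (torch.rand(8, 69) > 0.7).float()   # ~30% of steps have an S2 LAI value

model = BiLSTMImputer()
loss = half_mse(model(x), lai, mask)
loss.backward()
print(float(loss))
```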
The architecture of the LSTM only contains the information flow from the past forward (from S_0 to S_i in Fig. <ref>). During the application of the BiLSTM method, only the temporal points which have at least one acquisition value are used (Tab. <ref>). In the spatial domain, the length of the time series may differ. After standardization, the missing values are treated as one kind of information. We regard missing values as variables of the BiLSTM graph, which are involved in the backpropagation process <cit.>. § STUDY AREA AND TIME SERIES PREPROCESSING In this section, we introduce the preparation of the multisensor time series. Two areas are selected to train and evaluate the methods. One is located to the north of Munich, Germany. The other is located in the Hengshui area, Hebei Province, China. All the Sentinel images are prepared using Google Earth Engine. The preprocessing of the Sentinel-1 and Sentinel-2 images is the same as that used in the Sentinel toolbox. The further processing of the data is introduced as follows. §.§ Preprocessing of the Sentinel-2 LAI time series Sentinel-2 data are systematically acquired at high spatial resolution (10 m) with a 5-day revisit time (with 2 satellites) at the same observation angle. Thanks to the Copernicus Programme, they are openly accessible. After atmospheric correction of the Sentinel-2 TOA reflectance, we retrieve LAI using an inverse emulator [SIAC on GEE: https://github.com/MarcYin/SIAC_GEE], as shown in <cit.>. All the time series LAI over the test areas are generated from Sentinel-2 images. Only the areas in the Sentinel-2 images which are less affected by clouds are selected. 10 Sentinel-2 images acquired over the Hengshui area between March 2017 and June 2017 are selected, and 12 Sentinel-2 images acquired between March 2017 and August 2017 over the north of Munich are used. §.§ Preprocessing of the Sentinel-1 time series The Sentinel-1 GRD images obtained in interferometric wide swath mode over the same areas during a similar acquisition period as the Sentinel-2 images are collected. To reduce the speckle effect, we use multitemporally denoised images <cit.> to compute the Sentinel-1 VH and VV ratio. Specifically, given a time series of M dB values [I_1(s), I_2(s), I_3(s), ···, I_M(s)] indexed by time t, the denoised value at location s can be calculated as Ĵ_t(s) = (μ^ML_t(s)/M) ∑_t'=1^M I_t'(s)/μ^ML_t'(s), where I_t(s) is the calibrated dB value, J_t(s) the noise-free value, μ^ML_t(s) is the initial estimate obtained by multi-looking (with a window size of 14×14), s is the position in the image and t is the serial number of the M images. The variance of the denoised value can then be computed as Var_t(s) = Ĵ^ML_t(s) (M+N-1)/(M N L). Finally, we compute the dual-polarization ratio as Ĵ^VH_t(s)/Ĵ^VV_t(s) <cit.>. To ensure that all the time series SAR data I_t(s) capture the same signal from the object, we only use images acquired at a similar incidence angle. 214 images (orbit 117) are used for the multitemporal SAR image denoising over the Munich test area, while more than 220 images acquired through ascending orbits over the Hengshui test area are used. All the SAR images are well registered with the Sentinel-2 images obtained at the same place. § EXPERIMENTAL RESULTS AND DISCUSSION In this section, we demonstrate the spatial and temporal imputation performance of the BiLSTM network by comparing it with some popularly used methods.
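Before turning to the results, the ratio-based multitemporal denoising step described in the preprocessing above can be sketched in a few lines of NumPy. The boxcar multi-looking, the array shapes and the synthetic data are assumptions for illustration; the sketch only reproduces the denoised-value equation and the VH/VV ratio, not the full preprocessing chain.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multitemporal_denoise(stack, win=14):
    """Temporal ratio filter following the equation above.

    stack: (M, H, W) array of calibrated backscatter images I_t(s).
    Returns J_hat_t(s) = mu_t(s)/M * sum_t' I_t'(s)/mu_t'(s), where mu_t(s) is a
    multi-looked (boxcar, win x win) estimate of each image.
    """
    M = stack.shape[0]
    mu = np.stack([uniform_filter(img, size=win) for img in stack])
    ratio_sum = (stack / mu).sum(axis=0)          # sum_t' I_t'(s) / mu_t'(s)
    return mu * ratio_sum / M

# Hypothetical VH and VV time series (M = 30 images of 100x100 pixels).
rng = np.random.default_rng(0)
vh = rng.gamma(shape=4.0, scale=0.02, size=(30, 100, 100))
vv = rng.gamma(shape=4.0, scale=0.08, size=(30, 100, 100))

vh_dn = multitemporal_denoise(vh)
vv_dn = multitemporal_denoise(vv)
vh_vv_ratio = vh_dn / vv_dn                       # time series feature fed to the BiLSTM
print(vh_vv_ratio.shape)
```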
In the test areas, most of the Sentinel-2 LAI and Sentinel-1 VH/VV values are acquired at different times. To keep the spatial resolution of LAI, the polynomial and exponential regression parameters are computed once for each time series. During the imputation, the Sentinel-1 VH/VV time series are interpolated according to the Sentinel-2 acquisition times. Since the non-LSTM methods cannot handle arbitrary lengths of the time series, we only compare them on the test with real time series. LSTM and BiLSTM have the same parameters before training: 60 hidden units are used, followed by a fully connected layer of size 50, a dropout layer with a 0.5 probability of dropping out input elements, and a fully connected layer with the same length as the input features. Depth helps in learning longer-term dependencies, which is partly the reason people stack LSTMs in sequence-to-sequence models. The Sentinel acquisition frequency is very high over the Germany test areas, so these data can show all the temporal dynamic features of winter wheat, and we use them to train the networks. During the whole growing period, the length of the time series may differ in different places, caused either by the weather or by the sensors' acquisition settings. To ensure the trained network is robust to different situations, the training time series have been randomly sampled with the shortest length equal to 8 and the longest length equal to 69. Only the pixels which correspond to winter wheat are used. §.§ Temporal imputation 4 different time series (Fig. <ref>) are selected to show the imputation differences among the methods. There are some gaps in the LAI time series between April and May, which correspond to the rainy season. Sentinel-2 images cannot capture the crop growing conditions because of the cloudy weather. However, the LAI values acquired over this period are very important for crop yield prediction. For the fourth test area (fourth row of Fig. <ref>), the saturation of SAR backscattering values may be caused by the similar side-looking angle and sowing direction, a similarity which may lead the SAR sensor to capture low volume backscattering, or by the large multi-looking window size used during the multitemporal single-channel denoising. This phenomenon commonly occurs in the North China Plain, especially when all the ascending-orbit acquisitions are used for the multitemporal denoising. When the multisensor time series have similar temporal features, the polynomial and exponential imputation methods can provide satisfactory results. These two methods are much simpler and faster than LSTM and BiLSTM. However, when the two features have a complex relationship, they cannot provide good results, as shown in the fourth row of Fig. <ref>. When the training samples contain too much noise, the multisensor time series have a nonlinear relationship which is too hard to describe with a low-degree regression model. Generally, one computes the regression coefficients between one feature and another and finds the optimized parameters based on least-squares estimation. This leads to inverted results that are smaller than the reference target feature in the peak part. LSTM and BiLSTM can take both features into account and generate better imputation results. As we can see from Fig. <ref>, all the imputation results provided by LSTM and BiLSTM are better than those of the former two methods. Even when the Sentinel-1 signals have complex changes, they still provide good results.
During the imputation, they can use the limited number of available LAI points as weights to adjust the imputed time series. §.§ Spatial performance of the imputation All the imputation is performed in the temporal domain for each geolocated point. In this section, we analyze the spatial performance of the imputation by comparing the results with a Sentinel-2 derived LAI image and some in-situ measurements. During the imputation, the parameters of the polynomial and exponential regression methods are computed separately for each time series. As observed in Tab.<ref>, both BiLSTM and LSTM obtain smaller root-mean-square error (RMSE) results. With reference to Fig.<ref>, the larger RMSE values of LSTM may be caused by the larger imputed LAI values in May. The large RMSE values of the polynomial and exponential regression methods are caused by their underestimated imputation results. From Fig.<ref> we can see that LSTM and BiLSTM provide much better imputation results in the spatial domain than the polynomial and exponential regression methods. Their imputation results have better spatial features, with smooth values within the same field and clear line features in the boundary areas. Compared with the LSTM results (Fig.<ref>(5)), BiLSTM also provides better results in some narrow fields (northeast of the test area in Fig.<ref>). The spatial distribution of the BiLSTM results is much more similar to the real LAI. § CONCLUSION In this paper, we successfully applied the BiLSTM network to impute LAI with multivariate time series data. Extensive experiments show the effectiveness of its imputation over the spatial and temporal domains. Both BiLSTM and LSTM clearly outperform the traditional regression methods. The BiLSTM network imputes the missing values in the time series well, especially during the fast-growing season of winter wheat. Even when the temporal Sentinel-1 VH/VV values change in a complex way, the BiLSTM network still provides effective imputation results. In the future, we will take spatial correlation information into account for missing value imputation. The spatiotemporal multisensor LAI imputation process is currently separated into several steps; we will look for ways to merge them.
http://arxiv.org/abs/2307.04475v1
20230710105439
Modelling opinion misperception and the emergence of silence in online social system
[ "Daniele Vilone", "Eugenia Polizzi" ]
physics.soc-ph
[ "physics.soc-ph" ]
media/
http://arxiv.org/abs/2307.07325v1
20230714130210
Representation Learning With Hidden Unit Clustering For Low Resource Speech Applications
[ "Varun Krishna", "Tarun Sai", "Sriram Ganapathy" ]
eess.AS
[ "eess.AS", "cs.AI", "cs.LG" ]
Journal of Class Files, Vol. XX, No. X, January 2022 Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals Representation Learning With Hidden Unit Clustering For Low Resource Speech Applications Varun Krishna, Student Member, IEEE, Tarun Sai, Student Member, IEEE, and Sriram Ganapathy, Senior Member, IEEE, V. Krishna, S. Tarun and S. Ganapathy are with the Learning and Extraction of Acoustic Patterns (LEAP) lab, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India, 560012. e-mail: {varunkrishna, tarunsai, sriramg}@iisc.ac.in August 12, 2023 ===================================================================================================================================================================================================================================================================================================================================================================================== The representation learning of speech, without textual resources, is an area of significant interest for many low resource speech applications. In this paper, we describe an approach to self-supervised representation learning from raw audio using a hidden unit clustering (HUC) framework. The input to the model consists of audio samples that are windowed and processed with 1-D convolutional layers. The learned “time-frequency” representations from the convolutional neural network (CNN) module are further processed with long short term memory (LSTM) layers which generate a contextual vector representation for every windowed segment. The HUC framework, allowing the categorization of the representations into a small number of phoneme-like units, is used to train the model for learning semantically rich speech representations. The targets consist of phoneme-like pseudo labels for each audio segment and these are generated with an iterative k-means algorithm. We explore techniques that improve the speaker invariance of the learned representations and illustrate the effectiveness of the proposed approach on two settings, i) completely unsupervised speech applications on the sub-tasks described as part of the ZeroSpeech 2021 challenge and ii) semi-supervised automatic speech recognition (ASR) applications on the TIMIT dataset and on the GramVaani challenge Hindi dataset. In these experiments, we achieve state-of-art results for various Zerospeech tasks. Further, on the ASR experiments, the HUC representations are shown to improve significantly over other established benchmarks based on wav2vec, HuBERT and Best-RQ. Self-supervised learning, hidden unit clustering, contrastive loss, Zerospeech. Low resource ASR. § INTRODUCTION Spoken language processing using raw speech, without any textual resources, has received wide spread interest recently <cit.> and are useful for various applications like zero-resource spoken language modeling, low-resource speech recognition and speech synthesis. The key challenge in these tasks is the automatic discovery of sub-word units of speech <cit.> that are speaker invariant and consistent. The learning of unsupervised audio representations <cit.> has been largely investigated with self-supervised loss functions involving contrastive learning  <cit.>, vector quantization <cit.>, clustering <cit.> or autoregressive predictive coding <cit.>. 
For unsupervised learning of audio representations, the use of restricted Boltzmann machines <cit.>, variational learning <cit.> and generative adversarial networks <cit.> have been explored to learn acoustic/modulation filters. Recently, a family of models, termed as wav2vec <cit.>, were investigated for self-supervised representation learning applications like speech recognition and synthesis. The initial approach <cit.> used the contrastive predictive coding <cit.> with 1-D convolution layers. The subsequent vector-quantization based approach, wav2vec-vq <cit.>, was inspired by the quantization module proposed by Oord et. al. <cit.>. The output of the quantization module is employed in training bidirectional encoder representations from transformer (BERT) language models <cit.>. The recent approach, wav2vec 2.0 <cit.>, further added a learnable code-book based model. The ZeroSpeech series of challenges <cit.> considered an open call for participation in speech representation learning. The challenges bench-marked various methods for self-supervised learning based on word similarity and spoken language modeling tasks. The zero shot metrics used in the ZeroSpeech 2021 challenge <cit.> considered acoustic and linguistic measures. Several approaches based on Gaussian mixture modeling <cit.>, HMMs <cit.>, and deep learning methods <cit.> have been explored for the ZeroSpeech tasks. In the recent years, self-supervised learning methods has also been investigated for ZeroSpeech applications. These efforts have investigated various model architectures like convolutional networks  <cit.>, recurrent networks <cit.>, transformer models <cit.>, and conformer models <cit.>. For spoken language modeling tasks, most of prior works use a separate clustering model on the embeddings. Further, most of the prior works generate representations that are general purpose and do not attempt any disentanglement of the factors to enhance the semantic richness of the representations. In this paper, we explore a framework of hidden unit clustering (HUC), where the deep representations learned by a neural network are optimized using a categorical loss. This paper extends our previous work on acoustic unit discovery <cit.>. The work is inspired by prior work on deep clustering by Caron et. al. <cit.> for self-supervised learning from the raw audio data. In our work, the HUC model is shown to enable the learning of discriminative representations that are less impacted by speaker variations. The experimental validation of the proposed approach is performed on two settings. The first setting is the ZeroSpeech 2021 challenge task. The phoneme recognition experiments are performed on the TIMIT dataset while the large vocabulary speech recognition experiments are performed on the semi-supervised track of the GramVaani Hindi ASR challenge[<https://sites.google.com/view/gramvaaniasrchallenge/home>]. The novel contributions from this current work over the previous approach <cit.> are, * We propose a self-supervised learning approach using a combination of contrastive loss and HUC loss that generates speaker invariant representations. The semantic richness in the representations is achieved using an utterance level mean normalization performed on the context vector representations. * We develop a data sampling technique to generate the pseudo-phoneme cluster labels. The labels learned using the sampling approach is shown to be effective in generating categorical representations. 
* We propose a self-supervised dimensionality reduction method on the representations, driven by the pseudo-labels. This is based on gradient boosted decision tree modeling <cit.>. * A set of semantic task evaluations illustrate the performance gains using the proposed framework. Using the data from the ZeroSpeech 2021 challenge, we show that the proposed HUC based self-supervision achieves state-of-art results on 7 out of the 8 sub-tasks. We also show that the representations from the proposed model perform well in phoneme recognition and low-resource ASR experiments. * A detailed analysis on the representations and the evaluations on non-semantic tasks shows that the proposed HUC framework achieves the semantic richness at the cost of being a general purpose speech representation for non-semantic speech tasks. § BACKGROUND §.§ Related prior work §.§.§ Contrastive predictive coding The contrastive predictive coding <cit.> network consists of two sets of processing layers, with inputs being the raw audio samples. Let x[n] denote the audio samples that are windowed into short-term frames (typically of 20ms duration). The output of the convolutional block contains f dimensional filterbank representations for each frame t. These outputs, sampled with a stride of 10 ms, are denoted as z_t, are then fed to the LSTM layer. The LSTM layer uses a context window of v previous frames, (z_t,z_t-1,...,z_t-v). The output of the LSTM layer is referred to as the context vector c_t. The contrastive loss is computed based on the prediction of the K future filterbank representations, {z_t+k}_1 ≤ k ≤ K by minimizing the following loss function <cit.>, ℒ_t = -1/K∑_k=1^Klog (exp( z_t+k^T W_k c_t)/∑_ z∈𝒩exp(z̃^T W_k c_t)) Here, 𝒩 denotes a subset of negative examples chosen randomly, and W_k forms a set of weights used in the prediction of the k^th future embedding z_t+k. The CPC model used for the experiments reported in this paper employs negative sampling from within the same utterance (within speaker negative sampling). §.§.§ wav2vec models : The wav2vec <cit.> architecture emulates the 1-D convolutional layer of the CPC model. wav2vec: Instead of the LSTM layer used in the CPC model, the representations z_t are processed with a second set of convolutional layers with a window of v contextual embeddings, (z_t,z_t-1,...,z_t-v). The model outputs the contextual vector representation c_t, for each time step t. The model learns to distinguish a sample z_t+k from distractor samples z̃. ℒ_k = - ∑_t=1^T-k log (σ ( z_t+k ^T h_k( c_t)) + λ𝔼_ z∼ p_n log (σ (- z ^T h_k ( c_t))) Here, z̃ is drawn from a proposal distribution p_n, h_k defines a linear feed-forward neural network layer and σ(.) denotes sigmoid activation function. Further, the weight factor λ is a hyper-parameter. wav2vec-vq: In this model <cit.>, the quantized context vector representations are used in the prediction task. The model architecture resembles that of wav2vec model, with two convolutional networks f: 𝒳↦𝒵 and g: Ẑ↦𝒞 for feature extraction and aggregation, and an additional vector quantization module q: 𝒵↦Ẑ wav2vec 2.0: The wav2vec 2.0 <cit.> builds over the ideas of wav2vec-vq <cit.>. The wav2vec 2.0 uses a multi-layer convolution feature encoder f : 𝒳↦𝒵 with the raw audio as input. A portion of the features are masked before the transformer based aggregator g : 𝒵↦𝒞. The context vectors corresponding to the unmasked regions are used to predict the quantized <cit.> representations of the masked regions. 
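To make the CPC objective above concrete, a minimal PyTorch sketch of the contrastive loss with within-utterance negative sampling is given below. This is an illustrative reconstruction, not the reference implementation: the tensor shapes, the number of negatives, the inclusion of the positive in the softmax denominator, and the unvectorized loops (kept for readability) are assumptions.

import torch
import torch.nn.functional as F

def cpc_loss(z, c, W, K=12, n_neg=10):
    """z: (T, d) frame embeddings, c: (T, d) context vectors,
    W: list of K prediction matrices of shape (d, d), one per step ahead."""
    T = z.size(0)
    total, count = 0.0, 0
    for k in range(1, K + 1):
        for t in range(T - k):
            pred = W[k - 1] @ c[t]                   # W_k c_t
            pos = torch.dot(z[t + k], pred)          # score of the true future frame
            neg_idx = torch.randint(0, T, (n_neg,))  # within-utterance distractors
            neg = z[neg_idx] @ pred                  # (n_neg,) distractor scores
            logits = torch.cat([pos.view(1), neg])
            # -log softmax of the positive against the distractors
            total = total + F.cross_entropy(logits.view(1, -1),
                                            torch.zeros(1, dtype=torch.long))
            count += 1
    return total / count

In a production implementation the inner loops would be vectorized over time steps and batched over utterances; the sketch only fixes the form of the loss.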
§.§.§ Deep clustering The deep clustering approach, proposed for learning visual representations, combines feature learning and clustering into a single neural model <cit.>. Unlike the VQ based approach, the deep clustering does not require the context vectors to be approximated as code-book entries. For speech modality, deep clustering has been explored in the HuBERT framework. §.§.§ HuBERT The HuBERT <cit.> model consists of convolutional layers followed by a set of transformer encoder layers. During the pre-training stage, transformer <cit.> encoder consumes the masked acoustic <cit.> features U from the convolutional encoder. The output H of the transformer is used to predict the discrete targets z∈ [K] (K-class categorical variable) through a linear projection layer. The HuBERT adopts an iterative clustering and training process. For the first iteration, the mel frequency features, categorized into K clusters, are used as discrete targets. For subsequent iterations, discrete targets are derived by clustering the hidden layer representations. §.§ Contrast with proposed HUC model Many of the prior works perform self supervised learning (SSL) on the English read speech in Librispeech <cit.> dataset, while illustrating the quality of the representations using speech recognition tasks <cit.>. In English read speech utterances, the two key streams of information are the semantic (content) variability and the non-semantic (speaker) variability. Most of the prior works do not attempt to explicitly disentangle the semantic and non-semantic information streams as they aim to derive general purpose audio representations. In contrast, the proposed work a) identifies a way to quantify the non-semantic component of a given speech utterance, b) uses the non-semantic component in a diverse sampling of the acoustic units, and, c) factors this component out of the final representations in order to enhance the semantic encoding of speech. We illustrate this contrast experimentally using various downstream tasks that probe the semantic and non-semantic encoding in the speech representations derived from the HUC framework. Our work, while also using deep clustering, differs from HuBERT <cit.> in the following ways - i) We do not employ any masking of the audio regions, ii) A mean normalized representation from the LSTM aggregator is used for clustering, iii) A data sampling framework, based on diversity of pseudo-speaker clusters is used to design the code-books, iv) The model is trained with a combination of clustering and CPC losses. With various ablation experiments, we empirically highlight the advantages of these processing steps. § PROPOSED WORK The overview of the proposed framework is shown in Fig. <ref>. The initial processing steps consist of a block of 1-D convolution layers, similar to the CPC model <cit.>. The convolutional layers generate filter-bank features denoted as z_t^i, where t is the frame index and i is the utterance index. Subsequently, the LSTM layer generates context vectors c_t^i. The regularized HUC loss is used to train the model. The HUC approach attempts to generate categorical labels using a clustering framework applied on the CPC based context vector representations. For this purpose, the context vectors c_t^i from a pre-trained model are clustered using an unsupervised k-means clustering. The labels from this clustering step are considered to be the psuedo-phoneme labels for training the model shown in Fig. <ref>. 
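A minimal sketch of this pseudo-label generation step is given below, assuming the context vectors c_t^i of all training utterances have already been extracted from a pre-trained CPC model. scikit-learn's k-means stands in for whatever clustering implementation was actually used, and k = 200 anticipates the value selected later in the paper.

import numpy as np
from sklearn.cluster import KMeans

def make_pseudo_labels(context_vectors, k=200, seed=0):
    """context_vectors: list of (T_i, d) arrays, one per utterance.
    Returns the fitted k-means model and per-utterance frame-level labels."""
    frames = np.concatenate(context_vectors, axis=0)   # (sum_i T_i, d)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(frames)
    labels = [km.predict(c) for c in context_vectors]  # pseudo-phoneme label per frame
    return km, labels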
§.§ Analyzing the mean context vector In our analysis, shown in Fig <ref>, we use a pre-trained CPC model trained on the Librispeech dataset, employing within utterance negative sampling of distractors, to compute the utterance level mean, c^i, of the context vector representation. We plot the t-distributed stochastic neighbourhood embedding (t-SNE) <cit.> of the mean context vectors, for 50-100 utterances derived from 7 speakers chosen randomly from the dataset. As seen in this plot, the utterance level mean of the context vectors primarily captures the speaker information. §.§ Data sampling for k-means training With a large number of utterances, and with a frame-rate of 100 Hz (10ms windows), the number of context vectors c_t^i available for clustering is significantly large (>100M). Secondly, the pseudo-phoneme classes (classes for training the HUC model) have to satisfy two goals namely, * consistency - should be sufficiently broad to represent different variants of the same underlying sound spoken by different speakers and in different acoustic contexts. * conciseness - should be succinct to capture the fine acoustic properties that make it distinctive from other units. The data sampling process attempts to achieve the right trade-off between the consistency and conciseness. The goal of the data sampling process is to create a subset of the training data, that are used in the clustering. The algorithm is outlined in Alg. 1. The key objective is to preserve only the utterances from diverse “speakers” in the corpus. The first step in the data sampling is to perform a k-means clustering of the mean context vectors, c^i, derived from all the utterances in the training data. This step generates M pseudo-speaker centroids of the training data. The optimal value of M is determined using knee point algorithm <cit.>for k-means. The knee point algorithm finds the point of inflection in inertia. Specifically, the sum of squared distances of a point from the nearest cluster center is computed for different choices of M (number of clusters). The point of inflection is used as the optimal value for M. Further, using the inter-centroid distance matrix (the matrix containing the pair-wise distance between the M centroids), the set of N centroids (N < M) are identified as extreme cluster centroids (the centroids which are maximally distant), as outlined in Alg. 1. In the second step, any given utterance i of the training data is mapped to one of the M pseudo-speaker centroids, by generating the mean context vector c^i and finding the closest centroid. The utterances, whose mean context vector associates to one of the N extreme centroids are alone used in the HUC. This sampling process thus retains only the maximally diverse utterances for clustering. The value of N is a hyper-parameter and is obtained through validation experiments. The sampled set of mean normalized context vectors are input to the k-means clustering. We use the Euclidean distance for the clustering. The clustering process generates the pseudo-phoneme labels for each time frame of the context vector. The k-means clustering approach is run for 100 iterations and the final cluster centers are stored. These cluster centers are used to obtain the pseudo-phoneme labels for each frame of the audio utterance in the training dataset. The model architecture, shown in Fig. <ref>, is trained from random initialization using the frame level pseudo-phoneme labels. 
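A hedged sketch of the data sampling procedure (Alg. 1) is shown below: k-means on the utterance-level means identifies M pseudo-speaker centroids, the N mutually most distant centroids are kept, and only utterances whose mean maps to one of those centroids are retained for the pseudo-phoneme clustering. The greedy farthest-centroid selection and the starting point of the greedy search are assumptions, since only the overall idea is specified in the text.

import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def sample_diverse_utterances(utt_means, M=250, N=30, seed=0):
    """utt_means: (n_utts, d) array of utterance-level mean context vectors.
    Returns indices of the utterances kept for k-means training."""
    km = KMeans(n_clusters=M, n_init=10, random_state=seed).fit(utt_means)
    centroids = km.cluster_centers_
    D = cdist(centroids, centroids)             # inter-centroid distance matrix

    # Greedily pick N centroids that are maximally distant from each other.
    chosen = [int(np.argmax(D.sum(axis=1)))]    # start from the most isolated centroid
    while len(chosen) < N:
        rest = [i for i in range(M) if i not in chosen]
        # the next centroid maximizes its minimum distance to the chosen set
        next_c = max(rest, key=lambda i: D[i, chosen].min())
        chosen.append(next_c)

    assignment = km.labels_                     # pseudo-speaker id of each utterance
    keep = np.where(np.isin(assignment, chosen))[0]
    return keep

The values M = 250 and N = 30 follow the settings reported for the Librispeech 100h subset later in the paper.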
§.§ Representation pre-processing In order to generate representations that are speaker invariant, we remove the utterance level mean, c^i, from the context vectors c_t^i during the HUC model training. The mean normalized context vectors are denoted as ĉ_t^i. §.§ Loss function The HUC loss is the cross-entropy loss, where the logits layer (a linear layer followed by softmax non-linearity) on the mean-normalized context vectors, ĉ_t, (Fig. <ref>) is used the predict the pseudo-phoneme labels. We define the cluster loss as, ℒ_C = - ∑ _t=1^T l_t ^T log (S(f(ĉ_t))), where S denotes the softmax function, f denotes the logits layer (a linear layer with soft-max non-linearity), and l_t denotes the one-hot encoded label vector (cluster identity) for the context vector c_t. The logits layer allows the non-linear mapping of the context vectors c_t. As the clustering is a non-linear process, the logits layer transformation with soft-max activation enables the model to learn the cluster labels. The number of clusters k defines the granularity of the categorical space. A small value of k reduces the number of target classes in the clustering network and allows for a faster training of the models. However, a larger value of k allows the model to capture the finer acoustic categories. Our experiments indicate that a value of k=200 provides the best performance in terms of the ABX metric <cit.> on the ZeroSpeech 2021 challenge dataset. We also explore a regularization of the HUC loss with the CPC loss (defined in Eq. <ref>). The regularized loss is given by, ℒ_reg = ℒ_𝒞 + λℒ_cpc Here, λ is the regularization factor. The clustering loss is a frame-level loss while the CPC loss encourages the prediction of future frames conditioned on the previous time-frames. Thus, the regularized loss allows the benefits of categorical learning (from the clustering loss) without over-fitting. In our experiments, we have chosen λ << 1 to emphasize the importance of the clustering loss in the representation learning framework. §.§ Dimensionality reduction For the ABX tasks, we further enrich the categorical nature of the context vector representations, c_t^i, using a discriminative dimensionality reduction approach. This is achieved by retaining only those dimensions that are the most predictive of the pseudo-phoneme labels. We build a gradient boosted decision tree model <cit.> with the inputs being the context vector representations, c_t^i and the targets being the corresponding pseudo-phoneme labels. The decision tree model <cit.> allows the ranking of the feature dimensions and the top D dimensions are retained. § ZEROSPEECH EXPERIMENTS §.§ Data The training data is derived from LibriSpeech <cit.> and Libri-light dataset <cit.>. The LibriSpeech corpus contains data from audio-books that are part of the LibriVox project, and it consists of 1000 hours of read speech sampled at 16kHz. The training data is derived from more than 1000 speakers, almost equally split across the two genders, with each speaker providing 25-30 minutes of audio. The Libri-light is also derived from open-source audio books from the LibriVox project. It contains over 60K hours of audio. In the ZeroSpeech 2021 challenge <cit.>, the data is provided without any textual or speaker information. §.§ HUC model training The CPC-big <cit.> model, trained on the Librispeech 960h dataset, generates the initial context vector representations. 
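A minimal sketch of the regularized objective defined above is given below. The logits layer and the precomputed CPC loss term are assumed to be supplied by the surrounding model, the cross-entropy is averaged over frames, and λ = 1e-4 anticipates the value chosen in the hyper-parameter search reported later; none of this is the authors' exact implementation.

import torch
import torch.nn.functional as F

def huc_regularized_loss(c, labels, logits_layer, cpc_loss_value, lam=1e-4):
    """c: (T, d) context vectors of one utterance,
    labels: (T,) pseudo-phoneme cluster ids,
    logits_layer: linear layer mapping d -> number of clusters k,
    cpc_loss_value: precomputed CPC loss for the same utterance."""
    c_hat = c - c.mean(dim=0, keepdim=True)         # utterance-level mean normalization
    logits = logits_layer(c_hat)                    # (T, k)
    cluster_loss = F.cross_entropy(logits, labels)  # frame-level HUC loss L_C
    return cluster_loss + lam * cpc_loss_value      # L_reg = L_C + lambda * L_cpc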
The encoder architecture of the CPC-big model contains 5 layers of 1-D convolutional networks with kernel sizes of 10,8,4,4,4 and stride of 5,4,2,2,2, respectively. The autoregressive model is a 4 layer LSTM network with a hidden dimension of 512 units. The encoder architecture of the HUC model is similar to the CPC-big model. However, 2 layers of the LSTM network, with a hidden dimension of 256 units, are used for the autoregressive modeling. The HUC model is trained with the regularized loss function, defined in Eq. <ref>. The number of pseudo-speaker clusters N, HUC units k, and regularization factor λ are hyper-parameters, which are set using the ABX metric on the ZeroSpeech development data. The pseudo phoneme labels extracted from the full librispeech data (LS-960h) are used to train the HUC model. The models are trained for 200 epochs with a patience factor of 5. For all the other tasks, like the language modeling on the ZeroSpeech dataset and the ASR experiments, the hyper-parameters are not modified. Following the HUC model training, the pseudo-phoneme sequences, obtained as the frame-level class predictions from the model, extracted on the LS-960h data, are used for training the BERT based language model. §.§ Performance metrics The metrics defined in the Zerospeech-2021 challenge <cit.> are computed on the development and test set. The results reported in the paper are for Track-1 - speech based language modeling task. Phonetic <cit.>: Given a triplet of tri-phone words, A,B and X, where A and B differ in the center phoneme, while A and X are two different utterances of the same word, the ABX metric computes the fraction of instances when A and X are more distant than A and B. The average angular distance of the representations along the dynamic time warped (DTW) path is used to compute the distance between the utterances of the words. The ABX score computed when all the tri-phone words are chosen from same speaker is called the “within” speaker ABX, while the case when the words A and B are derived from the same speaker, while X is chosen from a different speaker is called the “across” speaker ABX. Experiments are also reported on the “clean” and “other” splits of the Librispeech development data <cit.> data. Lexical <cit.>: The sWUGGY "spot-the-word" measures the ability of the model to identify a legitimate word. A set of word/non-word pairs are selected. The measure is computed as the fraction of the pairs where the likelihood of the legitimate word is higher than non-word. Since it is an accuracy metric, a higher value is preferred. Syntactic <cit.>: The sBLIMP metric measures the ability of model to identify grammatically correct sentences. It is computed as fraction of instances where the likelihood of grammatically correct sentence is higher than an incorrect one. Thus, a higher value is preferred. Semantic: The sSIMI metric is used to assess the lexical semantics and is computed as the Spearman’s rank correlation coefficient ρ between the semantic similarity scores between given by the model and the human scores in the dataset. The phonetic task (ABX) uses the representations from the self-supervised learning (SSL) models to compute the ABX metric. For the rest of the tasks, the ZeroSpeech 2021 challenge <cit.> proposes a cascading of the self-supervised feature extractor module, k-means based quantizer and a language model. The k-means quantizer, trained on the representations from the feature extractor, is used to discretize an utterance. 
These discretized sequences are used to train a BERT-based language model <cit.>. During the inference, the likelihood score of the discretized sequence, given by the language model, is used for the spoken language modeling tasks. §.§ Hyper-parameter selection The results for the phonetic sub-task experiments on the development set of ZeroSpeech 2021 data are shown in Table <ref>, <ref>, <ref>, and <ref> for various choices of hyper-parameters λ, k, N, architecture and D respectively. Unless mentioned explicitly, the best value of a hyper parameter found in an experiment is used for all the remaining experiments. The hyper-parameter selection was done by pre-training the models on Librispeech 100h split with the ABX metric on the ZeroSpeech development set as the performance measure. The key observations from these experiments are, §.§.§ Regularization Parameter λ The regularization parameter λ controls the amount of categorical learning induced by the cluster loss and the long-term predictive power induced by the CPC loss. As shown in the experiments reported in Table <ref>, the best ABX error is achieved for a value of λ = 1e-4. This implies that a higher weight for the HUC loss is more useful for sub-word discovery task in the ZeroSpeech data. The experiments also highlight that, using the HUC loss alone (λ = 0), is only slightly inferior to the best results achieved. §.§.§ Number of clusters k The experiments with different choices for the number of clusters, k, are reported in Table <ref>. These results show that k=200 achieves the best compromise between the goals of consistency and conciseness. §.§.§ Number of pseudo-speaker clusters used in sampling N As mentioned in Sec. <ref>, the pseudo-speaker clusters are identified by clustering the utterance level means of the context vector representations, c^i. The number of clusters is obtained by the choosing the knee-point[We compute knee point using <https://kneed.readthedocs.io/en/stable/>]. For the Librispeech 100h dataset, this value is found to be M=250. Further, choosing N extreme cluster centroids for data sampling ensures diversity of the data used in the clustering. The number of cluster centroids chosen (N) is varied and the ABX results for these experiments are reported in Table <ref>. The first row reports the experiments without any data sampling and without the mean normalization step, while the second row reports the results for the same setting with the mean normalization step. The mean normalization improves the ABX metric for all the conditions of within and across speaker test settings. Further, the next three rows with the data sampling, using the mean normalized context vector representations, show that sampling only from N=30 farthest clusters gives additional performance improvements. In order to further illustrate the benefit of choosing the farthest centroids (diverse pseudo-speakers), we also experiment by sampling from N=30 nearest centroids. The ABX results show that the error is significantly higher (higher than the case without any sampling) for this experiment indicating that the choice of data selection using the most diverse pseudo-speakers is important for identifying the acoustic units of speech. We also compare the proposed sampling approach with random and Poisson sampling <cit.> of the context vectors (Table <ref>). The results show that the random and Poisson sampling do not improve the performance when compared with the system that does not involve any sampling. 
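For completeness, the knee-point selection of M mentioned above can be sketched with the kneed package referenced in the footnote; the scan range of candidate cluster counts and the number of k-means restarts are assumptions.

import numpy as np
from sklearn.cluster import KMeans
from kneed import KneeLocator

def choose_num_pseudo_speakers(utt_means, candidates=range(50, 501, 50), seed=0):
    """Pick M as the knee point of the k-means inertia curve."""
    inertias = []
    for m in candidates:
        km = KMeans(n_clusters=m, n_init=5, random_state=seed).fit(utt_means)
        inertias.append(km.inertia_)   # sum of squared distances to the nearest centroid
    knee = KneeLocator(list(candidates), inertias,
                       curve="convex", direction="decreasing")
    return knee.knee                   # reported to be around M = 250 for Librispeech 100h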
§.§.§ Number and dimension of LSTM layers The experiments comparing the network architecture of CPC-big <cit.> using 4 layers of LSTM with 512 units and 2 layers of LSTM network with 256 units are reported in Table <ref>. The results suggest that using a bigger architecture does not improve the performance. For all the subsequent experiments, we use the HUC model with 2 layers of LSTM and with 256 units. §.§.§ HUC loss with input masking In prior works <cit.>, masking the output of the 1-D convolution layer or masking the raw audio signal was found to be useful. We experiment with a masking strategy similar to HuBERT <cit.>, where 8% of the 1-D convolution encoder output with span of length 10 is masked. The results reported in Table <ref> (third row) show a significant degradation in the performance with the choice of masking. This may be due to the use of the LSTM aggregator network instead of the transformer aggregator networks used in prior works <cit.>. §.§.§ Dimensionality of the final representations D We use a a gradient boosted decision tree model<cit.> to choose the feature dimensions that are relevant for preserving the separability of the pseudo-phoneme classes. The number of dimensions is varied and these results are reported in Table <ref>. The first row reports the results without any dimensionality reduction. The best results are achieved for a choice of D=196, which corresponds to retaining 75% of the dimensions. §.§ Comparison with other benchmarks on ABX task We compare the proposed work with other previously published benchmarks on the ABX task. The results are reported for the development set and the official evaluation set (test set) in Table <ref>. The baseline systems compared are the representations derived from the CPC-Big model (used in the pre-training of the HUC representations) <cit.>, wav2vec <cit.>, wav2vec-vq <cit.> and wav2vec 2.0 <cit.>. These approaches require lower GPU budget for training (< 200h). Further, recent works published by Maekaku et. al. <cit.>, Nierkerk et. al. <cit.> are also relatively lower in GPU budget (< 200h). However, the approaches reported in Chorowski et. al. <cit.> use the speaker information in the training. It is note worthy that the proposed HUC approach is low GPU budget (150h) and does not use any speaker information. The proposed HUC approach provides consistent improvements for the both the development and evaluation setting. The model is also seen to improve over other high budget approaches, resulting in state-of-art results on most of these sub-tasks. The best results are achieved for the HUC model with mean normalization, data sampling and dimensionality reduction. In comparison with the CPC big baseline <cit.> and our prior work <cit.>, the proposed HUC model improves the ABX error relatively by 54% and 45%, respectively. §.§ Language modeling tasks The architecture of the BERT language model is the same as the BERT-small model described in <cit.>. This language model is trained on 960 hours of LibriSpeech data. For the language model based evaluation tasks, we cluster the embeddings (ĉ_t) from the trained models into discrete tokens using k-means with k=200. The BERT language model is trained on these discrete tokens. The language model is trained using fairseq tools[<https://github.com/pytorch/fairseq>]. The results on the language modeling sub-tasks are shown in Table <ref>. 
The HUC based pseudo-phoneme sequences, modeled with the BERT-big architecture, provides the best performance on most of the language modeling tasks. §.§ GPU Budget We define the GPU budget of a SSL model as the number of GPU hours required to complete 400k weight update steps on the Librispeech 960h dataset. For example if a model takes 7 days to complete 400k weight update steps while training on 8 GPUs, the GPU budget of the model would be 7 × 8 × 24 = 1344h. We have compared the GPU budget of various models in Table <ref>, where we find the HUC framework to be considerably efficient. § ROBUSTNESS OF SELF-SUPERVISED MODELS We follow the approach proposed by Gat et al. <cit.> to measure the robustness of the models in the presence of noise and semantically invariant transformations. Given a speech signal x ∈ℝ^𝕋, we apply basic transformation g : R^T ↦ R^T, such as pitch, reverberation or additive noise to obtain x^'. Both x and x^' are then fed to the SSL model f : R^T ↦ R^T^'. The resulting continuous representations are then fed to the k-means quantizer E : R^T^'↦{1....K}^T^' and the discretized sequence is de-duplicated (for example a sequence 12,12,34,34,52 is converted to 12,34,52). Here K denotes the number of discrete units. The SSL model F and the quantizer E are trained on the “clean” data. The modified Levenshtein distance <cit.> UED_𝔻 between x and x^' is the given by, UED_𝔻 = ∑_∈𝔻1/T^'LEV((E∘ f)(x),(E∘ f∘ g)(x)) Here, 𝔻 is evaluation data and LEV is Levenshtein distance <cit.>. All the SSL models are pre-trained using Librispeech 960h dataset, while the k-means is trained on the representations using the Librispeech 100h subset. The transformations applied are pitch perturbation, uniformly sampled between scales of -300 to 300, reverberation by uniformly sampling room responses with scale between 0 to 100, and additive noise, sampled from MUSAN dataset <cit.> with SNR between 5 to 15 dB. The transformations are implemented using WavAugment[<https://github.com/facebookresearch/WavAugment>]. The results averaged over 5 runs are reported in Table <ref>. The results show that the representations from the proposed HUC model elicit significant robustness to noise and other perturbations. § LOW RESOURCE SPEECH RECOGNITION §.§ TIMIT phoneme recognition §.§.§ Data TIMIT <cit.> corpus consists of 5 hours of 16 kHz sampled English read speech. The manually transcribed time-aligned phonetic and word level transcriptions for each utterance are also available. It contains recordings from 630 speakers belonging to 8 different dialects of American English. The training set consists of 3696 utterances spoken by 462 speakers. The development set consists of 400 utterances spoken by 50 speakers. The test split contains 192 utterances from 24 speakers. §.§.§ Experimental set up In the TIMIT dataset, the phoneme recognition task is designed using an encoder decoder attention based neural network. This model is motivated by the hybrid connectionist temporal cost (CTC) attention network proposed for ASR by Watanabe et. al. <cit.>. The encoder is a 4-layer recurrent neural network (RNN) with a hidden dimension of 320 and the dropout is set at 0.2. The decoder is an attention decoder containing 1-layer of LSTM units with a hidden dimension of 320. The pre-training for all the representation learning based features is performed on the Librispeech 960 hour dataset. 
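Before moving on to the recognition results, the robustness measure introduced in the previous section can be made concrete. The sketch below de-duplicates the discrete unit sequences and computes a normalized edit distance; the Levenshtein routine is written out explicitly so that no external package is assumed, and normalizing by the de-duplicated clean-sequence length is an assumption about the T' term in the equation.

def deduplicate(units):
    """Collapse consecutive repeats, e.g. [12, 12, 34, 34, 52] -> [12, 34, 52]."""
    out = []
    for u in units:
        if not out or u != out[-1]:
            out.append(u)
    return out

def levenshtein(a, b):
    """Standard edit distance between two label sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def unit_edit_distance(clean_units, perturbed_units):
    """UED for one utterance: normalized edit distance between the de-duplicated
    discrete unit sequences of the clean and the transformed signal."""
    a = deduplicate(clean_units)
    b = deduplicate(perturbed_units)
    return levenshtein(a, b) / max(len(a), 1)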
For the wav2vec models we have used the model released by fairseq[<https://github.com/facebookresearch/fairseq/blob/main/examples/wav2vec/README.md>]. Further, the representation learning front-ends are fine-tuned for the task of phoneme recognition. §.§.§ Results The phoneme recognition results in terms of phoneme error rate (PER) are reported in Table  <ref>. Among the various benchmarks compared, the wav2vec model gives the best PER. The proposed HUC framework improves significantly over the prior works in terms of PER. The model improves the PER relatively by 9.2% and 4.6% over the baseline front-ends like the CPC-big and the wav2vec. Further, the results also highlight the incremental improvements in the PER results with mean normalization and data sampling approaches proposed in this work for the HUC model, which are consistent with the ABX improvements seen on the ZeroSpeech challenge task. §.§ ASR experiments §.§.§ Data The GramVaani Hindi ASR challenge dataset[<https://sites.google.com/view/gramvaaniasrchallenge/dataset>] comprises of 1100 hours of telephone quality speech data in Hindi language. In the Track-II of the evaluation, the task entails 1000 hours of unlabelled audio and 100 hours of transcribed speech for training the ASR system. The data set also contains subsets of 5 hours of development and 3 hours of evaluation, which are manually transcribed. The audio is recorded over the telephone channel in varying sampling rates of 8kHz, 16kHz, 32kHz, 44kHz and 48kHz. In our experiments, we down sample the entire dataset to 8kHz. §.§.§ Experimental setup The training objective, based on hybrid CTC attention, is similar to the phoneme recognition setup on the TIMIT dataset. The acoustic model encoder contains 12 layers of conformer architecture <cit.>. Further, 8 layers of transformer architecture are used in the decoder. Both the encoder and decoder layers have the hidden dimension set to 2048 with the number of attention heads set to 8. The CNN kernel size in the conformer is set to 15. The number of byte pair encoding (BPE) is set to 1000. The model is trained for 20 epochs with early stopping. For the semi-supervised training on 1000 hours of unlabelled data, we pre-train ASR models with 100 hours of supervised data. Two separate ASR systems are built using hybrid ASR architecture and E2E ASR architecture. Further, the decoded transcripts from both the systems on the 1000 hours of unlabeled data are compared and a subset of 200 hours is selected where the agreement between the hybrid ASR and E2E ASR systems are the highest. More details about the pseudo-label generation and modeling are given in <cit.>. The final ASR system is trained using 100 hours of fully transcribed data along with 200 hours of pseudo-labels. All the experiments use RNN language model trained with the text from the 100 hours of transcripts. §.§.§ Results The ASR results, in terms of WER, are reported in Table <ref>. The first row shows the baseline result using only the 100 hours of supervised audio, while the second row shows the ASR results for the case when 200 hours of unsupervised audio with the pseudo labels are added to the training of the ASR models. The baseline ASR system trained with 100 hours was used to generate the pseudo-labels for the remainder of the 1000 hours of unlabeled speech, and a selection criterion was used to filter 200 hours of reliable transcriptions for the self-training. The semi-supervised training provides a minor improvement to the ASR results. 
Hence, this setting is used for the rest of the experiments. The pre-trained representation learning frameworks (CPC, wav2vec and HUC models) are trained using the 1100 hours (supervised + unsupervised pool) of Hindi audio data. The representation learning models improve significantly over the mel filterbank baseline features. The best baseline SSL model improves relatively over the mel filterbank features by 7.3% and 6.8% on the development and test sets respectively. The HUC model experiments show that, with k=200 clusters improves over other choice of k=50/400 clusters. Further, the data sampling and the mean normalization techniques subsequently improve the ASR performance. The final HUC model, which combines mean normalization and data sampling, improves significantly over the baseline mel filterbank and wav2vec features. The model achieves relative improvements of 9.9% and 8.6% over the wav2vec features on the development and test sets respectively. The Hindi ASR results also highlight the consistent trends seen in the experiments reported in the previous sections. §.§ Statistical Significance Test To compare the performance of the HUC model relative to other systems in a statistical sense, we use bootstrap estimate of confidence intervals <cit.>. Table <ref> shows the analysis of the proposed approach against various baseline systems. The probability of improvement (POI) for proposed system is noticeably high (above 86%) for all the system comparisons. § NON-SEMANTIC EVALUATIONS We investigate the effectiveness of HUC for emotion recognition, speaker and language identification tasks. The models are pre-trained on the Librispeech 960h <cit.> dataset. For all the tasks, we average the representations at utterance level and the SSL models are not fine-tuned. A linear SVM <cit.> is used on the pooled representations for downstream tasks. The tasks explored here are our implementations of the sub-tasks involved in the non-semantic speech (NOSS) benchmark <cit.>. §.§ Experimental setup For speaker identification tasks, the VoxCeleb-1 <cit.> dataset consisting of 1251 speakers is used. The accuracy score on the test split of the dataset is used as evaluation metric. We use VoxForge <cit.> dataset, consisting of 6 languages (English, Russian, Spanish, German, French and Italian), for language identification tasks. The CREMA-D <cit.> dataset is used for emotion recognition tasks. The 5 fold cross validation score was used to evaluate all the systems. §.§ Results The results are given in Table <ref>. We see that there is significant drop in the performance of our proposed model across all the non-semantic tasks. This could be due to the mean normalization and the data sampling that renders the representations invariant to speaker, language and emotion. The HUC model, without the mean normalization (second row), shows significant improvements over the proposed model on these tasks, indicating that the mean contains significant non-semantic information. § DISCUSSION The analysis reported in Fig. <ref> showed that the utterance level mean of the embeddings captured speaker level information. In order to further quantify the speaker invariance and the phoneme discovery properties of the representations, we design a set of experiments on the Librispeech 100h dataset. This dataset consists of 251 speakers. 
The phonetic alignments for these experiments come from pre-trained Kaldi ASR model which is operated in a forced alignment mode to generate phoneme-level transcripts at every frame (sampled every 0.01s) for the 100 hours of data. The pre-trained CPC model used for the experiments employs within utterance (within speaker) negative sampling scheme. We train a simple linear classifier for classifying the speakers/phonemes at frame-level with the acoustic representations (fbank/CPC/HUC). Further, the mean of the utterance level representations are clustered into 251 clusters. The speaker cluster purity is measured as the ratio of the number of samples in a given cluster belonging to the most dominant speaker to the total number of samples in the cluster. The phoneme/speaker classification accuracy as well as the speaker cluster purity are reported in Table <ref>. As seen in the Table, the CPC representations have an improved phoneme and speaker classification accuracy compared to the fbank-pitch features with the simple linear classifier. The speaker cluster purity is also high, indicating that the CPC representations encode both the phoneme and speaker information. This is not desired as the phonetic and spoken language modeling tasks require speaker invariant speech representations. The HUC model, with mean normalization, improves the phoneme accuracy over the CPC baseline features, while also reducing the speaker classification accuracy. The speaker cluster purity is seen to be lower for the HUC representations. With the data sampling procedure in the HUC framework, the phoneme classification is improved while the speaker classification is further degraded. The speaker cluster purity measure also reduces, highlighting that the representations from the proposed HUC model capture phoneme level units that are speaker invariant. The key benefits and limitations of the proposed HUC framework are summarized below, Benefits: * The ability to learn semantically rich representations of speech in a self-supervised setting. * Enable the formation of robust representations that are less susceptible to speech transformations. * Having a lower computational budget compared to other popular SSL approaches. * Significantly improved performance on a battery of downstream semantic tasks on ZeroSpeech, phoneme recognition and ASR settings. Limitations: * The major limitation of the model is that the representations, while encoding the semantic aspects, compromise on encoding the non-semantic aspects of the speech signal. * The other limitation includes the need for stage-wise pre-training of the model. These steps include mean normalization, data sampling for k-means and HUC training. § SUMMARY The paper presents a hidden unit clustering approach for representation learning of speech from raw audio data. The model forces the representations to be more categorical in a pseudo-phoneme space. We propose techniques for improving the speaker invariance, consistency and conciseness of the model representations using mean normalization and data sampling. The quality of the representations are evaluated using a series of phonetic and linguistic evaluations on the ZeroSpeech challenge sub-tasks. In these experiments, we also establish new state-of-art results with relatively lower computational budget. The model architecture with the same hyper-parameters are also used in ASR experiments on the TIMIT and GramVaani datasets. 
The low resource ASR experiments further highlight the benefits of the proposed representation learning framework. The ASR results illustrate improvements over benchmarks on self-supervised representation learning.
http://arxiv.org/abs/2307.06117v1
20230708094335
A qubit regularization of asymptotic freedom at the BKT transition without fine-tuning
[ "Sandip Maiti", "Debasish Banerjee", "Shailesh Chandrasekharan", "Marina K. Marinkovic" ]
hep-lat
[ "hep-lat", "cond-mat.str-el", "hep-th", "quant-ph" ]
[email protected] Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700064, India Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India [email protected] Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700064, India Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India [email protected] Department of Physics, Box 90305, Duke University, Durham, North Carolina 27708, USA [email protected] Institut für Theoretische Physik, Wolfgang-Pauli-Straße 27, ETH Zürich, 8093 Zürich, Switzerland We propose a two-dimensional hard core loop-gas model as a way to regularize the asymptotically free massive continuum quantum field theory that emerges at the BKT transition. Without fine-tuning, our model can reproduce the universal step-scaling function of the classical lattice XY model in the massive phase as we approach the phase transition. This is achieved by lowering the fugacity of Fock-vacuum sites in the loop-gas configuration space to zero in the thermodynamic limit. Some of the universal quantities at the BKT transition show smaller finite size effects in our model as compared to the traditional XY model. Our model is a prime example of qubit regularization of an asymptotically free massive quantum field theory in Euclidean space-time and helps understand how asymptotic freedom can arise as a relevant perturbation at a decoupled fixed point without fine-tuning. A qubit regularization of asymptotic freedom at the BKT transition without fine-tuning Marina K. Marinkovic 0000-0002-9883-7866 August 12, 2023 ====================================================================================== The success of the Standard Model of particle physics shows that at a fundamental level, nature is well described by a continuum QFT. Understanding QFT non-perturbatively continues to be an exciting area of research, since defining them in a mathematically unambiguous way can be challenging. Most definitions require some form of short-distance (UV) regularization, which ultimately needs to be removed. Wilson has argued that continuum QFT arise near fixed points of renormalization group flows <cit.>. This has led to the concept of universality, which says that different regularization schemes can lead to the same QFT. Following Wilson, traditional continuum quantum field theories are usually regulated non-perturbatively on a space-time lattice by replacing the continuum quantum fields by lattice quantum fields and constructing a lattice Hamiltonian with a quantum critical point where the long distance lattice physics can be argued to be the desired continuum QFT. However, universality suggests that there is a lot of freedom in choosing the microscopic lattice model to study a particular QFT of interest. Motivated by this freedom and to study continuum quantum field theories in real time using a quantum computer, the idea of qubit regularization has gained popularity recently <cit.>. Unlike traditional lattice regularization, qubit regularization explores lattice models with a strictly finite local Hilbert space to reproduce the continuum QFT of interest. Euclidean qubit regularization can be viewed as constructing a Euclidean lattice field theory with a discrete and finite local configuration space, that reproduces the continuum Euclidean QFT of interest at a critical point. 
If the target continuum theory is relativistic, it would be natural to explore Euclidean qubit regularized models that are also symmetric under space-time rotations. However, this is not necessary, since such symmetries can emerge at the appropriate critical point. Lattice models with a finite dimensional Hilbert space that can reproduce continuum QFT of interest were introduced several years ago through the D-theory formalism <cit.> and has been proposed for quantum simulations <cit.>. In contrast to qubit regularization, the D-theory approach allows the local Hilbert space to grow through an additional dimension when necessary. In this sense, qubit regularization can be viewed as the D-theory approach for those QFT where a strictly finite Hilbert space is sufficient to reproduce the desired QFT. Examples of using qubit regularization to reproduce continuum QFT in the IR are well known. Quantum spin models with a finite local Hilbert space are known to reproduce the physics of classical spin models with an infinite local Hilbert space near Wilson-Fisher fixed points <cit.>. They can also reproduce QFT with topological terms like the Wess-Zumino-Witten theories <cit.>. Gauge fields have been proposed to emerge dynamically at some quantum critical points of simple quantum spin systems <cit.>. From the perspective of Euclidean qubit regularization, recently it was shown that Wilson-Fisher fixed points with O(N) symmetries can be recovered using simple qubit regularized space-time loop models with N+1 degrees of freedom per lattice site <cit.>. Similar loop models have also been shown to produce other interesting critical behavior <cit.>. Loop models are extensions of dimer models, which are also known to describe interesting critical phenomena in the IR <cit.>. All this evidence shows that Euclidean qubit regularization is a natural way to recover continuum QFT that emerge via IR fixed points of lattice models. A non-trivial question is whether we can also recover the physics of ultraviolet fixed points (UV-FPs), using qubit regularization. In particular, can we recover massive continuum QFT which are free in the UV but contain a marginally relevant coupling? Examples of such AF theories include two-dimensional spin models and four dimensional non-Abelian gauge theories. In the D-theory approach, there is strong evidence that the physics at the UV scale can indeed be recovered exponentially quickly as one increases the extent of the additional dimension <cit.>. Can the Gaussian nature of the UV theory emerge from just a few discrete and finite local lattice degrees of freedom, while the same theory then goes on to reproduce the massive physics in the IR? For this we will need a special type of quantum criticality where three length scales, as sketched in <ref>, emerge. There is a short lattice length scale a, where the non-universal physics depends on the details of the qubit regularization, followed by an intermediate length scale ≫ a, where the continuum UV physics sets in and the required Gaussian theory emerges. Finally, at long length scales ≫, the non-perturbative massive continuum quantum field theory emerges due to the presence of a marginally relevant coupling in the UV theory. The qubit regularized theory thus reproduces the universal continuum QFT in the whole region ℓ_ UV to ℓ_ IR. The special quantum critical point must be such that ℓ_ UV/a →∞. 
Recently, a quantum critical point with these features was discovered in an attempt to find a qubit regularization of the asymptotically free massive non-linear O(3) sigma model in two space-time dimensions in the Hamiltonian formulation <cit.>. Using finite size scaling techniques, it was shown that the qubit regularized model recovers all the three scales. In this paper, we report the discovery of yet another example of a quantum critical point with similar features. In the current case, it is a Euclidean qubit regularization of the asymptotically free massive continuum quantum field theory that arises as one approaches the BKT transition from the massive phase <cit.>. In both these examples, the qubit regularized model is constructed using two decoupled theories and the AF-QFT emerges as a relevant perturbation at a decoupled quantum critical point. The coupling between the theories plays the role of the perturbation that creates the three scales, as illustrated in the RG flow shown in <ref>. An interesting feature of this discovery is that there is no need for fine-tuning to observe some of the universal features of the BKT transition that have been unattainable in practice with other traditional regularizations <cit.>. The BKT transition is one of the most widely studied classical phase transitions, since it plays an important role in understanding the finite temperature superfluid phase transition of two-dimensional systems <cit.>. One simple lattice model that captures the universal behavior of the physics close to the phase transition is the classical two-dimensional XY model on a square lattice given by the classical action, S = -β∑_⟨ ij⟩cos(θ_i-θ_j), where the lattice field 0≤θ_i < 2π is an angle associated to every space-time lattice site i and ⟨ ij⟩ refers to the nearest neighbor bonds with sites i and j. The lattice field naturally lives in an infinite dimensional Hilbert space of the corresponding one dimensional quantum model. Using high precision Monte Carlo calculations, the BKT transition has been determined to occur at the fine-tuned coupling of β_c ≈ 1.1199(1) <cit.>. The Villain model is another lattice model which is friendlier for analytic calculations and has been used to uncover the role of topological defects in driving the phase transition <cit.>. More recently, topological lattice actions which seem to suppress vortices and anti-vortices but still drive the BKT transition have also been explored <cit.>. As one approaches the BKT transition from the massive phase, the long distance physics of the <ref> is known to be captured by the sine-Gordon model whose Euclidean action is given by<cit.>, S = ∫ dx dt [ 1/2t (∂_μθ_1)^2 + t/8π^2 (∂_μθ_2)^2 - A t/4π^2cosθ_2 ] where t ≥π/2. The field θ_1(x,t) captures the spin-wave physics while the vortex dynamics is captured by the field θ_2(x,t). The BKT transition in this field theory language occurs at t = π/2 where the cosθ_2 term becomes marginal as one approaches the critical point and the physics is governed by a free Gaussian theory. In this sense, the long distance physics of the lattice XY model, as β is tuned to β_c from smaller values, is an asymptotically free massive Euclidean continuum QFT. Qubit regularizations of the classical XY-model have been explored recently using various quantum spin formulations <cit.>. 
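For orientation, the classical lattice action quoted above can be simulated with a few lines of Metropolis updating. This toy sketch is only meant to make the XY action concrete; the high-precision determination of β_c cited above and the results of this work rely on worm-type algorithms, not on this simple local update, and the lattice size, proposal width and sweep count below are arbitrary choices.

import numpy as np

def xy_metropolis_sweep(theta, beta, rng):
    """One Metropolis sweep of the 2D XY model with S = -beta * sum_<ij> cos(theta_i - theta_j)
    on an L x L periodic lattice; theta is an (L, L) array of angles in [0, 2*pi)."""
    L = theta.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        old = theta[i, j]
        new = (old + rng.uniform(-np.pi / 2, np.pi / 2)) % (2 * np.pi)
        nbrs = [theta[(i + 1) % L, j], theta[(i - 1) % L, j],
                theta[i, (j + 1) % L], theta[i, (j - 1) % L]]
        dS = -beta * sum(np.cos(new - n) - np.cos(old - n) for n in nbrs)
        if dS <= 0 or rng.random() < np.exp(-dS):
            theta[i, j] = new
    return theta

# Example: thermalize a 32 x 32 lattice near the quoted beta_c ~ 1.1199.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, size=(32, 32))
for sweep in range(200):
    xy_metropolis_sweep(theta, beta=1.1199, rng=rng)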
Lattice models based on the spin-1 Hilbert space are known to contain rich phase diagrams <cit.>, and quantum field theories that arise at some of the critical points can be different from those that arise at the BKT transition. Also, the presence of a marginally relevant operator at the BKT transition can make the analysis difficult, especially if the location of the critical point is not known. In those cases it becomes a fitting parameter in the analysis, increasing the difficulty. Since in our model the location of the critical point is known, our model can be analyzed more easily. The model we consider in this work is a variant of the qubit regularized XY model introduced in Euclidean space recently <cit.>. The model can be viewed as a certain limiting case of the classical lattice XY-model <ref> written in the world-line representation <cit.>, where the bosons are assumed to be hard-core. The partition function of our model is a sum of weights associated with configurations of oriented self-avoiding loops on a square lattice with Fock-vacuum sites. An illustration of the loop configuration is shown as the left figure in <ref>. The main difference between our model in this work and the one introduced previously is that closed loops on a single bond are now allowed. Such loops seemed unnatural in the Hamiltonian framework that motivated the previous study, but they lead to profoundly different features in two dimensions <cit.>. It is also possible to view the loop configurations of our model as configurations of close-packed oriented dimers on two layers of square lattices. The dimer configuration corresponding to the loop configuration is shown on the right in <ref>. The dimer picture of the partition function arises as a limiting case of a model involving two flavors of staggered fermions, introduced to study the physics of symmetric mass generation <cit.>. From this viewpoint the inter-layer dimers (or Fock vacuum sites) resemble 't Hooft vertices (or instantons) in the fermionic theory. Using this connection, the partition function of our model can be compactly written as the Grassmann integral
Z = ∫ [dψ̅ dψ] [dχ̅ dχ] exp(λ ∑_i ψ̅_i ψ_i χ̅_i χ_i) × exp( ∑_⟨ij⟩ ( ψ̅_i ψ_i ψ̅_j ψ_j + χ̅_i χ_i χ̅_j χ_j ) ),
where on each site i of the square lattice we define four Grassmann variables ψ̅_i, ψ_i, χ̅_i and χ_i. We consider periodic lattices with L sites in each direction. Using the fermion bag approach <cit.>, we can integrate the Grassmann variables and write the partition function as a sum over dimer configurations whose weight is given by λ^{N_I}, where N_I is the number of instantons (or Fock-vacuum sites). Thus, λ plays the role of the fugacity of Fock-vacuum sites. It is easy to verify that the action of our model is invariant under ψ̅_j ψ_j → e^{iσ_j θ} ψ̅_j ψ_j and χ̅_j χ_j → e^{-iσ_j θ} χ̅_j χ_j, where σ_j = ± tracks the parity of the site j. This U(1) symmetry is connected to the BKT transition and, in order to track it, the dimers are given an orientation as explained in <ref>. Using worm algorithms (see <cit.>) we study our model for various values of L and λ. At λ = 0, one gets two decoupled layers of close-packed dimer models, which are known to be critical <cit.>. The effect of λ ≠ 0 was studied several years ago, and it was recognized that there is a massive phase for sufficiently large values of λ <cit.>. However, the scaling of quantities as λ → 0 was not carefully explored. Recently, the subject was reconsidered, and a crossover phenomenon was observed for small λ as a function of L.
An understanding of this crossover was largely left unresolved as a puzzle <cit.>. In this paper, we demonstrate that the observed crossover phenomenon captures the asymptotic freedom of <ref>. We do this by comparing the universal behavior of <ref> with that of the traditional XY model <ref> near the massive phase of the BKT transition <cit.>. To compare the universal behaviors of <ref> and <ref> we compute the second moment finite size correlation length ξ(L), defined as ξ(L) = √(χ/F - 1)/(2 sin(π/L)) (see <cit.>), where χ = G(0) and F = G(2π/L) are defined through the two point correlation function G(p) = ∑_j e^{ipx} ⟨ O^+_{(x,t)} O^-_{(0,0)} ⟩. In the above relation j is the space-time lattice site with coordinates (x,t) and O^+_j, O^-_j are appropriate lattice fields in the two models. In the XY model O^+_j = e^{iθ_j}, O^-_j = e^{-iθ_j}, while in the dimer model O^+_j = O^-_j = ψ̅_j ψ_j. We demonstrate in <ref> that the step-scaling function (SSF) (i.e., the dependence of ξ(2L)/ξ(L) on ξ(L)/L) of the two lattice models shows excellent agreement in the scaling regime ℓ_UV ≫ a. Another interesting universal result at the BKT transition is the value of the helicity modulus, which can be defined using the relation Υ = ⟨ Q_w^2 ⟩, where Q_w is the spatial winding number of bosonic worldlines. In the XY model <ref>, it is usually defined through the susceptibility with respect to a twist in the boundary conditions <cit.>. In our model, we can easily compute the winding charge Q_w in each loop configuration illustrated in <ref>. The universal result in the massive phase as we approach the BKT transition is that Υ ≈ 2/π in the UV up to exponentially small corrections <cit.>, although in the IR Υ = 0. While it is difficult to obtain the UV value in lattice calculations using the traditional model <ref>, in our model we can see it emerge nicely at λ=0.01. We demonstrate this in <ref>. Again, as expected, the value of Υ when λ=0 is very different, since it is a theory of free bosons but at a different coupling. Using the different value of the coupling gives Υ ≈ 0.606 <cit.>. Our results provide strong evidence that the AF-QFT at the BKT transition emerges from our dimer model when we take the limit L→∞ followed by λ→ 0. The opposite limit leads to the critical theory of the decoupled dimer model.

Acknowledgments: We are grateful to J. Pinto Barros, S. Bhattacharjee, T. Bhattacharya, H. Liu, A. Sen, H. Singh and U.-J. Wiese for inspiring discussions. We acknowledge use of the computing clusters at SINP, and the access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland under the ETHZ's share with the project IDs go24 and eth8. Support from the Google Research Scholar Award in Quantum Computing and the Quantum Center at ETH Zurich is gratefully acknowledged. S.C.'s contribution to this work is based on work supported by the U.S. Department of Energy, Office of Science, High Energy Physics Contract KA2401032 (Triad National Security, LLC Contract Grant No. 89233218CNA000001) to Los Alamos National Laboratory. S.C. is supported by a Duke subcontract based on this grant. S.C.'s work is also supported in part by the U.S. Department of Energy, Office of Science, Nuclear Physics program under Award No. DE-FG02-05ER41368.

Supplementary Material

§ UNIVERSAL VALUES OF Υ FOR λ = 0 AND λ ≠ 0

In this section we explain the two different values of the helicity modulus Υ for our model when λ=0 and λ→ 0. When λ=0 our model maps into two identical but decoupled layers of close-packed classical dimer models.
As has already been explained in the literature (see for example <cit.>), each layer can be mapped to the theory of a free compact scalar field with the action
S = (1/2t) ∫ d^2x (∂_μθ(x))^2,
with t=4π. One can compute Υ starting with <ref>, by noting that the scalar fields have winding number configurations labeled by n_x: θ(x) = 2π x n_x/L_x + φ(x), where φ(x) is a smooth fluctuation that is independent of the winding number n_x. The value of the action in each winding sector in a finite space-time volume is then given by S(n_x) = (2π^2 n_x^2/t)(L_y/L_x) + S_0, where S_0 is the action from the usual fluctuations in the zero winding number sector. Using L_x = L_y, we can compute Υ using its connection to the average of the square of the winding numbers,
Υ = ⟨ (Q_x)^2 ⟩ = ∑_{n_x} n_x^2 e^{-2π^2 n_x^2/t} / ∑_{n_x} e^{-2π^2 n_x^2/t}.
Numerically evaluating this expression for t=4π we obtain Υ = 0.303426... for each layer of our dimer model. Our value of 0.606852... is due to the presence of two decoupled layers. In contrast, in the limit λ→ 0, we need to consider the physics at the BKT transition and so we begin with the action
S = ∫ d^2x [ (1/2t̃) (∂_μθ_1)^2 + (t̃/8π^2) (∂_μθ_2)^2 - (A t̃/4π^2) cosθ_2 ]
and focus on t̃=π/2. At this coupling the last term is irrelevant and Υ gets its dominant contribution from the θ_2 field. In this case we can still use <ref>, but we need to substitute t = 4π^2/t̃ = 8π. Substituting, we get Υ = 0.636508..., which is approximately 2/π.

§ WORM ALGORITHM

In this section, we discuss the worm algorithm we use to simulate the model with the partition function
Z = ∫ [dψ̅ dψ] [dχ̅ dχ] exp(λ ∑_i ψ̅_i ψ_i χ̅_i χ_i) × exp( ∑_⟨ij⟩ ( ψ̅_i ψ_i ψ̅_j ψ_j + χ̅_i χ_i χ̅_j χ_j ) ),
as introduced in the main paper. These algorithms are well known <cit.>, and can be divided into three parts: Begin, Move, and End.
* Begin: pick a site at random and denote it as tail; there are the following two possibilities: (A) either it has a bond connected to it on the other layer (which we call an instanton, or an inter-layer dimer), or (B) it has a bond connected to it on the same layer (which we call a dimer).
* For case (A), propose to remove the instanton, and put the worm head on the same site on the other layer, with a probability 1/λ. If accepted, then begin the worm update, otherwise go to (1).
* For case (B), pick the other site to which the dimer is connected as the head, and begin the worm update.
* Move: Propose to move the worm head to one of the (2D+1) neighbor sites of the head with equal probability, which can either be on the same layer (2D choices) or on the other layer (one choice). Denote the proposed new site as site0; the following possibilities can occur, provided that site0 is not the tail:
* site0 is on the same layer and has an instanton connected to it. Propose to remove the instanton with a probability 1/λ. If accepted, place the head at site0, but on the other layer.
* site0 is on the same layer and has a dimer connected to it (joining site0 and y). Move the head to the site y with probability 1, and simultaneously insert a dimer between the head and site0.
* site0 is on the other layer: propose to create an instanton there. If accepted, move the head to y on that layer, where y is the other end of the dimer that was attached to site0.
* End: If at any stage in the algorithm site0 is the tail, then propose to end the worm update. If site0 = tail is on the same layer, then end the update by putting a dimer between the head and tail with probability 1.
If, on the other hand, they are on different layers, the worm update ends with a probability λ, leading to the addition of an extra instanton.

§ EXACT VS MONTE CARLO RESULTS ON A 2 × 2 LATTICE

In this work, we compute two independent fermion bilinear susceptibilities defined as
χ_1 = (1/2V) ∑_{i≠j} ⟨ψ̅_i ψ_i ψ̅_j ψ_j⟩, χ_2 = (1/2V) ∑_{i≠j} ⟨ψ̅_i ψ_i χ̅_j χ_j⟩,
where χ_1 is an observable that can be defined even on a single layer, while χ_2 involves both layers. When the coupling λ = 0, the two layers are completely decoupled from each other and we get χ_2 = 0. Another quantity we compute is the average density of Fock vacuum sites or inter-layer dimers (which we also view as instantons), defined as ρ = (1/V) ∑_i ⟨ψ̅_i ψ_i χ̅_i χ_i⟩, where the expectation value is defined as ⟨O⟩ = (1/Z) ∫ [𝒟ψ̅ 𝒟ψ] [𝒟χ̅ 𝒟χ] O e^{-S[ψ̅,ψ,χ̅,χ]}. Since every site is populated by either a Fock-vacuum site or an intra-layer dimer, the average intra-layer dimer density is not an independent observable. We can always compute it from the Fock vacuum site (instanton) density ρ. In order to test our algorithm, we focus on exact results on a 2×2 lattice. The partition function in this simple case is given by Z = 64 + 16λ^2 + λ^4, while the instanton density and the two independent susceptibilities are given by ρ = (32λ^2 + 4λ^4)/(4Z), χ_1 = (32 + 4λ^2)/(2Z), χ_2 = 8λ/(2Z). Note that ρ is zero when λ = 0 and approaches one for large couplings. Also, as expected, χ_2 = 0 when λ = 0. In <ref> we compare results for three different observables, the instanton density (ρ), the fermion bilinear susceptibility (χ_1), and the helicity modulus (Υ), on a 2×2 lattice obtained from an exact calculation against the results obtained using the worm algorithm. Interestingly, when λ ≠ 0 we find that both χ_1 and χ_2 become similar as L increases. The difference also becomes smaller as λ increases. We show this behavior in <ref>. Due to this similarity we only focus on χ_1 in our work.

§ PLOTS OF ρ AND χ_1

We have computed the fermionic XY model at various values of λ on square lattices up to L = 4000 using the worm algorithm described above. For our simulations, after allowing for appropriate thermalization, we have recorded between 8 × 10^3 and 48 × 10^3 measurements, each averaged over 2000 worm updates. A comparable number of measurements were also made for the bosonic model. In <ref>, we plot ρ for various lattice sizes at different values of λ on the left. We note that ρ increases monotonically and approaches the thermodynamic limit by L=160, which is shown on the right. In <ref>, we plot χ_1 as a function of the system size L for different values of λ. When λ is small, we find that our data is consistent with the behavior χ_1 ∼ A L^{2-η} expected in a critical phase. However, for larger values of λ, the susceptibility begins to saturate as χ_1 ∼ A, which means η ≈ 2. For λ=0, since the model describes two decoupled layers of close-packed dimer models, we expect η=0.5 <cit.>. However, when λ is small, since we expect our model to describe the physics at the BKT transition, we expect η ∼ 0.25. This is consistent with our findings. The values of the constants A and η for various values of λ obtained from a fit are given in <ref>.

§ STEP SCALING FUNCTION

In order to argue that the traditional XY model at the BKT transition and the two-layer interacting dimer model are equivalent, we compute the step scaling function (SSF) in both of them.
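As an aside, the exact 2×2 results quoted earlier in this supplementary material can be checked with a short brute-force enumeration; the sketch below is illustrative and not part of the original calculation. It sums over instanton placements and counts close-packed dimer coverings of the remaining sites on each of the two layers of the 2×2 torus (where each nearest-neighbour pair of sites is connected by two distinct bonds), reproducing Z = 64 + 16λ^2 + λ^4.

from itertools import combinations

# Sites of the 2x2 torus, labeled 0..3 via (x, y) -> 2*y + x.  With periodic
# boundaries every nearest-neighbour pair is connected by two distinct links.
L = 2
idx = {(x, y): 2 * y + x for y in range(L) for x in range(L)}
bonds = []
for (x, y), i in idx.items():
    bonds.append((i, idx[((x + 1) % L, y)]))   # +x link
    bonds.append((i, idx[(x, (y + 1) % L)]))   # +y link

def coverings(free, bonds):
    """Number of close-packed dimer coverings of the site set `free`."""
    if not free:
        return 1
    i = min(free)
    total = 0
    for a, b in bonds:
        if a == i and b in free:
            total += coverings(free - {a, b}, bonds)
        elif b == i and a in free:
            total += coverings(free - {a, b}, bonds)
    return total

# Z = sum over instanton sets I of lambda^|I| * M(V \ I)^2, with one
# dimer-covering factor M for each of the two layers.
V = frozenset(range(L * L))
coeff = [0] * (len(V) + 1)
for k in range(len(V) + 1):
    for I in combinations(sorted(V), k):
        m = coverings(V - set(I), bonds)
        coeff[k] += m * m
print(coeff)   # -> [64, 0, 16, 0, 1], i.e. Z = 64 + 16*lam^2 + lam^4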
We refer to the traditional XY model defined through the lattice action S = -β∑_⟨ij⟩ cos(θ_i-θ_j) as the bosonic XY model and the dimer model defined in <ref> as the fermionic XY model. In order to compute the step-scaling function we first compute the second moment correlation length defined in a finite box of size L using the expression ξ(L) = √(χ/F - 1)/(2 sin(π/L)), where χ = ∑_i ⟨ O^+_i O^-_0 ⟩ and F = ∑_i ⟨ O^+_i O^-_0 ⟩ cos(2π x/L), with i=(x,t) the space-time lattice site and O^+_i, O^-_i the lattice fields in the two lattice models. In the bosonic XY model, O^+_i = e^{iθ_i} and O^-_i = e^{-iθ_i}, while in the fermionic model O^+_i = O^-_i = ψ̅_i ψ_i. The SSF for the bosonic XY model is computed in the massive phase close to the critical point, for β < β_c = 1.1199 <cit.>. To study the step scaling function, we prepare several pairs of data at (β, L) and (β, 2L), and compute both ξ(2L)/ξ(L) and ξ(L)/L using the data presented in <ref>. We follow certain criteria, as explained in <cit.>, to ensure the minimization of finite volume and finite lattice spacing errors. In particular, we only choose lattices of sizes L ≥ L_min, where L_min = 80 for couplings β ≥ 0.92. Since the correlation length increases for β close to β_c, larger lattice sizes are essential. The similar criterion for choosing the lattice sizes and couplings in the fermionic model is L ≥ L_min, where L_min = 80 for 0.62 ≤ λ ≤ 0.9, and L_min = 640 for λ < 0.6. In order to compute the expectation value and error of ξ(L)/L, we use the jackknife analysis. We report the results here for the analysis with 40 jackknife blocks. Varying the number of jackknife blocks did not change the errors significantly, and the errors were consistent with those obtained using a bootstrap analysis. In <ref>, we show an example of the variation of the average and error of ξ(L)/L at λ=0.353 and L=320 for the fermionic model using both the jackknife and the bootstrap analysis as a function of block size. For both methods, we use the same block sizes, but in order to show the distinction between them, we have displaced the data on the x-axis by multiplying nBlock by a factor of 1.1 for the bootstrap analysis. In order to compare the SSF between the bosonic and the fermionic models we tried to parameterize the function in two different ways. In the first approach, we follow the idea discussed in <cit.>, where it was proposed that
Σ(x) = 1 + a_1 e^{-1/x} + a_2 e^{-2/x} + a_3 e^{-3/x} + a_4 e^{-4/x},
where x = ξ(L)/L and Σ = ξ(2L)/ξ(L). The behavior of this function is such that, as x → 0, the function Σ(x) approaches 1. While this function is strictly valid only for small x, we find that this form fits our data well. The fit results are given in <ref>. We see that while we get good fits by including all four fit parameters, we can also fix a_2=0 and still get a good fit. In the second approach, to parameterize our SSF we used a cubic spline to interpolate the data. In <ref>, we provide a tabulation of the spline function that helps parameterize the SSF for both the bosonic and the fermionic models. The errors are obtained using a jackknife analysis. In order to show how these two different parameterizations capture our data, we show the corresponding curves for the bosonic model in <ref> and for the fermionic model in <ref>. We believe that a combined parameterization would best capture the true function. Hence, we use <ref> for ξ(L)/L ≤ 0.572 and the cubic spline interpolation for ξ(L)/L ≥ 0.572.
This combined form in the bosonic model is shown in <ref>, along with the bosonic model data. The dark line of this plot is used in the main paper to compare with the fermionic model. § INFINITE VOLUME CORRELATION LENGTH We can compute the infinite volume correlation length ξ_∞ using the SSF. Here we try to understand how ξ_∞ depends on λ in the fermionic XY model. In order to reliably estimate the errors in ξ_∞ we again use the jackknife analysis. We start with 40 jackknife blocks, where each block contains a pair (ξ(L)/L, ξ(2L)/ξ(L)) for different coupling values (0.01 ≤λ≤ 0.8). We obtain 40 different cubical splines using each jackknife block. We then start with the initial ξ(L)/L at L=640 in each block and evaluate ξ(2^n L) using the spline function for arbitrary values of n, until the correlation length ξ(2^n L) becomes insensitive to L. Finally, the jackknife mean and error is then computed from the 40 values. These results for ξ_∞ and their errors are quoted in <ref>. Since the correlation lengths increase exponentially as λ becomes small, we were able to extract the infinite volume correlation length only in the range 0.3≤λ≤0.8. Below λ < 0.3, our extrapolation methods fail. Using the data in <ref> we study the λ dependence of ξ_∞. For the bosonic XY model, it is well known that as one approaches the BKT phase transition, the leading divergence of the infinite volume correlation length is captured by ξ = C exp( b/√(β_c - β)), where β_c is the critical coupling, and b and C are non-universal constants. For the fermionic XY model since the partition function is an even function of λ we expect ξ_∞ to be a function of λ^2. Since the BKT critical point appears when λ→ 0, we conjecture that ξ^(1)_∞ = a_1 exp( b_1/√(λ^2)). We test this conjecture numerically by fitting the data in <ref> to it. We also compare this to other fit forms including ξ^(2)_∞ = a_2 exp(b_2/(λ^2)^1/4) and ξ^(3)_∞= a_3 exp(b_3/√(λ^2) + c_3 log(λ^2)/2). The results are shown in <ref>. We observe that <ref> is clearly quite good if we expect the constants a and b to be numbers which are not unnatural. We cannot rule out the presence of a power law correction to the expected form. In <ref>, we show the data in <ref> and the various fits. The first form is the expected behaviour from <ref>. The second form explores a possible dependence on square-root of λ which is clearly unnatural. Finally the third form allows for a logarithmic correction in the exponential (which is equivalent to including a 1/λ dependence outside the exponential). We note that in this extended form the data in the larger range of 0.3 ≤λ≤ 0.8 can be fit. § MONTE CARLO RESULTS We tabulate all of our Monte Carlo data in <ref> for both the bosonic XY and the fermionic XY models, for various values of L and couplings. The errors in these primary quantities have been obtained with 20 jackknife blocks.
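As an illustration of the jackknife block analyses used for the error estimates quoted in this supplementary material, a minimal sketch is given below. The synthetic measurements, block count, and observable are placeholders for illustration; this is not the analysis code used to produce the tables.

import numpy as np

def jackknife(samples, estimator, n_blocks=40):
    """Jackknife mean and error of a derived quantity over equal-size blocks."""
    samples = np.asarray(samples)
    n = (len(samples) // n_blocks) * n_blocks
    blocks = np.array_split(samples[:n], n_blocks)
    # Leave-one-block-out estimates of the derived quantity
    theta = np.array([estimator(np.concatenate(blocks[:k] + blocks[k + 1:]))
                      for k in range(n_blocks)])
    mean = theta.mean()
    err = np.sqrt((n_blocks - 1) / n_blocks * np.sum((theta - mean) ** 2))
    return mean, err

# Example: xi(L)/L built from synthetic measurements of chi and F using
# xi(L) = sqrt(chi/F - 1) / (2 sin(pi/L)).
rng = np.random.default_rng(1)
L = 320
data = np.column_stack([rng.normal(100.0, 5.0, 8000),    # chi measurements
                        rng.normal(60.0, 3.0, 8000)])    # F measurements

def xi_over_L(block):
    chi, F = block[:, 0].mean(), block[:, 1].mean()
    return np.sqrt(chi / F - 1.0) / (2.0 * np.sin(np.pi / L)) / L

print(jackknife(data, xi_over_L))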
http://arxiv.org/abs/2307.04992v1
20230711030733
Peeking into the next decade in Large-Scale Structure Cosmology with its Effective Field Theory
[ "Diogo Bragança", "Yaniv Donath", "Leonardo Senatore", "Henry Zheng" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph", "hep-th" ]
http://arxiv.org/abs/2307.10206v1
20230714072547
Volumetric Wireframe Parsing from Neural Attraction Fields
[ "Nan Xue", "Bin Tan", "Yuxi Xiao", "Liang Dong", "Gui-Song Xia", "Tianfu Wu" ]
cs.CV
[ "cs.CV", "cs.GR" ]
Volumetric Wireframe Parsing from Neural Attraction Fields

Nan Xue1 Bin Tan2 Yuxi Xiao1,3 Liang Dong4 Gui-Song Xia2 Tianfu Wu5
1Ant Group 2Wuhan University 3Zhejiang University 4Google Inc. 5NC State University
[email protected], {tanbin, guisong.xia}@whu.edu.cn, [email protected], [email protected]
August 12, 2023

[Teaser figure] Top: the first two panels show the vanilla Line3D++ <cit.> with the LSD <cit.> detector on high-resolution images and the one with HAWP <cit.> detection results via cross-view 2D wireframe correspondences; our matching-free solution learns the NEAT 3D line segments via coordinate MLPs and finally results in a parsimonious 3D wireframe by jointly optimizing the global 3D junctions from the NEAT 3D line segments. Panel labels: "An Input View Image", "Line3D++@LSD (4959 lines)", "Line3D++@HAWP (320 lines)", "NEAT 3D Line Segments", "Our 3D Wireframe (656 lines)". Bottom: the status of the global 3D junctions at different iterations of optimization from randomly-initialized 3D locations. Panel labels: "2D Wireframe Detection", "Rand. Init. 3D Junctions", "24k Iterations", "50k Iterations", "Final 3D Junctions". Please check the video demonstrations in our supplementary materials for better qualitative evaluation.

The primal sketch is a fundamental representation in Marr's vision theory, which allows for parsimonious image-level processing from 2D to 2.5D perception. This paper takes a further step by computing the 3D primal sketch of wireframes from a set of images with known camera poses, in which we take the 2D wireframes in multi-view images as the basis to compute 3D wireframes in a volumetric rendering formulation. In our method, we first propose a NEural ATtraction (NEAT) Field that parameterizes the 3D line segments with coordinate Multi-Layer Perceptrons (MLPs), enabling us to learn the 3D line segments from 2D observations without incurring any explicit feature correspondences across views.
We then present a novel Global Junction Perceiving (GJP) module to perceive meaningful 3D junctions from the NEAT Fields of 3D line segments by optimizing a randomly initialized high-dimensional latent array and a lightweight decoding MLP. Benefitting from our explicit modeling of 3D junctions, we finally compute the primal sketch of 3D wireframes by attracting the queried 3D line segments to the 3D junctions, significantly simplifying the computation paradigm of 3D wireframe parsing. In experiments, we evaluate our approach on the DTU and BlendedMVS datasets with promising performance obtained. As far as we know, our method is the first approach to achieve high-fidelity 3D wireframe parsing without requiring explicit matching. § INTRODUCTION This paper studies the problem of multi-view 3D reconstruction from the perspective of primal sketch <cit.>, aiming at explicitly computing a parsimonious and accurate representation of 3D scenes from posed multi-view images. Marr's vision theory <cit.> has long laid out a computational paradigm in which 2D and 2.5D primal sketches are key representations for image and view-dependent scene geometry in perception and understanding, motivating us to move forward to computing primal sketches of 3D scenes. In recent years, wireframes <cit.> has been proposed as a parsimonious yet expressive representation scheme focusing on the boundary-oriented primal sketch of 2D images. By unifying edge maps, line segments, and junctions all in one task of wireframe parsing, we have witnessed their great success in the geometric characterization of images for man-made environments and indoor images in wireframes. A wireframe consists of junctions as the graph vertices and line segments as the graph edges <cit.>. However, the current state of the art in wireframe-based scene characterization mainly focuses on either the image-level structural perception or the single-view 3D perception <cit.>. For the 3D wireframe parsing from multi-view images, it was studied as a line-based multi-view 3D reconstruction <cit.> problem in which the key is to build line correspondences across views. Because of the view-dependent occlusion of underlying 3D line segments in the detected 2D line segments, such a correspondence-based pipeline often does not work well on top of the holistic line segments from the wireframe graphs by the deep wireframe parsers <cit.>. As shown in the left of Fig. <ref>, due to the difficulty of line segment matching between 2D wireframes, many meaningful 2D line segments are ignored by Line3D++ <cit.> to yield an incomplete 3D wireframe model from 2D detection results. Instead, the correspondence-based pipelines are built on the small line fragments by the traditional LSD approach <cit.>, which leads to fragmented and noisy 3D wireframe parsing results. More importantly, 3D junctions (i.e. vertices of a 3D wireframe graph) are not explicitly computed in, and seem not practically feasible to be reliably tackled by, such a correspondence-based pipeline. Most recently, we have witnessed the great success in neural implicit rendering for multi-view 3D reconstruction <cit.>. It highlights the exciting fact that the scene geometry can be optimized well via coordinate MLPs without entailing explicit cross-view correspondences. Accordingly, we raise a question: Could we accomplish 3D wireframe parsing from multi-view input images without incurring explicit 2D wireframe correspondences? The answer is Yes! 
We answer it affirmatively by proposing a novel approach, the NEural ATtraction (NEAT) field for 3D wireframe parsing (Fig. <ref> (a)), in the following two respects.

The meaningfulness of rendering 3D line segments by querying 2D wireframe guided rays in the NEAT field. We use as inputs multi-view 2D wireframes computed by a state-of-the-art self-supervised 2D wireframe parser <cit.>, together with raw input multi-view images (Fig. <ref> (b)). To eliminate the need for building cross-view correspondences between the 2D wireframes, our NEAT field directly renders 3D line segments by querying view-dependent rays that intersect with 2D wireframes (more specifically, the 2D attraction regions associated with 2D wireframes proposed in <cit.>) under the volume rendering integral formulation <cit.>. We verified that by only querying the 2D wireframe guided rays (which are sparse compared with the counterparts in the vanilla volume rendering) the implicit scene geometry can still be optimized sufficiently well. The resulting 3D line segment cloud computed by our NEAT field is promising and meaningful, but noisy (Fig. <ref>). 3D junctions are therefore needed to “clean up” the noisy rendered 3D line segments.

# Cleaned-up version of the paper's schematic listing; the class wrapper and
# imports are added for readability, and nn.Parameters is corrected to
# nn.Parameter with an explicit random initialization.
import torch
import torch.nn as nn

class GlobalJunctionPerceiver(nn.Module):
    def __init__(self, num_junctions, dim=256, **kwargs):
        super().__init__()
        # Randomly initialized latent queries, learned as model parameters
        self.latents = nn.Parameter(torch.randn(num_junctions, dim))
        # Lightweight decoding MLP mapping each latent query to a 3D junction
        self.junction_mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, 3))

    def compute_global_junctions(self):
        return self.junction_mlp(self.latents)

The surprising effectiveness of directly perceiving global 3D junctions in the NEAT field. It is tempting to render 3D junctions by querying rays passing through 2D junctions in the 2D wireframe, which does not work due to the intrinsic geometry ambiguity. Counterintuitively, we present a novel mechanism, the Global Junction Perceiving (GJP) module, to learn the 3D junctions across viewpoints, which is best illustrated by the code snippet shown above. It is counterintuitive because the latent queries (as model parameters) are directly transformed into 3D junctions via an MLP. Both the latent queries and the junction MLP are learned on the fly, and yet they can converge to predict the 3D junctions reliably (Fig. <ref>). We exploit the synergies between 3D junctions and end-points of 3D line segments in a 3D wireframe in learning. Although noisy, the rendered 3D line segments provide sufficient “local evidence” in the 3D space for “regularizing” the learning of GJP. We utilize the Hungarian matching in learning to match the “local evidence” from volume rendering and the GJP outputs. After the GJP converges, the perceived global 3D junctions are stable and in turn provide binding forces to “clean up” the noisy rendered 3D line segments. Together, they form the reconstructed 3D wireframe. This surprising effectiveness of GJP reinforces the exciting fact that the implicit scene geometry that can be optimized well via coordinate MLPs in the volumetric rendering formulation can be concurrently optimized for volumetric 3D wireframe parsing with our NEAT field, but without entailing explicit cross-view correspondences. We test our proposed NEAT field on the DTU <cit.> and BlendedMVS <cit.> datasets and promising results are obtained.

In sum, this paper makes three main contributions to the field of volume rendering and 3D wireframe parsing:
* To the best of our knowledge, we are the first to achieve multi-view 3D wireframe reconstruction.
Without leveraging any heuristic correspondence search across viewpoints, we exploit the powerful capability of coordinate MLPs (implicit neural networks) for implicit feature correspondences.
* We present a novel NEAT field to learn sparse structures (3D line segments and 3D junctions) from the dense volumetric representation of rendering. On the one hand, we demonstrate that the regional association between rays queried in rendering and 2D line segments is a powerful tool to learn the sparse representation of 3D line segments from the dense volume. On the other hand, our novel design of directly perceiving the global 3D junctions from the randomly initialized latent queries provides a promising way to represent the scene geometry in a parsimonious and unified way.
* In experiments, we show that our NEAT field significantly pushes the boundary of 3D wireframe reconstruction to simultaneously handle both straight-line dominated scenes and curve-based (or polygonal line segment dominated) scenes, paving a way towards learning the 3D primal sketch. Our proposed GJP opens a door to characterize the scene geometry from 2D supervision in structured point-level 3D representations.

§ RELATED WORK

Structured 3D Reconstruction with Lines. Because of the inherent structural regularities for scene representation conveyed by line structures <cit.>, there has been a vast body of literature on line-based multiview 3D reconstruction tasks including line-based SfM <cit.>, SLAM <cit.>, and multi-view stereo <cit.> based on the theory of multi-view geometry <cit.>. Due to the difficulty of line segment detection and matching in 2D images, most of those studies expect the detected 2D line segments to be redundant and of short length to maximize the possibility of line segment matching. As for the estimation of scene geometry and camera poses, the keypoint correspondences (even including the 3D point clouds) are usually required. For example, in Line3D++ <cit.>, given the known camera poses from keypoint-based SfM systems <cit.>, it is still challenging to establish reliable correspondences for the pursuit of structural regularity in 3D line reconstruction. For our goal of wireframe reconstruction, because 2D wireframe parsers aim at producing parsimonious representations with a small number of 2D junctions and long line segments, correspondence-based solutions face a challenging scenario for cross-view wireframe matching, thus leading to inferior results compared with the ones using redundant and short 2D line segments detected by LSD <cit.>. To this end, we present a correspondence-free formulation based on coordinate MLPs, which provides a novel perspective to accomplish the goal of 3D wireframe reconstruction from the parsed 2D wireframes.

Neural Rendering for Geometry Primitives. In recent years, the emergence of neural implicit representations <cit.> has greatly reshaped the 3D vision community. By using coordinate MLPs to implicitly learn the scene geometry from multi-view inputs without knowing either the cross-view correspondences or the 3D priors, it has largely facilitated many 3D vision tasks including novel view synthesis, multi-view stereo, and surface reconstruction. Some recent studies further exploited the neural implicit representations by (explicitly and implicitly) taking geometric primitives such as 2D segmentation masks into account to lift the 2D detection results into 3D space for scene understanding and interpretation <cit.>.
Most recently, nerf2nerf <cit.> exploited a geometric 3D representation, surface fields, as a drop-in replacement for point clouds and polygonal meshes, and takes keypoint correspondences to register two NeRF MLPs. Our study can be categorized as an exploration of geometric primitives in neural implicit representations, but we focus on computing a parsimonious representation by using the most fundamental geometric primitives, junctions (points) and line segments, to provide a compact and explicit representation from coordinate MLPs.

§ PROBLEM STATEMENT

In this section, we first define the problem, and give a high-level formulation, of lifting dense volume rendering of neural implicit surfaces to parsimonious 3D wireframe parsing (Sec. <ref>). Our formulation takes the multi-view images and the corresponding 2D wireframes detected by an off-the-shelf wireframe parser, HAWP <cit.> (Sec. <ref>). Building on top of the volumetric surface rendering <cit.> along the queried rays emanating from the camera location c with the view direction v,
Î(c, v) = ∫_0^∞ T(t)·σ(x_t)·𝐫(x_t, v, 𝐧(x_t), z(x_t)) dt,
with the transmittance T(t) = exp(-∫_0^t σ(x(s)) ds), we get rid of explicit feature correspondences for 3D wireframe parsing in Sec. <ref>.

§.§ The Problem of 3D Wireframe Parsing

Denote by x(t) = c + t·v a ray x emanating from a camera center c ∈ ℝ^3 along the unit direction v. We formulate the problem of 3D wireframe parsing based on VolSDF, which learns the signed distance d(x_t) and the surface feature z(x_t) at the queried 3D location x_t and then computes the radiance field 𝐫(x_t, v, 𝐧(x_t), z(x_t)), where 𝐧(x_t) = ∇_x d(x_t) is the surface normal determined by the SDF. In practical computation, coordinate MLPs are used for the learning of the SDF and the radiance field. Intuitively, our proposed volumetric wireframe parsing is to compute the line drawing of those implicit surfaces using the 3D wireframe representation. A 3D wireframe of a scene is represented by a graph 𝒢=(𝒥, ℒ), where 𝒥 is the vertex set consisting of 3D junctions, J_i ∈ ℝ^3, and ℒ is the edge set consisting of 3D line segments. A 3D line segment, L_i,j, connects two 3D junctions (i.e., end-points), J_i and J_j. Our proposed volumetric wireframe parsing is built on top of VolSDF and consists of three components:
* 3D Line Segment Rendering: (x_t, v) ↦ (x_t, x_t) + (Δx_t^1, Δx_t^2) = L(x_t, v, 𝐧, z), where (Δx_t^1, Δx_t^2) are the displacement/offset vectors of the 3D line segment at the 3D query point x_t. We will address the problem of which rays should be sampled in order to ensure the learning of both Eqn. <ref> and Eqn. <ref> based on the 2D wireframe observations, and compute the 3D line segment rendering integral to generate 3D line segment proposals.
* Global 3D Junction Perceiving: Our 3D line segment rendering inherits the dense representation, like the density field and the radiance field. To achieve parsimonious wireframes, we propose a novel query-based design to holistically perceive a predefined sparse set of N 3D junctions by Q_{N×C} ↦ J_{N×3}, where Q_{N×C} are C-dim latent queries (randomly initialized in learning). Surprisingly, as we shall show in experiments, the synergies induced by the underlying 3D scene geometry (Sec. <ref>) between J_{N×3} and the above 3D line segment rendering integral enable us to learn a very meaningful global 3D junction perceiver.
To put it in another way, the 3D junction perceiver is learned to geometrically attract the dense 3D line segment rendering integral to a sparse set of junctions in the 3D space. Due to this attraction nature, we dub the proposed representation scheme short in NEAT. * 3D Wireframe Reconstruction: With the 3D line segment proposals (Eqn. <ref>) and the 3D junction proposals (Eqn. <ref>), we extend the 2D wireframe parsing method proposed in <cit.> for 3D wireframe reconstruction as the final step in Sec. <ref>. §.§ 2D Wireframes and the Ray Sampling In this section, we elaborate on the inputs that are needed in the proposed 3D line segment rendering (Eqn. <ref>) and on what rays to be sampled in learning to satisfy both Eqn. <ref> and Eqn. <ref>. We leverage the self-supervised HAWPv3 <cit.> model (trained on a subset of ImageNet data) [The code and pre-trained checkpoint are available at <https://github.com/cherubicXN/hawp>] as the wireframe parser due to their outperforming OOD performance throughout our experiments. For simplicity, consider an image I in a given set of multi-view images in learning, we denote by L^ 2D = {l̈_i = (𝐱_i^1, 𝐱_i^2)}_i=1^M the set of 2D line segments in the parsed 2D wireframe for I. In terms of ray sampling, one straightforward method is to only sample rays that intersect with a 2D line segment in L. Unfortunately, this will become too sparse to optimize the coordinate MLPs, which in turn impacts the learning of the 3D line segments (Eqn. <ref>). To address this issue, we exploit the basic idea of lifting a 2D line segment to its attraction region proposed in HAWP <cit.> to densify the sparse geometry with the non-edge pixels. The attraction region of a line segment l̈_i, denoted by a(l̈_i), consists of pixels whose distances to the line segment are smaller than a predefined threshold (e.g., 10 pixels). We will consider the set of all rays that intersect with any pixels in all the 2D attraction regions in ray sampling in learning. With this relaxation, we verified that the ray sampling enables the high-fidelity rendering of images based on Eqn. <ref>, as shown in Fig. <ref> and our supplementary materials. We also assign 2D line segments l̈_i as the ground-truth 2D observation for 3D line segments rendered from any rays that intersect with a pixel in the attraction region a(l̈_i) based on Eqn. <ref>. § THE PROPOSED NEAT FIELDS In this section, we present details of the three components in our volumetric wireframe parsing. §.§ 3D Line Segment Rendering Similar to the computation of the expected color (Eqn. <ref>), with the 3D line segment rendering (Eqn. <ref>), we compute the two displacement vectors of the expected 3D line segment along a ray x(t) by, L̈(c, v) = ∫_0^∞ T(t)·σ(x_t) · L(x_t, v, 𝐧(x_t), z(x_t)) dt, where L̈(c, v)∈ R^2× 3. We follow the point sampling algorithm proposed in VolSDF <cit.> in approximating the integral empirically. Denote by Π(L̈(c, v)) the projected 2D line segment to the image plane of the camera emitting the ray x(t), and by l̈(c,v) the observed 2D line segment (computed by the HAWPv3) that intersects with the ray x(t). The loss between Π(L̈(c, v)) and l̈ is defined by the minimum l_1 distance between the projected and observed 2D line segments over the permutation of the order of the two end-points, ℒ_ neat(L̈, l̈) =min_χΠ(L̈)-χ(l̈)_1, where χ represents the two permutations of swapping the two end points of a 2D line segment. 
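To make the endpoint-permutation-invariant loss defined above concrete, a minimal PyTorch sketch is given below. The tensor layout (N segments, 2 endpoints, 2 image coordinates) and the assumption that the 3D segments have already been projected to the image plane are illustrative choices, not the released implementation.

import torch

def neat_line_loss(proj_pred, obs):
    """L1 distance between projected and observed 2D segments, minimized
    over the two possible orderings of the observed endpoints.

    proj_pred, obs: tensors of shape (N, 2, 2) holding two 2D endpoints
    per line segment (assumed layout, for illustration only).
    """
    d_keep = (proj_pred - obs).abs().flatten(1).sum(dim=1)
    d_swap = (proj_pred - obs.flip(dims=[1])).abs().flatten(1).sum(dim=1)
    return torch.minimum(d_keep, d_swap).mean()

# The loss is insensitive to the order in which the 2D endpoints are stored:
pred = torch.tensor([[[0., 0.], [1., 1.]], [[2., 2.], [3., 3.]]])
obs = torch.tensor([[[0., 0.], [1., 1.]], [[3., 3.], [2., 2.]]])
print(neat_line_loss(pred, obs))   # tensor(0.)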
§.§ Global 3D Junction Perceiving Due to the many-to-one mapping between the associated rays and a 2D line segment observation in 3D line segment rendering, we obtain many redundant 3D line segment proposals. To “clean up" them, global scene geometry is entailed. We present a counter-intuitive yet surprisingly effective query-based learning method for 3D junction perceiving (Eqn. <ref>), similar in spirit to the DETR <cit.> and the Perceivers <cit.>, but without using their sophisticated designs of self-attention and cross-attention. We use a plain MLP with randomly initialized queries as the inputs (Eqn. <ref>). Since we do not have any well-defined ground-truth observations in learning 3D junctions, we exploit the end-points of the redundant rendered 3D line segments (Sec. <ref>) as the noisy labels. Denote by 𝐉_M× 3 the set of end-points from all the current rendered 3D line segments (Eqn. <ref>). To deal with the noisy labels, our 3D junction perceiving consists of three steps: * Filtering 𝐉_M× 3 to remove the end-points that do not have observed 2D junction support after projection, that is their projections' minimum distances to the set of observed 2D junctions are larger than a predefined threshold (e.g., 10 pixels). Denote by 𝐉_M'× 3 the set of end-points after filtering. * Clustering 𝐉_M'× 3 to eliminate the redundance caused by the many-to-one relationships between rays and 2D line segement observation. We utilize the DBScan method <cit.> in clustering. Denote by 𝐉_m× 3 the set of end-points after clustering (m<M). * Bi-pariate set-to-set matching between the perceived junctions J_N× 3 (Eqn. <ref>) and the filtered and clustered noisy labels 𝐉_m× 3. We use the Hungarian algorithm <cit.>. The pair-wise matching cost is computed by the ℓ_2 norm between 3D points. Denote by 𝒥={(J_k×3, 𝐉_k× 3)_k=1^K} the set of matched junctions, where K=min(N, m). Based on the bipartite matching results, the loss between a perceived junction J_k and its assigned “ground-truth" 𝐉_k is defined by, ℒ_jc(J_k, 𝐉_k) = J_k - 𝐉_k _1 + λ·Π(J_k) - Π(𝐉_k) _1, where Π() is the 3D-to-2D projection, and λ the trade-off parameters (e.g. 0.01 in our experiments). §.§ 3D Wireframe Reconstruction After optimization, the N global junctions {𝒥_k} are saved as the model parameters and the dense 3D line segments (, NEAT lines) are accessed by querying all the foreground rays across viewpoints from NEAT fields. In this stage, we use very “thin" attraction regions with the attraction threshold being 1 pixel for the computing of NEAT lines. Then, three key steps are used for the final 3D wireframe reconstruction. 3D Junction Refinement. Thanks to the properties of SDF, the learned global junctions by the feed-forward layer can be further refined per the SDF values and their normals by 𝒥_k^ refined = 𝒥_k - d(𝒥_k)∇ d(𝒥_k). For the junctions that have larger SDF values (, |d(𝒥_k)|>0.05), we directly remove them as they are not reliable to yield the final 3D wireframe model. The Attraction Scheme. Similar to what HAWP done for 2D wireframe parsing <cit.>, we use the junctions 𝒥^ refined as the final endpoints of the 3D line segments to attract the nearest 3D NEAT lines. Then, a wireframe model with {𝒥_ refined} as the vertices, and the attracted indices {(k_i^1,k_i^2)} can be directly transformed to the endpoint representation of the i-th 3D line segments. 
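The attraction step described above, which snaps the endpoints of the queried NEAT lines to their nearest refined global junctions and keeps the resulting index pairs, can be sketched as follows. The array shapes, the simple nearest-neighbour rule, and the deduplication by sorted index pairs are illustrative assumptions rather than the exact released code.

import numpy as np

def attract_lines_to_junctions(neat_lines, junctions):
    """Snap NEAT line endpoints to the nearest global junctions.

    neat_lines: (M, 2, 3) endpoints of queried 3D line segments.
    junctions:  (N, 3) refined global 3D junctions.
    Returns the unique (k1, k2) index pairs, i.e. the wireframe edges.
    """
    ends = neat_lines.reshape(-1, 3)                                   # (2M, 3)
    d = np.linalg.norm(ends[:, None, :] - junctions[None, :, :], axis=-1)
    nearest = d.argmin(axis=1).reshape(-1, 2)                          # (M, 2)
    edges = set()
    for k1, k2 in nearest:
        if k1 != k2:                                  # drop degenerate segments
            edges.add((int(min(k1, k2)), int(max(k1, k2))))   # deduplicate
    return sorted(edges)

# Toy usage with random data (illustrative only)
rng = np.random.default_rng(0)
lines = rng.uniform(-1.0, 1.0, size=(100, 2, 3))
juncs = rng.uniform(-1.0, 1.0, size=(16, 3))
print(len(attract_lines_to_junctions(lines, juncs)), "unique wireframe edges")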
Benefitting from the global junctions, the Attraction Scheme can directly deduplicate the noisy and duplicated 3D line segments across multiview images without considering the cross-view correspondences beforehand. However, such a putative wireframe model still contains some unreasonable 3D line segments, which can be filtered by visibility checking to fit the 2D observations.

Visibility Checking. We use the camera projection matrices to check the visibility of each 3D line segment by comparing its reprojection with the 2D detection results. Here, we use a relaxed threshold for each view but require that the line segments in the final model be visible in at least 5 views. See more details in the supplementary.

§.§ Implementation Details

Our NEAT approach is implemented in PyTorch <cit.> and the source code will be publicly available for reproducible research purposes. For the required 2D wireframes, we use the self-supervised HAWPv3 model <cit.> trained on the ImageNet data as the detector and keep all the default configurations in our experiments.

The Specification of MLPs. To keep the design simple, the specification of the MLPs for the SDF and the radiance field is the same as in VolSDF <cit.>. For the proposed NEAT field, we use a 4-layer MLP to render the 3D line segments from the queried rays.

Total Loss Function. For the optimization, we jointly learn the coordinate MLPs and the Global Junction Perceiving module from scratch per scene. The total loss function ℒ_total is a weighted sum of all the mentioned loss terms,
ℒ_total = ℒ_rend + λ_n ℒ_neat + λ_e ℒ_eikonal + λ_j ℒ_jc,
where the rendering loss ℒ_rend and the Eikonal loss ℒ_eikonal are the same as in VolSDF <cit.>. The weights λ_n, λ_e and λ_j are all set to 0.01.

Optimizer and Hyperparameters. We use ADAM <cit.> as our optimizer. In each iteration, the batch size of the ray sampling is 1024, and the initial learning rate is set to 5× 10^{-4}. An exponential learning rate schedule with a decay rate of 0.1 is used.

§ EXPERIMENTS

In the experiments, we mainly test our NEAT approach on two datasets (i.e., the DTU dataset <cit.> and the BMVS dataset <cit.>) of real-scene multiview images with known camera poses. In addition to those two datasets, the experiments on the ABC dataset <cit.> in our supplementary materials, evaluated using the 3D wireframe annotations, further verify our proposed NEAT approach for the 3D wireframe representation.

§.§ Baselines, Datasets and Evaluation Metrics

Line3D++ Baselines. We take the well-engineered Line3D++ <cit.> to build the baselines, which takes LSD <cit.> as the 2D line segment detector for input images and matches the 2D line segments using the known camera intrinsic and extrinsic parameters. Because our target is 3D wireframe reconstruction instead of 3D line segment reconstruction, for fair comparisons, we use HAWPv3 <cit.> as the alternative for 2D detection. We call this baseline Line3D++@HAWP. Besides this baseline, we also compare our method with Line3D++ using LSD as the detector, but we defer the details to the supplementary materials.

DTU <cit.> and BlendedMVS <cit.> Datasets. These two datasets were mainly designed for multiview stereo (MVS), but they are applicable to 3D wireframe reconstruction as they provide high-quality 3D point clouds as annotations. For our experiments, we run our method on 12 scenes from the DTU dataset and 5 scenes from the BlendedMVS dataset.
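As a reading of the implementation details above, the following sketch shows how the four loss terms and the optimizer settings might be wired together; the model stand-in, the placeholder loss values, and the schedule granularity are illustrative assumptions rather than the released training code.

import torch

lambda_n = lambda_e = lambda_j = 0.01          # weights quoted above

def total_loss(l_rend, l_neat, l_eikonal, l_jc):
    """Weighted sum of the four loss terms described in the text."""
    return l_rend + lambda_n * l_neat + lambda_e * l_eikonal + lambda_j * l_jc

model = torch.nn.Linear(3, 3)                  # stand-in for the NEAT networks
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
num_steps = 100                                # placeholder iteration count
# Exponential decay of the learning rate by an overall factor of 0.1
scheduler = torch.optim.lr_scheduler.ExponentialLR(
    optimizer, gamma=0.1 ** (1.0 / num_steps))

for step in range(num_steps):
    optimizer.zero_grad()
    # Placeholder loss terms; in practice they come from the sampled rays.
    terms = [model(torch.randn(8, 3)).pow(2).mean() for _ in range(4)]
    loss = total_loss(*terms)
    loss.backward()
    optimizer.step()
    scheduler.step()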
For the quantitative evaluation, we first convert the reconstructed wireframe model by NEAT (or the 3D line segment model by baselines) into the point cloud by sampling 32 points on each line segment and computing the ACC metric to make comparisons. Because the reconstructed 3D wireframes (and line segments) are rather sparse than the dense surfaces, the COMP metric is not used for comparison. Instead, we use the number of reconstructed 3D line segments and junctions as the reference of completeness. §.§ Main Comparisons We compare our NEAT approach with three baselines on the scenes from DTU and BlendedMVS datasets, which include both the straight-line dominant scenes and some curve-based ones. In Tab. <ref>, we quantitatively report the ACCs for both 3D line segments and their junctions (or endpoints), as well as the number of geometric primitives. Compared to the baseline Line3D++@HAWP that takes the same 2D wireframes as input, our NEAT significantly outperforms it in all metrics, which indicates that NEAT is able to yield more accurate and complete 3D reconstruction results than L3D++ for HAWP inputs. Fig. <ref> shows the quantitative comparison results between Line3D++@HAWP and our method on the two real-scene datasets. As is shown, our approach can simultaneously handle both straight-line scenes and complicated ones with curved structures. That is to say, our NEAT is of great potential for the more general purpose of structural scene representation by 3D wireframes. §.§ Ablation Studies In our ablation study, two scenes (, DTU-24 and DTU-105) are used as representative cases to discuss our NEAT approach. In the first, we qualitatively show the NEAT lines (, raw output of 3D line segments by querying the NEAT field), the initial reconstruction by binding the queried NEAT lines to global junctions, and the final reconstruction results by the visibility checking. Then, we discuss our NEAT approach in the following two aspects: (1) the parameterization of NEAT Fields and (2) the view dependency issue for junction perceiving. The Process of Wireframe Reconstruction. Fig. <ref> shows the three steps for wireframe reconstruction. In the first step, we query all possible 3D line segments from the optimized NEAT field. Then, the queried 3D line segments are binding to the global junctions. In the final step, by leveraging the visibility checking, the unstable 3D line segments are removed from the initial wireframe models. Benefitting from the proposed Global Junction Perceiving module, we largely simplified the way of removing duplicated and unreliable line segments without using either the known 3D points or the complicated line segment matching. Parameterization of NEAT Fields. We found that the parameterization of NEAT Fields learning is playing in a vital role in the wireframe reconstruction. Even though our NEAT field aims at representing 3D line segments by the displacement vectors of the 3D points, the localization error in the detected 2D wireframes will possibly lead to some 3D line segments that cannot be well supported by high-quality 2D detection results missing. The information on view direction is a key factor to avoid this issue and yield more complete results. According to Tab. <ref>, the parameterization without the viewing directions will result in a coarser reconstruction with larger ACC errors for both 3D junctions and line segments while having fewer line segments although the number of global junctions is similar to the final model. Clustering in Junction Perceiving. 
The DBScan <cit.> clustering is a key factor in accurately perceiving global junctions from the view-dependent coordinate MLP of the NEAT field. To verify this factor, we ablated the DBScan clustering to optimize MLPs on DTU-24 and DTU-105. Quantitatively reported in Tab. <ref>, although the parameterization of viewing direction largely reduced the ACC errors for both reconstructed junctions and line segments, the number of 3D junctions and line segments is also significantly reduced. When we enable the clustering during optimization, the lower-quality 3D local junctions (from the NEAT field) can be filtered, thus leading to an easy-to-optimize mode to yield more 3D junctions and line segments with fewer reconstruction errors. §.§ Failure Mode and Limitations Because our method is built on the top of VolSDF <cit.>, the difficulties for the inside-out scenes for neural surface rendering solutions are inevitable in the current stage. The recent studies <cit.> that leverages the pre-trained monocular depth and normal maps <cit.> would overcome those difficulties, but it is out of the scope of this paper. We would leave it in our future work. The quality of 2D wireframe detection would be another key factor leading to the failures. Once the used HAWP model <cit.> fails to accurately detect wireframes, we cannot accomplish the goal of 3D wireframe reconstruction and parsing. Based on those two points, Fig. <ref> provides a representative failure case, which is provided by the ScanNet <cit.> dataset. Due to the existence of motion blur, although the HAWP model detects the wireframes, the shifted locations with respect to motion blur will lead to many flying 3D line segments from the NEAT fields. Then, by attracting the flying line segments to the global junctions, the visibility checking will filter out many line segments as shown in Fig. <ref>. Even so, the learned global junctions shown in Fig. <ref> seem to like faithfully learned from the blurry 2D wireframes. One possible reason is that our global design could average the 2D reprojection error during optimization. Such a phenomenon is possible to bring some new perspectives to rethink the relationship between junctions and line segments for the wireframe representation. § CONCLUSION This paper studied the problem of multi-view 3D wireframe parsing (reconstruction) to provide a novel viewpoint for compact 3D scene representation. Building on the basis of the volumetric rendering formulation, we propose a novel NEAT solution that simultaneously learns the coordinate MLPs for the implicit representation of the 3D line segments, and the global junction perceiving (GJP) to explicitly learn global junctions from the randomly-initialized latent arrays in a self-supervised paradigm. Based on new findings, we finally achieve our goal of computing a parsimonious 3D wireframe representation from 2D images and wireframes without considering any heuristic correspondence search for 2D wireframes. To our knowledge, we are the first to achieve multi-view 3D wireframe reconstruction with volumetric rendering. Our proposed novel GJP module opens a door to characterize the scene geometry from 2D supervision in structured point-level 3D representation. § SUPPLEMENTARY MATERIAL The supplementary document is summarized as follows: * Appx. <ref> gives a summary of the submitted supplementary video. * Appx. <ref> elaborates on the technical details (introduced in Sec. 
3.2 of the main paper)of the wireframe-driven ray sampling and the corresponding rendering results. * Appx. <ref> supplies the details for the NEAT field learning (introduced in Sec. 4), including the network architecture, visibility checking, and some additional experimental results. * Appx. <ref> illustrates the difference between 3D wireframe parsing and Line-based 3D reconstruction <cit.> in additional experiments. * Appx. <ref> presents the additional experiments on the ABC dataset <cit.> to discuss the performance given the ground-truth annotations of 3D wireframes. * Appx. <ref> shows the miscellaneous stuff. § VIDEO In our video <https://youtu.be/qtBQYbOpVpc>, we first illustrate our key ideas by using a simple object from the ABC dataset as a running example for the learned 3D line segments by NEAT field, the global junction perceiving module, as well as the final 3D wireframe model. Then, we visualize the learned redundant 3D line segments and the optimization process of the global junctions on the DTU-24 as another running example. Finally, the qualitative evaluations on the DTU and BlendedMVS are presented, which are all aligned with the quantitative evaluations of our main paper. § WIREFRAME-DRIVEN RAY SAMPLING To demonstrate the wireframe-driven ray sampling, we run a set of experiments on the scene 24 from the DTU dataset <cit.>. Fig. <ref> shows the feasibility of optimizing coordinate MLPs by using wireframe-driven ray sampling. As shown in Fig. <ref>(a), by masking more than 80% pixels out (with a distance threshold of 10 pixels), the optimization of coordinate MLPs also works to result in reasonable results in Fig. <ref>(b). Apart from the rendering results, we found that the increase in the distance threshold will lead to fewer line segments and junctions. As reported in Tab. <ref>, when we set the distance threshold to τ_d = 20, the number of 3D lines and junctions are all decreased even though the ACC errors are also reduced since the pixels far away from line segments may not be related to any line segments. When the distance threshold τ_d is set to 1, due to the insufficient supervision signals, performance degeneration is observed across all the metrics. § DETAILS OF NEAT FIELD LEARNING §.§ Additional Implementation Details Network Architecture. The coordinate MLPs used in our NEAT approach are derived from VolSDF <cit.>, which contains three coordinate MLPs for SDF, the radiance field, and the NEAT field. For the MLP of SDF, it contains 8 layers with hidden layers of width 256 and a skip connection from the input to the 4th layer. The radiance field and the NEAT field share the same architecture with 4 layers with hidden layers of width 256 without skip connections. The proposed global junction perceiving (GJP) module contains two hidden layers and one decoding layer as described in the code snippets of Sec.  1 in our main paper. Hyperparameters. The distance threshold τ_d about the foreground pixel (ray) generation is set to 5 by default[In our main paper (L419-L420), we use the sentence “... smaller than a predefined threshold (eg., 10 pixels)", which will be corrected in the revision.]. For the number of global junctions (, the size of the latent), we set it to 1024 on the DTU and BlendedMVS datasets. When the scene scale is larger (, a scene from ScanNet mentioned in Fig. 5 of the main paper), the number of global junctions is set to 2048. 
For DBScan <cit.>, we use the implementation from the sklearn package, setting the epsilon (the maximum distance between two samples) to 0.01 and the number of samples (in a neighborhood for a point to be considered a core point) to 2. The Definition of the ACC Metric. We follow the official evaluation protocol of the DTU dataset <cit.> to compute the reconstruction accuracy (ACC), which is defined as ACC = mean_𝐩∈ P( min_𝐩^*∈ P^*‖𝐩 - 𝐩^*‖ ), where P and P^* are the point clouds sampled from the predictions and the ground-truth mesh. Because we pursue a parsimonious 3D representation, we did not use the reconstruction completeness (i.e., COMP) as a metric for evaluation. Instead, we use the number of 3D junctions and line segments to show the completeness of the reconstructed 3D wireframes. §.§ Visibility Checking As mentioned in Sec. 4.3 of the main paper, the visibility checking is leveraged to filter out incorrect wireframe edges (i.e., 3D line segments). Considering the fact that the 2D detection results by HAWPv3 <cit.> are not accurate enough, we use a relaxed criterion to check if a 3D line segment is visible in one view, and use the visibility count over all the queried views to filter out the incorrect 3D line segments. Given a 2D line segment l̈_p = (𝐱_1^p,𝐱_2^p) obtained by projecting a 3D line segment L̈ into the queried view with camera parameters (K, R, T), we define the visibility in the queried view by Vis(l̈_p|K,R,T) = [ min_l̈_q( ‖𝐱_1^p-𝐱_1^q‖_2^2 + ‖𝐱_2^p-𝐱_2^q‖_2^2, ‖𝐱_1^p-𝐱_2^q‖_2^2 + ‖𝐱_2^p-𝐱_1^q‖_2^2 ) < τ_v ], where l̈_q are the detected line segments in the queried view. By summing the visibility across viewpoints as Vis(L̈) = ∑_K,R,T Vis(l̈_p|K,R,T), the 3D line segments that are visible in fewer than τ_ck∈ℤ^+ views are discarded. In our implementation, τ_ck is set to 5 and τ_v is set to 100 for all scenes. More careful tuning of τ_ck and τ_v would improve the final performance, but this is beyond our main scope. §.§ SDF-based 3D Junction Refinement To demonstrate the effectiveness of the junction refinement by the SDF, we provide an ablation study that obtains the 3D wireframe models with and without the SDF-based refinement in Tab. <ref>, which shows that the junction refinement is necessary to obtain significantly better results. § NEAT VS. LINE3D++ In this section, we present a more comprehensive comparison between our NEAT and Line3D++ <cit.>. Because the original design of Line3D++ <cit.> favors high-resolution input images[The long side of the input images should be greater than 800 pixels.] to obtain more short 2D line segments, we set up two additional baselines, Line3D++O and Line3D++M, to indicate the original version and the modified version, respectively. In the modified version Line3D++M, we resize the input images to 512× 512, as is done in HAWPv3 <cit.> for detection. Tab. <ref> reports the evaluation results for NEAT and the three baselines of Line3D++ <cit.>. When comparing our proposed NEAT with Line3D++M, which detects wireframes/line segments in images of size 512× 512, the overall ACC errors for 3D junctions and 3D line segments are significantly reduced, with relative improvements of 71.6% and 87.3%, respectively. As for the vanilla Line3D++ (i.e., Line3D++O), it obtains the best reconstruction accuracy with short line segments. Compared to all baselines, the average length of the reconstructed 3D line segments by our NEAT is 25.42, significantly longer than the LSD-based baselines.
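As a side note on the two definitions used above, the DBSCAN call with the quoted parameters and the ACC computation can be sketched as follows. This is our own illustration on placeholder point clouds, not the code used for the tables, and taking the mean of each cluster as its representative is our assumption.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def cluster_junctions(points, eps=0.01, min_samples=2):
    """Cluster candidate 3D junctions with DBSCAN; keep one representative (mean) per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return np.stack([points[labels == k].mean(axis=0)
                     for k in sorted(set(labels)) if k != -1])   # label -1 marks noise

def acc_error(pred_points, gt_points):
    """ACC: mean distance from each predicted point to its nearest ground-truth point."""
    dists, _ = cKDTree(gt_points).query(pred_points, k=1)
    return float(dists.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = rng.random((50, 3))                                              # 50 "true" junctions
    candidates = np.repeat(centers, 4, axis=0) + 1e-3 * rng.standard_normal((200, 3))
    junctions = cluster_junctions(candidates)                                  # ~50 clustered junctions
    print(junctions.shape, acc_error(junctions, centers))
```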
Although the average length of Line3D++@HAWP is similar to ours, its reconstruction error is also larger than other approaches. § EXPERIMENTS ON THE ABC DATASET Because the 3D wireframe annotations are very difficult to obtain for real scene images, to better discuss the problem of 3D wireframe reconstruction and analyze our proposed NEAT approach, we conduct experiments on objects from ABC Datasets as it provides 3D wireframe annotations. Data Preparation. We use Blender <cit.> to render 4 objects from the ABC dataset. The object IDs are mentioned in Tab. <ref>. For each object, we first resize it into a unit cube by dividing the size of the longest side and then moving it to the origin center. Then, we randomly generate 100 camera locations, each of which is distant from the origin by √(1.5^2+1.5^2)≈ 2.1213 units. The setting of the distance, √(1.5^2+1.5^2), is from our early-stage development for the rendering, in which we set a camera at (0,1.5,1.5) location. By setting the cameras to look at the origin (0,0,0), we obtain 100 camera poses. Considering the fact that the ABC dataset is relatively simple, we set the focal length to 60.00mm to ensure the object is slightly occluded for rendering images. The sensor width and height of the camera in Blender are all set to 32mm. The ground truth annotations of the 3D wireframe are from the corresponding files. For the simplicity of evaluation, we only keep the straight-line structures and ignore the curvature structures to obtain the ground truth annotations. The rendered images are with the size of 512× 512. Baseline Configuration. Fig. <ref> illustrates the rendered input images for the used four objects. Because the rendered images are textureless and with planar objects, the dependency of those baselines on the correspondence-based sparse reconstruction by SfM systems <cit.> is hardly satisfied to produce reliable line segment matches for 3D line reconstruction. Accordingly, we set up an ideal baseline instead of using Line3D++ <cit.> for comparison. Specifically, we first detect the 2D wireframes for the rendered input images and then project the junctions and line segments of the ground-truth 3D wireframe models onto the 2D image plane. For the 2D junctions, if a projected ground-truth junction can be supported by a detected one within 5 pixels in any view, we keep the ground-truth junction as the reconstructed one in the ideal case. For the 2D line segments, we compute the minimal value for the distance of the two endpoints of a detected line segment to check if it can support a ground-truth 3D line. The threshold is also set to 5 pixels. Then, we count the number of reconstructed 3D line segments and junctions in such an ideal case. Evaluation Metrics. For our method, we compute the precision and recall for the reconstructed 3D junctions and line segments under the given thresholds. Because the objects (and the ground-truth wireframes) are normalized in a unit cube, we set the matching thresholds to {0.01,0.02,0.05} for evaluation. For the matching distance of line segments, we use the maximal value of the matching distance between two endpoints to identify if a line segment is successfully reconstructed under the specific distance threshold. For the ideal baseline, we report the number of ground-truth primitives (junctions or line segments), the number of reconstructed primitives, and the reconstruction rate. Results and Discussion. Tab. <ref> quantitatively summarizes the evaluation results and the statistics on the used scenes. 
As reported, our NEAT approach accurately reconstructs the wireframes from posed multi-view images. The main performance bottleneck of our method comes from the 2D detection results. As shown by the ideal baseline, which projects the ground-truth 3D junctions and line segments onto the image planes to obtain ideal 2D detections, the 2D detection results by HAWPv3 <cit.> do not perfectly hit all ground-truth annotations. Furthermore, even if we used only the hit ground truth (localization error less than 5 pixels) for 3D wireframe reconstruction, some 3D junctions and even more 3D line segments could be missed. In this sense, given a relaxed threshold of the reconstruction error for the precision and recall computation, our NEAT approach is comparable with the ideal solution. For the first object (ID 4981), because of the severe self-occlusion, some line segments are not successfully reconstructed by either the ideal baseline or our approach. For object 17078, our NEAT approach reconstructed some parts of the two circles that are excluded from the ground truth, which leads to a relatively low precision rate. Fig. <ref> also supports these results. § MISCELLANEOUS The scene IDs and their MD5 codes for the BlendedMVS scenes are: * Scene-01: * Scene-02: * Scene-03: * Scene-05: * Scene-04:
http://arxiv.org/abs/2307.05264v1
20230711135546
Hardy Spaces of Meta-Analytic Functions and the Schwarz Boundary Value Problem
[ "William L. Blair" ]
math.CV
[ "math.CV", "math.AP", "math.FA", "30E25, 30G20, 30H10, 35G15, 46F20" ]
We extend representation formulas that generalize the similarity principle of solutions to the Vekua equation to certain classes of meta-analytic functions. Also, we solve a generalization of the higher-order Schwarz boundary value problem in the context of meta-analytic functions with boundary conditions that are boundary values in the sense of distributions. § INTRODUCTION In this paper, we prove a representation formula for certain classes of meta-analytic functions and solve an associated Schwarz boundary value problem. We work to further the study of solutions to generalizations of the Cauchy-Riemann equation. One of the most well-studied generalizations is the Vekua equation ∂ w/∂z̄ = Aw + Bw̄, for A,B ∈ L^q, q>2, see <cit.>. Solutions of (<ref>) are called generalized analytic functions and share many of the desirable characteristics of complex analytic functions because of the representation known as the similarity principle. The similarity principle is the representation of a generalized analytic function as a factorization w = e^φϕ, where ϕ is holomorphic and φ is Hölder continuous on the closure of the domain. Since Hölder continuous functions on a closed set are bounded in modulus, the similarity principle not only extends properties of generic holomorphic functions that depend on size but extends those kinds of properties of the Hardy spaces of holomorphic functions when the holomorphic factor is an element of one of these spaces, see <cit.>. The poly-analytic (or n-analytic) functions are those functions that solve the higher-order generalization of the Cauchy-Riemann equation ∂^n f/∂z̄^n = 0. These functions are known to be representable as a polynomial in z̄ with holomorphic coefficients. When the holomorphic coefficients are Hardy space functions, the resulting classes of functions inherit some of the properties of the corresponding Hardy space, see <cit.>. Considering the Vekua equation as (∂/∂z̄ - A - BC(·)) w = 0, where C(·) denotes the mapping that sends functions to their complex conjugate, it is natural to consider the higher-order generalizations (∂/∂z̄ - A - BC(·))^n w = 0, for n >1. In <cit.>, the authors show that solutions to (<ref>) with constant A ∈ ℂ and B≡ 0 are representable as w = ∑_k=0^n-1 z̄^k e^φϕ_k, i.e., a polynomial in z̄ with coefficients which are solutions to the Vekua equation (<ref>). In <cit.>, the authors call these functions meta-analytic and show that when the generalized analytic function coefficients are members of the generalized Hardy spaces from <cit.>, then these classes of functions inherit properties of the Hardy spaces. We extend this representation to the solutions of (<ref>) with A a member of the W^n-1,q Sobolev space, q>2, and B≡ 0, and we show that, with A ∈ W^n-1,∞, the extension of Hardy space boundary behavior is present in this case too. Next, we consider the Schwarz boundary value problem. The Schwarz boundary value problem is a classically studied simplification of the Riemann-Hilbert problem in the complex plane, see <cit.>. The author, in <cit.> and <cit.>, extended the solvable classes of the Schwarz boundary value problem to those with boundary conditions in terms of boundary values in the sense of distributions for solutions of nonhomogeneous Cauchy-Riemann equations ∂ w/∂z̄ = f, and the higher-order generalizations ∂^n w/∂z̄^n = f, with f an integrable function.
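Before turning to the special case discussed next, it may help to record the simplest instance of the representations above; the following is a sketch of ours (not taken from the cited works) for the constant-coefficient case on the unit disk D. Since -1/π ∬_D dξ dη/(ζ - z) = z̄ for |z| < 1 (the Cauchy transform of the disk), the choice A ≡ a ∈ ℂ and B ≡ 0 gives ψ(z) = -1/π ∬_D a/(ζ - z) dξ dη = a z̄, so ∂ψ/∂z̄ = a. Hence every solution of (∂/∂z̄ - a) w = 0 factors as w = e^{a z̄} ϕ with ϕ holomorphic, and every solution of (∂/∂z̄ - a)^n w = 0 has the form w(z) = e^{a z̄} ∑_k=0^n-1 z̄^k ϕ_k(z) with ϕ_k holomorphic, which is the constant-coefficient case of the representation extended in Section 3 below.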
In <cit.>, a special case was considered that employed the structure of n-analytic functions to solve a Schwarz boundary value problem where the f in (<ref>) and (<ref>) is not necessarily integrable, the first result of its kind in the literature. We utilize the construction from the results in <cit.> and the representations we prove for meta-analytic functions to solve a Schwarz boundary value problem where the solution is meta-analytic and all of the boundary conditions are in terms of boundary values in the sense of distributions. The standard technique of solving the Schwarz boundary value problem is to use singular integral operators, see <cit.>,<cit.>, or <cit.>. The novelty of our technique is to avoid the use of these integral operators in the places where their use would require greater boundary regularity. We are not restricted to continuous boundaries and can consider the more general case where the boundary condition is in terms of only boundary values in the sense of distributions.This work extends the results of <cit.>, the classical case of continuous boundary condition, and the case of A≡ 0 from <cit.>. The paper is structured as follows. Section 2 provides definitions of the classes of functions that we will encounter throughout the paper and background results. In Section 3, we prove the generalization of the similarity principle for solutions of ( / - A)^n f = 0 with A nonconstant. Also, we show the improvement that occurs to the representation when we consider the solution in the context of Hardy spaces and that certain desirable boundary behaviors are recovered. In Section 4, we consider a generalization of the Schwarz boundary value problem for meta-analytic functions and with boundary conditions in terms of only boundary values in the sense of distributions of holomorphic functions and solve the problem explicitly. Certain connections between the solutions of the boundary value problem and the Hardy spaces of meta-analytic functions are described. We thank Professor Andrew Raich and Professor Gustavo Hoepfner for their support during the time this work was produced. § DEFINITIONS AND BACKGROUND We represent the unit disk in the complex plane by D, and its boundary by D. We represent the Sobolev spaces of L^q(D) functions with k weak derivatives which are all in L^q(D) by W^k,q(D). We represent the space of distributions on D by 𝒟'( D). By C^0,α(S), we denote the set of α-Hölder continuous functions defined on the set S. We define the classes of functions that are used throughout. We denote by H(D) the set of holomorphic functions on D, i.e., f: D → such that f/ = 0. Let f be a function defined on D. We say that f has a boundary value in the sense of distributions, denoted by f_b ∈𝒟'( D), if, for every φ∈ C^∞(∂ D), the limit ⟨ f_b, φ⟩ := lim_r ↗ 1∫_0^2π f(re^iθ) φ(θ) dθ exists. We define H_b to be that subset of functions in H(D) that have boundary values in the sense of distributions. The next theorem gives a growth condition which guarantees a holomorphic function has a boundary value in the sense of distributions and provides a representation formula which we use in Section <ref>. For f ∈ H(D), the following are equivalent: * For every ϕ∈ C^∞( D), there exists the limit ⟨ f_b, ϕ⟩ := lim_r ↗ 1∫_0^2π f(re^iθ) ϕ(θ) dθ. * There is a distribution f_b ∈𝒟'( D) such that f is the Poisson integral of f_b f(re^iθ) = 1/2π⟨ f_b, P_r(θ - ·) ⟩, where P_r(θ) = 1-r^2/1-2rcos(θ) +r^2 is the Poisson kernel on D. 
* There are constants C>0, α≥ 0, such that |f(re^iθ)| ≤C/(1-r)^α, for 0 ≤ r < 1. For 0 < p < ∞, we denote by H^p(D) the holomorphic Hardy spaces of functions f ∈ H(D) such that ||w||_H^p(D):= (sup_0< r < 1∫_0^2π |w(re^iθ)|^p dθ)^1/p < ∞. The functions in H^p(D), 0 < p ≤∞, satisfy (3) in Theorem <ref> So the functions in H^p(D) have boundary values in the sense of distributions, but it is pointed out in <cit.> that ∪_0< p ≤∞ H^p(D) is a proper subset of H_b (see <cit.> for an example of h ∈ H_b and h∉∪_0< p ≤∞ H^p(D)). A classical result about the boundary behavior of functions in H^p(D) is the following. A function w(z) ∈ H^p(D), 0 < p < ∞, has nontangential boundary values w_+(e^iθ) ∈ L^p(∂ D) at almost all points e^iθ of the circle ∂ D, lim_r↗ 1∫_0^2π |w(re^iθ)|^p dθ = ∫_0^2π |w_+(e^iθ)|^p dθ, and lim_r↗ 1∫_0^2π |w(re^iθ)- w_+(e^iθ)|^p dθ = 0. For n a positive integer, we denote by H_n(D) the set of functions f: D → that satisfy ^n f/^n = 0. For n a positive integer and 0 < p < ∞, we define the poly-Hardy space to be the set of functions f that satisfy ^n f/^n = 0. and ||f||_n,p := ∑_k=0^n-1( sup_0 < r < 1∫_0^2π| ^k f/^k(re^iθ) |^p dθ)^1/p < ∞. For n a positive integer, A ∈ W^n-1,q(D), q>2, and 0 < p < ∞, we define the meta-Hardy space to be the set of functions f: D →ℂ that satisfy (/- A)^n f= 0, and ||f||_n,p := ∑_k=0^n-1( sup_0 < r < 1∫_0^2π| ^k f/^k(re^iθ) |^p dθ)^1/p < ∞. Clearly, H^n,p_0(D) = H^n,p(D). We recall a well-known fact about nonhomogeneous Cauchy-Riemann equations. For any f ∈ L^1(D), g(z) = -1/π∬_Df(ζ)/ζ - z dξ dη, where ζ = ξ + i η, solves g/ = f, and for f ∈ L^q(D), q>2, g ∈ C^0,α(D), α = q-2/q. There exist alternative integral representations for solutions to nonhomogeneous Cauchy-Riemann equations that satisfy other conditions. One that is useful when solving Schwarz boundary value problems, see Section <ref>, is the following. For any f ∈ L^1(D), g(z) = -1/π∬_D( f(ζ)/ζζ + z/ζ - z + f(ζ)/ζ1+zζ/1-zζ) dξ dη, where ζ = ξ + i η, solves g/ = f, and g(0) = 0. Note that the two integral representations above differ by a holomorphic function. We will later appeal to results from <cit.> that improve the regularity of the first integral representation from Theorem <ref>, and these results will extend to the integral representation from Theorem <ref> by this fact. § REPRESENTATION THEOREMS The next theorem is a classic result that has been widely communicated by M. Balk. Every function f that satisfies ^n f/^n = 0 is representable by f(z) = ∑_k=0^n-1^k f_k(z), with f_k ∈ H(D). The classic result is improved in <cit.> for functions in the spaces . For n a positive integer and 0 < p < ∞, every f ∈ is representable as f(z) = ∑_k=0^n-1^k f_k(z), with f_k ∈ H^p(D). The next two theorems extend Theorem 2.1 from <cit.> where A is a complex constant, with a slightly different form. We prove the first using the argument of Balk from <cit.> for the A≡ 0 case. We prove the second using the argument from the proof of Theorem 2.1 in <cit.>. For n a positive integer and A ∈ W^n-1,q(D), q>2, every solution w of ( / - A)^n w = 0 has the form w(z) = e^ψ(z)∑_k=0^n-1^k w_k, where ψ(z) = -1/π∬_D A(ζ)/ζ - z dξ dη and w_k ∈ H(D), for every k. The case n = 1 is a classic and can be found in <cit.>. We proceed by induction. Suppose that the theorem holds for all n such that 1≤ n ≤ m-1. Let w be a solution to ( / - A)^m w = 0. So, f = ( / - A) w solves ( / - A)^m-1 f = 0, and f = ( / - A)w = e^ψ(z)∑_k=0^m-2^k w_k with w_k ∈ H(D), 0 ≤ k ≤ m-2. 
Observe that g = e^ψ(z)∑_k=0^m-21/k+1^k+1 w_k also solves ( / - A)^m g = 0 by direct computation. Consider ( / - A)(w - e^ψ(z)∑_k=0^m-21/k+1^k+1 w_k) = e^ψ(z)∑_k=0^m-2^k w_k - ( / - A)(e^ψ(z)∑_k=0^m-21/k+1^k+1 w_k) = e^ψ(z)∑_k=0^m-2^k w_k - (A(z)e^ψ(z)∑_k=0^m-21/k+1^k+1 w_k + e^ψ(z)∑_k=0^m-2^k w_k -A(z)e^ψ(z)∑_k=0^m-21/k+1^k+1 w_k ) = 0. Hence, the difference is a solution to the n = 1 case, and w - e^ψ∑_k=0^m-21/k+1^k+1 w_k = e^ψϕ_o, where ϕ_0 ∈ H(D). Let ϕ_k = 1/k+1w_k for 0≤ k ≤ m-2, and we have w = e^ψ∑_ℓ=0^m-1^ℓϕ_ℓ, where ϕ_ℓ∈ H(D), for 0≤ℓ≤ m-1. For n a positive integer, 0 < p < ∞, and A ∈ W^n-1,∞(D), every function w ∈ H^n,p_A(D) has the form w(z) = e^ψ(z)∑_k=0^n-1^k w_k(z), where ψ(z) = -1/π∬_D A(ζ)/ζ - z dξ dη and w_k ∈ H^p(D), for every k. Let f ∈. So, by Theorem <ref>, f(z) = e^ψF(z), where F(z) = ∑_k = 0^n-1^k f_k(z), f_k ∈ H(D), and ψ(z) = -1/π∬_D A(ζ)/ζ - z dξ dη. So, [ f(z); f/(z); ⋮; ^n-1 f/^n-1(z) ] = [ e^ψ(z)F(z); /( e^ψ(z)F(z) ); ⋮; ^n-1/^n-1( e^ψ(z)F(z) ) ] = e^ψ(z)[ 1 0 0 ⋯ 0; A 1 0 ⋯ 0; A^2 + A/ 2A 1 ⋯ 0; ⋮ ⋯ ⋱ ⋮; P_(n-1,1)(A) P_(n-1,2)(A) P_(n-1,3)(A) ⋯ 1 ][ F(z); F/(z); ^2 F/^2(z); ⋮; ^n-1 F/^n-1(z) ], where we note that the matrix is lower triangular, each P_(k,j)(A), where (k,j) denotes that the polynomial is the (k,j) entry of the matrix, is a polynomial of order at most max{k-1,j-1} in A and its derivatives up to order at most max{k-1, j - 1}, and the entries on the diagonal are all 1. Hence, the matrix, which we will call [A], is invertible, and [ F(z); F/(z); ^2 F/^2(z); ⋮; ^n-1 F/^n-1(z) ] = e^-ψ(z) [A]^-1[ f(z); f/(z); ⋮; ^n-1 f/^n-1(z) ]. Since ψ, A, and all derivatives of A up to order n -1 are bounded, it follows that || ^j F/^j||^p_H^p(D) = sup_0 < r < 1∫_0^2π| ^j F/^j(re^iθ) |^p dθ = sup_0 < r < 1∫_0^2π|e^-ψ(re^iθ)∑_k = 0^n-1 [A]^-1_j,k(re^iθ) ^k f/^k(re^iθ) |^p dθ ≤ C ∑_k = 0^n-1sup_0 < r < 1∫_0^2π| ^k f/^k(re^iθ) |^p dθ < ∞, for each k. Hence, F ∈. By Theorem <ref>, F can be represented as F(z) = ∑_k = 0^n-1^k f_k(z), where each f_k ∈ H^p(D). Thus, f(z) = e^ψ(z)∑_k = 0^n-1^k f_k(z), where each f_k ∈ H^p(D). While the results in Section <ref> are in terms of boundary values in the sense of distributions, we show that the representation from Theorem <ref> allows us to prove that the functions in have nontangential boundary values almost everywhere and the functions converge to those boundary values in the L^p norm. For n a positive integer, 0< p < ∞, and A ∈ W^n-1,∞(D), every f ∈ has a nontangential boundary value f_+ ∈ L^p( D) almost everywhere on D and lim_r↗ 1∫_0^2π| f_+(e^iθ) - f(re^iθ) |^p dθ = 0. To show that the nontangential boundary value exists and the function converges to that nontangential boundary value in the L^p( D) norm, we follow the argument of Theorem 2.3 from <cit.>. Let f ∈ have the representation f(z) = e^ψ(z)∑_k=0^n-1^k f_k(z) from Theorem <ref>. By Theorem <ref>, each f_k ∈ H^p(D) has a nontangential boundary value f_k+ almost everywhere on D, f_k+∈ L^p( D), and lim_r ↗ 1∫_0^2π |f_k+(e^iθ) - f_k(re^iθ)| dθ = 0. Since e^ψ and ^k are continuous up to the boundary, it follows that f_+(e^iθ) = e^ψ(e^iθ)∑_k=0^n-1 e^-ikθ f_k+(e^iθ), and ∫_0^2π |f_+(e^iθ)|^p dθ = ∫_0^2π |e^ψ(e^iθ)∑_k=0^n-1 e^-ikθ f_k+(e^iθ)|^p dθ≤ C ∑_k=0^n-1∫_0^2π |f_k+(θ)|^p dθ < ∞, where C is a constant that only depends on p, n, and A. 
Now, observe ∫_0^2π |f_+(e^iθ) - f(re^iθ)|^p dθ = ∫_0^2π|e^ψ(e^iθ)∑_k=0^n-1 e^-ikθ f_k+(e^iθ) - e^ψ(re^iθ)∑_k=0^n-1 r^k e^-ikθ f_k(re^iθ) |^p dθ ≤ C∫_0^2π|e^ψ(e^iθ)∑_k=0^n-1 e^-ikθ f_k+(e^iθ) - e^ψ(e^iθ)∑_k=0^n-1 e^-ikθ f_k(re^iθ) |^p dθ + C∫_0^2π|e^ψ(e^iθ)∑_k=0^n-1 e^-ikθ f_k(re^iθ) - e^ψ(re^iθ)∑_k=0^n-1 r^k e^-ikθ f_k(re^iθ) |^p dθ = C∫_0^2π|e^ψ(e^iθ)∑_k=0^n-1( f_k+(e^iθ)- f_k(re^iθ) )|^p dθ + C∫_0^2π|∑_k=0^n-1( e^ψ(e^iθ) - e^ψ(re^iθ) r^k) f_k(re^iθ) |^p dθ ≤ C ∑_k=0^n-1∫_0^2π| f_k+(e^iθ)- f_k(re^iθ) |^p dθ + C_k∑_k=0^n-1∫_0^2π |e^ψ(e^iθ) - e^ψ(re^iθ)|^p |f_k(re^iθ)|^p dθ. Note that the left sum in the right hand side approaches zero as r ↗ 1, because of (<ref>), and the right sum in the right hand side approaches zero as r ↗ 1 by the Dominated Convergence Theorem. So, lim_r↗ 1∫_0^2π| f_+(e^iθ) - f(re^iθ) |^p dθ = 0. Next, we show that certain classes of solutions to equation (<ref>) have boundary values in the sense of distributions. We do so initially by proving a fact about integrable functions. Every f ∈ L^1(D) with nontangential boundary value f_+∈ L^1( D) such that lim_r↗ 1∫_0^2π| f(re^iθ) - f_+(e^iθ) | dθ = 0 has a boundary value in the sense of distributions f_b and f_+=f_b as distributions. For φ∈ C^∞( D) and 0 < r < 1, observe that |∫_0^2π f(re^iθ) φ(θ) dθ| ≤sup_θ|φ| ∫_0^2π |f(re^iθ)| dθ. So, |⟨ f_b, φ⟩| : = |lim_r ↗ 1∫_0^2π f(re^iθ) φ(θ) dθ| ≤sup_θ|φ| lim_r ↗ 1∫_0^2π |f(re^iθ)| dθ = sup_θ|φ|∫_0^2π |f_+(e^iθ)| dθ< ∞, since f converges to f_+ in the L^1 norm. Also, we have |⟨ f_b - f_+, φ⟩| ≤lim_r↗ 1∫_0^2π| f(re^iθ) - f_+(e^iθ) | dθ = 0. For n a positive integer and A ∈ W^n-1,∞(D), every f ∈ H^n,1_A(D) has a boundary value in the sense of distributions f_b, and f_b = f_+ as distributions, where f_+ is the nontangential boundary value of f. For n a positive integer and A ∈ W^n-1,∞(D), every f ∈ H^n,1_A(D) has a boundary value in the sense of distributions f_b, and f_b = f_+ as distributions, where f_+ is the nontangential boundary value of f. For φ∈ C^∞( D) and 0 < r < 1, observe that |∫_0^2π f(re^iθ) φ(θ) dθ| ≤sup_θ|φ| ∫_0^2π |f(re^iθ)| dθ. So, |⟨ f_b, φ⟩| : = |lim_r ↗ 1∫_0^2π f(re^iθ) φ(θ) dθ| ≤sup_θ|φ| lim_r ↗ 1∫_0^2π |f(re^iθ)| dθ < ∞, by Theorem <ref>. Also by Theorem <ref>, we have |⟨ f_b - f_+, φ⟩| ≤lim_r↗ 1∫_0^2π| f(re^iθ) - f_+(θ) | dθ = 0. Now, we restrict our view to A ∈ C^∞(D). We know that if A ∈ C^∞(D), then A ∈ W^n-1,∞(D) and from <cit.>, ψ(z) = -1/π∬_D A(ζ)/ζ - z dξ dη∈ C^∞(D). This will lead to a quick proof of the next result. For n a positive integer and A ∈ C^∞(D), every solution f of (/- A)^n f= 0 with representation f(z) = e^ψ(z)∑_k=0^n-1^k f_k(z), where ψ(z) = -1/π∬_D A(ζ)/ζ - z dξ dη and f_k ∈ H_b, for every k, has a boundary value in the sense of distributions. Observe that, for 0 < r < 1 and φ∈ C^∞( D), we have ∫_0^2π f(re^iθ) φ(θ) dθ = ∫_0^2π e^ψ(re^iθ)∑_k=0^n-1 r^k e^-ikθ f_k(re^iθ) φ(θ) dθ = ∑_k=0^n-1 r^k ∫_0^2π e^ψ(re^iθ)-ikθ f_k(re^iθ) φ(θ) dθ. Since each f_k ∈ H_b and e^ψ is C^∞( D) for every r, 0≤ r ≤ 1, it follows that lim_r↗ 1|∫_0^2π f_k(re^iθ) e^ψ(re^iθ)-ikθφ(θ) dθ|< ∞, for each k. Thus, lim_r ↗ 1|∫_0^2π f(re^iθ) φ(θ) dθ| ≤lim_r ↗ 1∑_k=0^n-1|∫_0^2π e^ψ(re^iθ)-ikθ f_k(re^iθ) φ(θ) dθ| < ∞. For n a positive integer, 0< p ≤ 1, and A ∈ C^∞(D), every f ∈ has a boundary value in the sense of distributions. § SCHWARZ BOUNDARY VALUE PROBLEM In this section, we exploit a construction from <cit.> to solve a higher-order Schwarz boundary value problem for meta-analytic functions. From <cit.>, we have the following. 
For n a positive integer, the Schwarz problem ^n f_n/^n = 0 { ( ^k f_n/^k)_b } = { (f_n-k)_b} = { (h_n-1-k)_b - ∑_ℓ= 1^n-k-1 (-1)^ℓ/ℓ! e^-iℓ(·) (f_n-ℓ)_b} { ^k f_n/^k(0) } = c_n-1-k where each f_k solves the Schwarz problem f_k/ = f_k-1 { (f_k)_b} = { (h_k-1)_b - ∑_ℓ= 1^k-1 (-1)^ℓ/ℓ! e^-iℓ(·) (f_k-ℓ)_b } { f_k(0)} = c_k-1 with h_k-1∈ H_b, and c_k-1∈, for each k with 1 ≤ k ≤ n, is solved by f_n(z) = ic_n-1 - I_n-1 + 1/2π⟨ (h_n-1)_b, P_r(θ - ·) ⟩ - ∑_ℓ = 1^n-1(-1)^ℓ/ℓ!^ℓ f_n-ℓ(z), where each f_k is described by f_k(z) = ic_k-1 - I_k-1 + 1/2π⟨ (h_k-1)_b, P_r(θ - ·) ⟩ - ∑_ℓ = 1^k-1(-1)^ℓ/ℓ!^ℓ f_k-ℓ(z), with I_k-1 := i/2π⟨{(h_k-1)_b}, 1⟩, for 1 ≤ k ≤ n, and f_0 ≡ 0. Theorem <ref> extends Theorem 3.4 in <cit.> where the boundary condition is required to be in terms of a continuous function. While the boundary condition is not as straightforward as the one in Theorem 3.4 of <cit.>, the boundary condition is the direct result of the natural extension of the representation formula found in Theorem 3.4 of <cit.> when utilizing the Poisson integral representation of boundary values in the sense of distributions of holomorphic functions from Theorem 3.1 of <cit.>. We appeal to the constructed sequence of functions {f_k} in Theorem <ref> to extend this theorem to the meta-analytic setting. The following theorem extends the results of Theorem 3.2 and Theorem 3.3 in <cit.>, in the case where a ≡ 1 and b ≡ 0 in those theorems, to solve a Schwarz boundary value problem for meta-analytic functions with boundary condition only in terms of boundary values in the sense of distributions. For n a positive integer and A ∈ W^n-1,q(D), q>2, the Schwarz problem (/ - A)^n w = 0 { ( ^k w/^k/ e^ψ)_b } = { (f_n-k)_b} = { (h_n-1-k)_b - ∑_ℓ= 1^n-k-1 (-1)^ℓ/ℓ! e^-iℓ(·) (f_n-ℓ)_b} { (^k w/^k/e^ψ)(0) } = c_n-1-k where each f_k solves the Schwarz problem f_k/ = f_k-1 { (f_k)_b} = { (h_k-1)_b - ∑_ℓ= 1^k-1 (-1)^ℓ/ℓ! e^-iℓ(·) (f_k-ℓ)_b } { f_k(0)} = c_k-1 with h_k-1∈ H_b, and c_k-1∈, for each k with 1 ≤ k ≤ n, is solved by w(z) = e^ψ(z)[ic_n-1 - I_n-1 + 1/2π⟨ (h_n-1)_b, P_r(θ - ·) ⟩ - ∑_ℓ = 1^n-1(-1)^ℓ/ℓ!^ℓ f_n-ℓ(z)], where each f_k is described by f_k(z) = ic_k-1 - I_k-1 + 1/2π⟨ (h_k-1)_b, P_r(θ - ·) ⟩ - ∑_ℓ = 1^k-1(-1)^ℓ/ℓ!^ℓ f_k-ℓ(z), with I_k-1 := i/2π⟨{(h_k-1)_b}, 1⟩, for 1 ≤ k ≤ n, and f_0 ≡ 0 and ψ(z) = -1/π∬_D A(ζ)/ζ - z dξ dη. The existence of such a sequence {f_k} is proved in <cit.>. By direct computation, w is a solution of the equation (<ref>). Conditions (<ref>) and (<ref>) are satisfied by w/e^ψ by observing that w/e^ψ is a solution of the Schwarz problem solved in Theorem <ref>. From <cit.>, if the h_k in the hypothesis of Theorem <ref> are such that h_k ∈ H^p_k(D), then the solution w in Theorem <ref> is an element of H^n,p(D), where p:= min_k{p_k}. Consequently, if the h_k in the hypothesis of Theorem <ref> are chosen so that h_k ∈ H^p_k(D) and A ∈ W^n-1,∞(D), then the solution w in Theorem <ref> is an element of H^n,p_A(D), where p:= min_k{p_k}. In the case of A ∈ C^∞(D), we solve a Schwarz boundary value problem where the solution is meta-analytic and the boundary condition and the pointwise condition at z = 0 can be in terms of the solution, not its poly-analytic factor. For n a positive integer and A ∈ C^∞(D), the Schwarz problem (/ - A)^n w = 0, { ( (/-A)^k w )_b } = { e^ψ(e^i(·))(ic_n-1-k - I_n-1-k+(h_n-1-k)_b - ∑_ℓ= 1^n-k-1 (-1)^ℓ/ℓ! e^-iℓ(·) (f_n-ℓ)_b)}, { ((/-A)^k w)(0) } = e^ψ(0)c_n-1-k where each f_k solves the Schwarz problem f_k/ = f_k-1 { (f_k)_b} = { (h_k-1)_b - ∑_ℓ= 1^k-1 (-1)^ℓ/ℓ! 
e^-iℓ(·) (f_k-ℓ)_b } { f_k(0)} = c_k-1 with h_k-1∈ H_b, and c_k-1∈, for each k with 1 ≤ k ≤ n, is solved by w(z) = e^ψ(z)[ic_n-1 - I_n-1 + 1/2π⟨ (h_n-1)_b, P_r(θ - ·) ⟩ - ∑_ℓ = 1^n-1(-1)^ℓ/ℓ!^ℓ f_n-ℓ(z)], where each f_k is described by f_k(z) = ic_k-1 - I_k-1 + 1/2π⟨ (h_k-1)_b, P_r(θ - ·) ⟩ - ∑_ℓ = 1^k-1(-1)^ℓ/ℓ!^ℓ f_k-ℓ(z), with I_k-1 := i/2π⟨{(h_k-1)_b}, 1⟩, for 1 ≤ k ≤ n, and f_0 ≡ 0 and ψ(z) = -1/π∬_D ( A(ζ)/ζζ + z/ζ - z + A(ζ)/ζ1+zζ/1-zζ) dξ dη. The constructed w solves (<ref>) by the same direct computation as in the proof of Theorem <ref>. Since A∈ C^∞(D) implies e^ψ∈ C^∞(D), it follows, by Theorem <ref>, that (e^ψ)_b = e^ψ|_ D. Since (/-A)^k w = e^ψ(z)[ic_n-1-k - I_n-1-k + 1/2π⟨ (h_n-1-k)_b, P_r(θ - ·) ⟩ - ∑_ℓ = 1^n-1-k(-1)^ℓ/ℓ!^ℓ f_n-ℓ(z)], it follows that ((/-A)^k w)_b = ( e^ψ(z)[ic_n-1-k - I_n-1-k + 1/2π⟨ (h_n-1-k)_b, P_r(θ - ·) ⟩ - ∑_ℓ = 1^n-1-k(-1)^ℓ/ℓ!^ℓ f_n-ℓ(z)])_b = e^ψ(e^i(·))(ic_n-1-k - I_n-1-k+(h_n-1-k)_b - ∑_ℓ = 1^n-k-1(-1)^ℓ/ℓ! e^-iℓ(·) (f_n-ℓ)_b) and {( (/-A)^k w )_b } = { e^ψ(e^i(·))(ic_n-1-k - I_n-1-k+(h_n-1-k)_b - ∑_ℓ = 1^n-k-1(-1)^ℓ/ℓ! e^-iℓ(·) (f_n-ℓ)_b)}. Hence, (<ref>) is satisfied. Now, since e^ψ(0) is real, it follows that ((/-A)^k w) (0) = e^ψ(0)[ic_n-1-k - I_n-1-k + 1/2π⟨ (h_n-1-k)_b, 1 ⟩ - ∑_ℓ = 1^n-1-k(-1)^ℓ/ℓ! 0^ℓ f_n-ℓ(0)] = e^ψ(0)[ic_n-1-k + {1/2π⟨ (h_n-1-k)_b, 1 ⟩}] = e^ψ(0){1/2π⟨ (h_n-1-k)_b, 1 ⟩}+ ie^ψ(0)c_n-1-k and {((/-A)^k w) (0)} = e^ψ(0)c_n-1-k. Therefore, (<ref>) is satisfied.
http://arxiv.org/abs/2307.04364v2
20230710064055
Probe hyperon electric dipole moments with full angular analysis
[ "Jinlin Fu", "Hai-Bo Li", "Jian-Peng Wang", "Fu-Sheng Yu", "Jianyu Zhang" ]
hep-ex
[ "hep-ex" ]
[email protected] [email protected] [email protected] ^1School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China ^2Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, People's Republic of China ^3MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China ^4School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, People's Republic of China ^5Center for High Energy Physics, Peking University, Beijing 100871, People's Republic of China The electric dipole moment (EDM) of elementary particles, arising from flavor-diagonal CP violation, serves as a powerful probe for new physics beyond the Standard Model (SM) and holds the potential to provide novel insights into unraveling the enigma of the matter-dominated universe. Hyperon EDMs are a largely unexplored territory. In this paper, we present a comprehensive angular analysis that focuses on entangled hyperon-antihyperon pairs in J/ψ decays for the indirect extraction of hyperon EDMs. The statistical sensitivities are investigated for BESIII and the proposed Super Tau-Charm Facility (STCF). Leveraging the statistics from the BESIII experiment, the estimated sensitivity for the Λ EDM can reach an impressive level of 10^-19 e cm, demonstrating a three-order-of-magnitude improvement over the only existing measurement, performed in a fixed-target experiment at Fermilab with similar statistics. The estimated sensitivities for the Σ^+, Ξ^-, and Ξ^0 hyperons at the same level of 10^-19 e cm will mark the first-ever achievement, and the latter two will be the first exploration of hyperons with two strange valence quarks. The EDM measurements for hyperons conducted at the BESIII experiment will be a significant milestone and serve as a litmus test for new physics such as SUSY and the left-right symmetric model. Furthermore, at the STCF experiment, the sensitivity of hyperon EDM measurements can be further enhanced by two orders of magnitude. Additionally, this angular analysis enables the determination of CP violation in hyperon decays, the effective weak mixing angle, and the beam polarization. Probe hyperon electric dipole moments with full angular analysis Jianyu Zhang^1 August 12, 2023 The measurement of a particle's permanent electric dipole moment (EDM), which violates both Parity (P) and Time reversal symmetries, and consequently Charge Parity (CP) symmetry according to the CPT theorem, provides a robust test within and beyond the Standard Model (SM). It serves as a sensitive probe for new physics, especially models that could induce lower-loop or flavor-diagonal CP Violation (CPV), in the multi-100 TeV mass range <cit.>. Neutron and ^199Hg EDM measurements have set an upper limit on the SM QCD effective vacuum phase of θ̅⪅10^-10, yet the SM permits any value within the [0,2π] range. This conundrum is commonly known as the strong CP problem <cit.>. Examining EDMs within hadronic systems serves as a means to either corroborate or disprove the θ̅ explanation and, in conjunction with the investigation of leptonic EDMs, constitutes an essential approach for the pursuit of new physics <cit.>. Investigating EDMs in baryonic and light nuclear systems offers a distinct opportunity to uncover diverse CPV models <cit.>.
Within the hyperon system, the strange quark may exhibit a special interaction with new physics, potentially resulting in a substantial EDM effect. This could suggest that the new physics possesses a specific flavor structure. Another crucial aspect is that a single EDM measurement alone is insufficient to distinguish between various sources of CPV beyond the SM. Therefore, it becomes essential to employ complementary observations of different systems, such as hadrons, atoms, nuclei, and molecules, in order to effectively discriminate between these sources <cit.>. Despite more than 70 years of researches in the pursuit of EDMs, the Λ hyperon remains the sole member of the hyperon family for which the upper limit of EDM, 1.5× 10^-16 e cm, has been measured utilizing spin precession at Fermilab <cit.>. The indirectly predicted absolute value of the Λ EDM, based on the experimental upper limit of the neutron EDM, is < 4.4× 10^-26 e cm <cit.>. There are no indirect predictions for hyperons with two or three strange valence quarks. A variety of experimental approaches have been proposed, such as Λ EDM measurement utilizing spin precession induced by dipole magnetic at the LHCb experiment <cit.>, Ξ^+ and Ω^+ EDM measurements employing spin precession induced by bent crystal at a fixed-target experiment <cit.>. Due to the short lifetimes of hyperons, conducting direct measurements of EDM through the spin precession presents significant challenges. Preparing sources of various hyperons for EDM measurements in a single fixed-target experiment is also challenging, due to different production mechanisms and lifetimes of hyperons. Unlike fixed-target experiments and hadron collider experiments, a large number of entangled Λ, Σ, and Ξ hyperon-antihyperon pairs can be readily produced and reconstructed from charmonium J/ψ decays at Tau-Charm factories. The substantial production cross-section of J/ψ in e^+e^- annihilation, along with the large branching fraction of J/ψ to hyperon-antihyperon pairs and the outstanding performance of modern detectors, ensure that the reconstruction of hyperon-antihyperon pairs is usually achieved with a purity greater than 95%. This capability allows for the search of subtle violations of conservation laws <cit.>. The production of entangled hyperon-antihyperon pairs, with the electric dipole form factor embeded in the P and CP violating term of the Lorentz invariant amplitude, offers a distinctive opportunity for indirectly extracting hyperon EDM. The electric dipole form factor is generally a complex number for non-zero timelike momentum transfer, and becomes EDM in the zero momentum transfer limit. In practice, this kind of form factor can be treated as an EDM assuming that the momentum transfer dependence is negligible due to an unknown extension to the zero region. This Letter reports a proposal to extract the hyperon EDMs through full angular analysis. EDM measurements will be discussed in e^+e^- collision within the region of J/ψ resonance, considering two different types: (i) J/ψ→ BB where B are Λ, Σ^+ hyperons. (ii) J/ψ→ BB where B are Ξ^-, Ξ^0. Sequential hyperon decays are reconstructed as Λ→ pπ^-, Σ^+→ pπ^0, Ξ^-→Λπ^-, and Ξ^0→Λπ^0, correspondingly. A comprehensive angular analysis using multi-dimensional information in the full decay chain yields enhanced sensitivity for EDM measurement when compared to one-dimensional analysis, such as a CP-odd triple-product moment encompassing hyperons Λ, Σ^+, Ξ^- and Ξ^0 <cit.>. 
Scenarios for the BESIII experiment and a proposed future Super Tau-Charm Facility (STCF) are investigated. The first experiment has already collected the world's largest dataset of 10 billion J/ψ particles <cit.>, while the latter one is designed to collect approximately 3.4×10^12 J/ψ particles per year <cit.>. Charmonium J/ψ is produced via e^+e^- annihilation, where interference between the contributions from virtual γ and Z-boson exchanges leads to a small longitudinal polarization of J/ψ meson. The leading contribution from Z-boson exchange in SM, which violates parity symmetry, is suppressed by a factor of M^2_J/ψ/m^2_Z. Polarization effects are encoded in BB hyperon pair spin density matrix defined as R(λ_1,λ_2;λ^'_1,λ^'_2)∝∑_m,m^' ρ_m,m^'d^j=1_m,λ_1-λ_2(θ)d^j=1_m^',λ^'_1-λ^'_2(θ) ×ℳ_λ_1,λ_2ℳ^*_λ^'_1,λ^'_2δ_m,m^', where the indices m^(') and λ^(')_1,2 represent the helicities of the J/ψ meson and B(B) hyperons, respectively. The ρ_m,m^' is spin density matrix of J/ψ meson, d^j_m^('),λ^(')_1-λ^(')_2(θ) is Wigner rotational function, and ℳ_λ^(')_1,λ^(')_2 is the helicity amplitude of J/ψ→ BB. The θ represents the angle between the momentum of the hyperon B, denoted as p̂, and the motion of electron beam Z axis as shown in Fig <ref>. The helicity m^(') is denoted as +, -, and 0 corresponding to the helicity states of J/ψ meson. The 3×3 matrix ρ_m,m^' is reduced to a 2×2 matrix due to the component ρ_00 suppressed by a factor of m^2_e/M^2_J/ψ. The Lorentz invariant helicity amplitude in J/ψ→ BB decay with four independent form factors fixed at q^2=M^2_J/ψ is written as <cit.> ℳ_λ_1,λ_2=ϵ_μ(λ_1-λ_2)u̅(λ_1,p_1) (F_Vγ^μ+i/2mσ^μνq_νH_σ +γ^μγ^5F_A+σ^μνγ^5q_νH_T )v(λ_2,p_2), where m is B hyperon mass, and p_1 and p_2 are four momentum of B and B, respectively. Processes involving a flavor-diagonal CP-violating vertex contribute to the electric dipole form factor H_T. An effective Lagrangian, encompassing all of these CP-violating operators, plays a crucial role as a bridge between hyperon EDM and the fundamental theories. The diverse extensions of the SM result in distinct contributions to these operators, leading to different impact on the hyperon EDM. Taking the Λ hyperon as an example, there are several expressions in the literature for evaluating the contributions arising from the QCD θ term <cit.>, quark chromo-electric dipole moment (qCEDM), four-quark operators <cit.>, and the quark EDM (qEDM) <cit.>. Hyperon EDM measurements offer direct sensitivity to the contributions from qEDM and qCEDM, owing to the suppressed effects of high-dimensional operators and the experimental constraint imposed by neutron EDM measurements on the QCD θ term. The flavour-diagonal CP-violating contributions in the SM are extremly tiny, while new physics, such as SUSY and left-right symmetrical model, may give large enhancement on hyperon EDM as discussed extensively by analysing EDM results from electron, neutron and ^199Hg systems <cit.>. The unexpectedly large hyperon EDM may suggest a special coupling between the strange quark and new physics. Consequently, in the decay chain under consideration, we provide an opportunity to explore these possible effects in the hyperon family by relating H_T to hyperon EDM contribution <cit.>, H_T=2e/3M^2_J/ψg_Vd_B. The form factor H_T here, in fact, varies with q^2. Assuming q^2 dependence is ignored, d_B is then EDM of hyperon B. Considering the dispersive part of time-like reaction, the imaginary part of H_T is also investigated in this angular analysis. 
The aforementioned discussions will also be applicable to the hyperons Σ and Ξ in this Letter. The form factors F_V and H_σ are related to the redefined G_1,2 as described in <cit.> F_V=G_1-4m^2(G_1-G_2)/(p_1-p_2)^2,  H_σ=4m^2(G_1-G_2)/(p_1-p_2)^2. The form factors G_1 and G_2 are linked to the experimental observables α_J/ψ, ΔΦ, and Γ(J/ψ→ BB) through the relations α_J/ψ=M^2J/ψ|G1|^2-4m^2|G_2|^2/M^2J/ψ|G1|^2+4m^2|G_2|^2 and G_1/G_2=|G_1/G_2|e^-iΔΦ <cit.>. The form factor F_A, primarily arising from Z-boson exchange between cc and light quark pairs qq within the SM can be related to the effective weak mixing angle θ^eff_W through F_A≈ -1/6Dg_Vg^2/4cos^2θ^eff_W1-8sin^2θ^eff_W/3/m^2_Z, which leads to a parity violation effect estimated to be the order of 10^-6, where g_V is defined as ⟨0|c̅γ^μc|J/ψ⟩=g_Vϵ^μ, D is a non-perturbative parameter that is fitted from data <cit.>. By conducting precise measurements utilizing large statistics, it becomes possible to extract the weak mixing angle sin^2θ^eff_W which is essential in testing the SM, particularly in regards to the effects derived from quantum corrections of heavy particles, such as the Higgs boson and the top quark, at the loop level <cit.>. The longitudinal polarization of the J/ψ meson, denoted as P_L, is defined as the relative difference between the diagonal elements of the density matrix, ρ_++ and ρ_–. Moreover, in experiment such as BESIII where there is no beam polarization, the polarization P_L is closely connected to the left-right asymmetry 𝒜^0_LR, P_L=𝒜^0_LR=σ_R-σ_L/σ_R+σ_L=-sin^2θ^eff_W+3/8/2sin^2θ^eff_Wcos^2θ^eff_WM^2_J/ψ/m^2_Z. Here, σ_R(L) represents the J/ψ cross section with right-handed(left-handed) electrons. This asymmetry induced by the effective weak mixing angle θ^eff_W and hence suppressed to the order of 10^-4 <cit.>. When there is longitudinally polarized electron beam polarization with magnitude of P_e, as in the experiment of STCF <cit.>, the P_L can be replaced by ξ ξ=σ_R(1+P_e)/2-σ_L(1-P_e)/2/σ_R(1+P_e)/2+σ_L(1-P_e)/2=𝒜^0_LR+P_e/1+P_e𝒜^0_LR≈ P_e. The longitudinally polarized electron beam instead of Z- boson exchange may play a crucial role in enhancing the sensitivity of measurements. Based on the rotational symmetry, helicity representation of the complete angular distribution for type (ii) is given dσ/dΩ∝∑_[λ] R(λ_1,λ_2;λ^'_1,λ^'_2) D^*j=1/2_λ_1,λ_3(ϕ_1,θ_1)D^j=1/2_λ^'_1,λ^'_3(ϕ_1,θ_1)ℋ_λ_3ℋ^*_λ^'_3 D^*j=1/2_λ_2,λ_4(ϕ_2,θ_2)D^j=1/2_λ^'_2,λ^'_4(ϕ_2,θ_2)ℋ_λ_4ℋ^*_λ^'_4 D^*j=1/2_λ_3,λ_5(ϕ_3,θ_3)D^j=1/2_λ^'_3,λ_5(ϕ_3,θ_3)ℱ_λ_5ℱ^*_λ_5 D^*j=1/2_λ_4,λ_6(ϕ_4,θ_4)D^j=1/2_λ^'_4,λ_6(ϕ_4,θ_4)ℱ_λ_6ℱ^*_λ_6 where [λ] is a set containing all of possible helicity symbols appearing in the summation like λ_1,λ_2,λ^'_1,λ^'_2.... Polar and azimuthal angles θ_1,ϕ_1 and θ_2,ϕ_2 parameterize momenta directions of Λ and Λ in the frame of Ξ and Ξ, respectively. Polar and azimuthal angles θ_3,ϕ_3 and θ_4,ϕ_4 are that of proton and anti-proton in the frame of Λ pairs. The definitions of these helicity angles are illustrated in Fig <ref>, and analogous definitions are employed for the subsequent decay of antiparticles. Helicity amplitudes ℋ_λ_i and ℱ_λ_i are used to parameterize dynamics of weak decay Ξ→Λπ and Λ→ pπ, and corresponding charge conjugated process are denoted by ℋ and ℱ with bar. The formula for type (i) is obtained by retaining only θ_1,2 and ϕ_1,2 and identifying ℋ as ℱ. 
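To make these orders of magnitude concrete, the following short numerical check (ours; the input values below are rounded reference values rather than numbers taken from this Letter, and the overall sign of 𝒜^0_LR depends on conventions) evaluates the expressions above for the longitudinal polarization and for ξ:

```python
# Rounded reference inputs (our assumptions): sin^2(theta_W^eff), masses in GeV.
sin2w, M_jpsi, m_Z = 0.2315, 3.0969, 91.1876

# |A_LR^0| = (3/8 - sin^2) / (2 sin^2 cos^2) * M_Jpsi^2 / m_Z^2
A_LR = (3.0 / 8.0 - sin2w) / (2.0 * sin2w * (1.0 - sin2w)) * (M_jpsi / m_Z) ** 2
print(f"|A_LR^0| ~ {A_LR:.1e}")             # ~5e-4, i.e. the 10^-4 suppression quoted above

# With a longitudinally polarized beam, xi = (A_LR^0 + P_e) / (1 + P_e A_LR^0) ~ P_e.
P_e = 0.80
xi = (A_LR + P_e) / (1.0 + P_e * A_LR)
print(f"xi ~ {xi:.4f} for P_e = {P_e}")     # the beam polarization dominates over Z exchange
```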
Following the definition of asymmetry parameters α and ϕ, originally introduced by Lee and Yang <cit.>, the hyperon CP violating observables, induced by these asymmetry parameters, are quantified as A^B_CP=(α_B+α̅_B)/(α_B-α̅_B) and Δϕ^B_CP=(ϕ_B+ϕ̅_B)/2 <cit.>. Two observables are complementary as they rely on the sine and cosine of the strong phase difference, respectively. In hyperon decays, the relative strong phases are small, leading to the latter exhibiting better sensitivity <cit.>. Moreover, in this Letter, the latter in Ξ decays can be determined due to the measurable polarization of Λ hyperon. To assess the statistical sensitivity of the measurement, 500 pseudo experiments of each decay are generated and fitted by using a probability density function based on the full angular distributions shown in Equation (<ref>). The estimated yields presented in Table <ref>, as well as the form factors and decay parameters obtained from the published articles <cit.>, are fixed for the generation. The EDM, along with other form factors, decay parameters and polarization, can be simultaneously determined from fitting. The study further investigates sensitivities for different statistics at BESIII and STCF experiments, taking into account branching fractions, detection efficiencies, and the impact of longitudinally polarized electron beam. Figure <ref> presents the estimated sensitivities for hyperon EDMs. With the statistics from BESIII experiment, the Λ EDM sensitivity, 10^-19 e cm (red full circle), demonstrates a remarkable three-order-of-magnitude enhancement over the only existing measurement at Fermilab with similar statistics <cit.>, while maintaining cutting-edge sensitivities, 10^-19 e cm, for Σ^+, Ξ^-, and Ξ^0 hyperons. The EDM sensitivities will be further improved by 1∼2 orders of magnitude (open square and full triangle) at STCF experiment. Figure <ref> illustrates the estimated sensitivities for CPV in hyperon decays. With an 80% longitudinally polarized electron beam at STCF experiment, the best sensitivities for CPV induced by the α_B parameter (red full triangle) can reach 5×10^-5 (6×10^-5) in J/ψ→ΛΛ (J/ψ→Σ^+Σ^-) decays, while for the ϕ_B parameter (blue full triangle), they can reach 2×10^-4 (3×10^-4) in J/ψ→Ξ^-Ξ^+ (J/ψ→Ξ^0Ξ^0) decays. The sensitivities for A^B_CP and Δϕ^B_CP observables have reached the prediction of the SM <cit.>. Figure <ref> shows the estimated sensitivities for F_A and sin^2θ^eff_W. Only the sensitivities for the module of F_A are reported due to a negligible dependence on the phase from toy study. The sensitivity for sin^2θ^eff_W associated to F_A can reach to 8×10^-3. Figure <ref> depicts the estimated sensitivities for J/ψ polarization and sin^2θ^eff_W. The sensitivity for sin^2θ^eff_W associated to P_L can reach to 2×10^-2 at STCF experiment. Additionally, by applying simultaneous constraint on F_A and P_L, the sensitivity for sin^2θ^eff_W can be further enhanced to 5×10^-3 in J/ψ→ΛΛ decays. Longitudinal polarization for electron beam can also be determined through angular analysis with the highest precision sensitivity reaching up to 6×10^-5, as depicted in Figure <ref> (red full triangle up), which can used for more precise weak mixing angle measurement from Bhabha scattering events <cit.>. In conclusion, to investigate largely unexpored territory of hyperon EDMs, we have established a comprehensive angular analysis, considering P violation in J/ψ production and CP and P violation in J/ψ decay. 
The EDM, along with CP violating observables in hyperon decays, effective weak mixing angle, and beam polarization can be simultaneously extracted from angular analysis. The statistical sensitivities for physical observables have been investigated for BESIII and STCF scenarios. Utilizing the expected statistics obtained from the BESIII experiment, the Λ EDM measurement can achieve an impressive upper limit of 10^-19 e cm, presenting a remarkable improvement of three orders of magnitude compared to the only existing measurement at Fermilab with similar statistics. The EDM measurement of Σ^+, Ξ^-, and Ξ^0 hyperons at the same level of 10^-19 e cm could represent a groundbreaking accomplishment as the first-ever achievement and the later two will be the first exploration in hyperons with two strange valence quarks. At the STCF experiment, with a longitudinally polarized electron beam, a search for hyperon EDMs could potentially reach levels of 10^-21∼10^-20 e cm. The EDM measurements for hyperons will be a significant milestone and serve as a stringent test for new physics, such as SUSY and left-right symmetrical model. At the same time, the verification of CPV in hyperon decays could be achieved at levels of 10^-5∼10^-4, which has already matched the predictions of the SM. The effective weak mixing angle parameter can be measured at a level of 10^-3 and can be further enhanced by utilizing the precisely determined beam polarization obtained from this angular analysis. This method can also be extended to ψ(2S) decays for investigating the pure strange quark hyperon Ω, taking into account additional form factors due to its spin-3/2 property. We would like to thank Prof. Fengkun Guo, Prof. Xiaogang He, Prof. Jianping Ma and Prof. Yangheng Zheng for very useful discussion. This work is supported by National Key R&D Program of China No. 2022YFA1602204; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11935018, 12221005 and 11975112; Fundamental Research Funds for the Central Universities.
http://arxiv.org/abs/2307.04505v1
20230710115649
Analysis of the possible satellite contamination in LAMOST-MRS spectra
[ "Mikhail Kovalev", "Olivier R. Hainaut", "Xuefei Chen", "Zhanwen Han" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.IM" ]
We present the detection of false-positive double-lined spectroscopic binary (SB2) candidates using medium-resolution survey (MRS) spectra from one time-domain field of LAMOST data release 10 (DR10). The secondary component in all these binaries has near-zero radial velocity and solar-like spectral lines. Most likely this is light from semi-transparent clouds illuminated by the full Moon. However, we also suspect that this contamination can be partially caused by sunlight reflected from the surfaces of low-orbit artificial satellites launched at the beginning of 2022. We found several possible contaminant candidates using archival orbital data. We propose measures to reduce the risk of such contamination in future observations and methods to find it in archived ones. binaries : spectroscopic – techniques : spectroscopic § INTRODUCTION Since the launch of Sputnik-1 in 1957, artificial satellites can be seen flying in the night sky. Such observations can be very useful for Earth-related science (i.e. determination of the geopotential), although for astrophysics, satellites can be an obstacle. This problem has become more serious with the start of the active populating of low Earth orbits, which now host many thousands of telecommunication satellites forming huge constellations. In the most pessimistic scenario, such intensive commercialisation of space could mean the end of all ground-based astronomy. In spectroscopic observations, the flyby of an artificial satellite will show up as a fake spectroscopic binary, where the contamination is visible as a solar-like spectral component. For low-orbit satellites, the line-of-sight velocity (RV) is near zero when the satellite rises close to culmination, but the transverse velocity is very high, so the contamination lasts much less than a second for typical values of the field of view. Thus for a typical bright astrophysical target, contamination is usually negligible and only relatively faint objects are affected <cit.>. <cit.> identified many double-lined spectroscopic binary (SB2) candidates in LAMOST (Large Sky Area Multi-Object fiber Spectroscopic Telescope) MRS <cit.>. However, some of them can be false positives, which can be identified by taking advantage of multiple observations in a time-domain sub-survey. Here we present results for one particular field, where these false-positive SB2s can be caused by satellite contamination. The paper is organised as follows: in Sections <ref> and <ref>, we describe the observations and methods. Section <ref> presents our results. In Section <ref> we discuss the results. In Section <ref> we summarise the paper and draw conclusions. § OBSERVATIONS LAMOST is a 4-meter quasi-meridian reflective Schmidt telescope with 4000 fibers installed on its 5° FoV focal plane. This configuration allows it to observe spectra of at most 4000 celestial objects simultaneously (<cit.>).
For the analysis in this paper, we downloaded all available time-domain DR10 spectra from <www.lamost.org/dr10/v0/> observed within the field “TD164021N701415T01". We use the spectra taken at a resolving power of R=λ/ Δλ∼ 7 500. Each spectrum is divided on two arms: blue from 4950 Å to 5350 Å and red from 6300 Å to 6800 Å. During the reduction, heliocentric radial velocity corrections in range of _h=-5,-2 were applied to all spectra. We convert the wavelength scale in the observed spectra from vacuum to air using <cit.>. Observations are carried out in MJD=59676.8-59692.8 days, spanning an interval of 16 days. We selected only spectra stacked for whole night[ Each epoch contains seven short 20 min individual exposures, which were stacked to increase ] and apply a cut on the signal-to-noise (>=20). In total we have 5625 spectra from 1323 targets. The number of epochs varies from 2 to 4 per target, as very noisy epochs were not selected for some targets. § METHODS We use the same spectroscopic models and method as <cit.> to analyse individual LAMOST-MRS spectra, see very brief description below. The normalised binary model spectrum is generated as a sum of the two Doppler-shifted normalised single-star spectral models f_λ,i[they are designed as a good representation of the LAMOST-MRS spectra], scaled according to the difference in luminosity, which is a function of the and stellar size. We assume both components to be spherical and use the following equation: f_λ, binary=f_λ,2 + k_λf_λ,1/1+k_λ,  k_λ= B_λ(_,1) R^2_1/B_λ(_,2) R^2_2 where k_λ is the luminosity ratio per wavelength unit, B_λ is the black-body radiation (Plank function), is the effective temperature and R is the stellar radius. Throughout the paper we always assume the primary star to be brighter one. In comparison with <cit.> we directly use the ratio of stellar radii q as a fitting parameter, instead of the mass ratio with difference of the surface gravity . Each spectrum is analysed with the single and binary spectral model, thus we can calculate the difference in reduced χ^2 between two solutions and the improvement factor (), computed using Equation <ref> similar to <cit.>. This improvement factor estimates the absolute value difference between two fits and weights it by the difference between the two solutions. f_ imp=∑[ (|f_λ, single-f_λ|-|f_λ, binary-f_λ|)/σ_λ] /∑[ |f_λ, single-f_λ, binary|/σ_λ] , where f_λ and σ_λ are the observed flux and corresponding uncertainty, f_λ, single and f_λ, binary are the best-fit single-star and binary model spectra, respectively, and the sum is over all wavelength pixels. § RESULTS We carefully checked the quality of the spectral fits through visual inspection of the plots. Several spectra were selected as SB2 candidates using criteria formulated in <cit.>, although this selection was not complete as these criteria prioritise purity. This study is focused on possible satellite contamination, so we introduce a new selection of the fitted parameters, like and improvement factor, see Table <ref>. Out of four epochs, one with MJD=59685.8 d has significantly more selected candidates, so we explored it more carefully. Thus we keep only stars that appear as a regular single star in all epoch except MJD=59685.8 d. In total we left with 37 SB2 candidates, with a secondary component at _2∼ 0. They are marked as open triangles on Figure <ref>. We show the most clear example J162843.74+680439.7 (G=14.55 mag) with very large ∼410 in Fig. <ref>. 
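Before the figure panels are described, the following sketch illustrates how the composite binary model and the improvement factor defined above can be evaluated for two normalised component spectra. It is our own toy illustration (Gaussian absorption lines, arbitrary temperatures and radii), not the actual fitting pipeline built on the spectral models of <cit.>:

```python
import numpy as np

def planck(wave_angstrom, teff):
    """Planck function B_lambda(T); the overall normalisation cancels in the ratio k."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    lam = wave_angstrom * 1e-10
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * k * teff)) - 1.0))

def binary_model(wave, f1, f2, teff1, teff2, r1, r2):
    """Normalised composite spectrum: (f2 + k*f1) / (1 + k), with k = B(T1) R1^2 / (B(T2) R2^2)."""
    k = planck(wave, teff1) * r1**2 / (planck(wave, teff2) * r2**2)
    return (f2 + k * f1) / (1.0 + k)

def improvement_factor(flux, sigma, single_fit, binary_fit):
    """Improvement factor: weighted gain of the binary fit over the single-star fit."""
    num = np.sum((np.abs(single_fit - flux) - np.abs(binary_fit - flux)) / sigma)
    den = np.sum(np.abs(single_fit - binary_fit) / sigma)
    return num / den

if __name__ == "__main__":
    wave = np.linspace(4950.0, 5350.0, 2000)                                  # blue arm, Angstrom
    f1 = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 5170.0) / 0.6) ** 2)              # toy primary line
    f2 = 1.0 - 0.3 * np.exp(-0.5 * ((wave - 5172.5) / 0.6) ** 2)              # shifted companion line
    obs = binary_model(wave, f1, f2, teff1=6200.0, teff2=5800.0, r1=1.2, r2=1.0)
    sigma = np.full_like(obs, 0.01)
    # Equals 1 here because the "binary fit" reproduces the toy data exactly.
    print(improvement_factor(obs, sigma, single_fit=f1, binary_fit=obs))
```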
In the top panel we show fits of the co-added spectrum by the single-star and binary model. The single-star model obviously failed to fit double-lined spectrum, while binary models fit the primary component (67 per cent) at _1=-418.72 and catch another additional spectrum component (33 per cent) at _2=-7.62. In the middle panel we show fitting results for a mock spectrum of J162843.74+680439.7 contaminated by solar spectrum of V=16 mag, where we applied Gaussian noise according to . As you can see both panels are very similar. In the bottom panel we show all seven short 20 min exposures spectra before coaddition. It is clear that contamination happened at UTC times t=19:59 and t=20:21 as these two exposures have an additional spectral component, which has a brightness comparable with the main target. When all exposures were co-added we got the double-lined spectrum with significantly smaller noise. In the other candidates contamination is not that clearly visible as they have smaller . The majority of the candidates have G∼14.5 and _ red<50 in the co-added spectrum, thus probably for brighter targets contamination was negligible and comparable to the noise level. § POSSIBLE SOURCE OF CONTAMINATION Highly-likely such contamination was caused by the clouds, illuminated by the full Moon, which significantly increased sky background in the spectra. Unfortunately, sky subtraction failed to completely remove it during the spectral reduction, see the last two individual 20-minute exposures in the bottom panel of Fig. <ref>. This can explain such solar-like spectral component very well as sky becomes brighter as sun is rising. At the moment of the end of observation it's height was around -12. This also supported by the fact that contamination is visible only in relatively faint targets. Nevertheless we decided to test other possible sources of contamination. We checked if this contamination can be due to a solar system object. We used Minor Planet Center checker[<https://minorplanetcenter.net/cgi-bin/mpcheck.cgi>] to check 1284083 known objects and found none of them brighter than 18 mag in our field. With slightly larger search radii we found comet C/2019 K7 (Smith) with coordinates α=16:20:45.9, δ=+67^∘ 56' 19", although it is unlikely to be our contaminant, because otherwise it will be visible in all exposures, as it moves very slowly. In order to investigate whether this contamination could have been caused by a satellite passing through the field of view, we verified that, at the time of the observations, low-Earth orbit (LEO) satellites were illuminated by the Sun. This was tested using the formalism described in <cit.> for generic LEOs as well as for Starlink and OneWeb satellites. To evaluate the number of fibres typically affected by a satellite trail, a million trails, randomly positioned, were shot through a realistic LAMOST field of view. For the considered field, 1324 fibres had object of interest (with suitable S/N>20) over the 4000 fibres of the instrument, so 1324 fibres were considered in this experiment. A trail is considered to affect a fibre if the impact distance is less than 3”, which accounts for the radius of the fibre (whose diametre is 3.3”) and the width of the trail, which is set to 2” accounting for the seeing and the marginally resolved satellite. For each trail, the number of fibres affected was counted. Figure <ref> illustrates this for 100 trails on the left panel, and displays a histogram of the number of fibres affected on the left panel. 
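The trail experiment described above can be scripted in a few lines. The Monte Carlo sketch below is not the code used for the paper: the fibre positions are drawn at random inside a 2.5-degree-radius field as a stand-in for the real LAMOST fibre map, so the resulting statistics will only roughly resemble the numbers quoted in the next paragraph; only the geometry (perpendicular fibre-to-trail distance below 3 arcsec counts as a hit) follows the description.

```python
import numpy as np

rng = np.random.default_rng(42)
FOV_R = 2.5                     # field-of-view radius in degrees (5 deg diameter)
HIT_R = 3.0 / 3600.0            # impact-distance threshold: 3 arcsec in degrees
N_FIB, N_TRAILS = 1324, 100_000

# Stand-in fibre positions: uniform inside the circular field (assumption).
r = FOV_R * np.sqrt(rng.random(N_FIB))
phi = 2 * np.pi * rng.random(N_FIB)
fib = np.column_stack([r * np.cos(phi), r * np.sin(phi)])

hits = np.zeros(N_TRAILS, dtype=int)
for i in range(N_TRAILS):
    theta = np.pi * rng.random()           # trail orientation
    offset = rng.uniform(-FOV_R, FOV_R)    # signed distance of the trail from the field centre
    n = np.array([np.cos(theta), np.sin(theta)])   # unit normal to the trail
    d = np.abs(fib @ n - offset)           # perpendicular fibre-to-trail distance
    hits[i] = np.count_nonzero(d < HIT_R)

print(f"mean fibres hit per trail: {hits.mean():.3f}")
print(f"fraction of trails hitting no fibre: {(hits == 0).mean():.2%}")
```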
This method is the same that was used to evaluate (Michevat priv.comm.) the impact on 4MOST, a similar spectrograph built at ESO <cit.>. About 64% of the trails hit no fibre and while 0.01% of the satellites hit 7 fibres, a trail will hit 0.44 fibres on average. As 37 fibres were contaminated, this suggests up to ∼80 satellites crossed the 5 field of view during the exposure. These numbers should be taken with a fairly large uncertainty, as the seeing and the width of the trail will cause the number of fibres affected to be larger, but the contamination for a larger impact distance will be smaller. To estimate the visual magnitude of the satellite causing the contamination, one must estimate the level of contamination of the spectra, and take into account the effect of motion of the satellite. With typical angular velocities of the order of 1 s^-1 at zenith, a LEO satellite spends only a few milliseconds t_ eff crossing the fibre during the total exposure time t_ exp = 1200 s. The apparent magnitude m of the object can be estimated from its effective magnitude m_ eff measured on the spectrum, m = m_ eff + 2.5 log_10t_ eff/t_ exp = m_ eff + 2.5 log_10r_ fibre/ω_ sat t_ exp , where r_ fibre = 3.3” is the angular diameter of a fibre on the sky. Using the method in <cit.>, the angular velocity of the satellite in the direction of observations was estimated for Starlink (0.66 s^-1) and OneWeb (0.30 s^-1) satellites. The effective magnitude can be estimated from the contamination. The S/N of the G ∼ 14.5 was up to 50 in the co-added spectrum, corresponding to ∼ 20 in the individual 1200s exposures. To be noticeable, the contamination must have S/N > 5 (which corresponds to G∼16), and to be detectable at all, S/N>2 (G∼ 17). Combining these pieces of information, Eq. <ref> gives visual magnitudes ∼1–2. Fainter satellites will not be detected. As of the time of the observations, about 4500 satellites were present on LEOs (roughly 2000 pre-existing, and 2002 Starlink[ Jonathan McDowell’s Starlink web page <https://planet4589.org/space/con/star/stats.html> ] and 426 OneWeb[ Jonathan McDowell’s OneWeb web page <https://planet4589.org/space/con/ow/stats.html> ] from recently launched mega-constellations). Using the method of <cit.>, this results in ∼ 15 satellite trails per exposure during long twilight, as illustrated in Fig. <ref>. This number is much too low to explain the observed contamination. Furthermore, the magnitudes of the satellites differs widely (some of them, such as HST or ISS can be as bright as V -5 to 2), but the bulk of the Starlink satellites are in the 5.6–7.2 range <cit.> and OneWeb in the 7–9 range <cit.>, ie well below the reach of the spectrograph. We also checked Satellite Track Predictor (STP)[<http://www.astro.amu.edu.pl/STP>] for time interval UTC=19:30, 20:30 and found that 12 bright satellites with V≤6 mag crossed our field. We show their tracks in Fig. <ref>. STP reports that errors can be up to 0.1-0.5 for sky-positions and σ_V=2 mag for brightness, so some of these satellites (like Starlink and Cosmos with reported V=4 mag) can be bright enough to cause contamination. In the week after their launch, the satellites appear as a train, or like a string of pearls while they slowly disperse in elongation along their very low orbit. During that phase, they appear much brighter than when on their operational orbit, because of the shorter distance to the observer, and because the configuration and attitude of the satellites are different than when in operations. 
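Coming back to the magnitude-dilution relation given above, the helper below evaluates the apparent magnitude implied by a given effective magnitude and angular velocity. It is a sketch with the numbers from the text hard-coded as defaults, not an official tool; the example values reproduce the ∼1–2 mag estimate quoted in this section.

```python
import math

def apparent_from_effective(m_eff, omega_deg_s, t_exp=1200.0, r_fibre_arcsec=3.3):
    """m = m_eff + 2.5 log10(t_eff / t_exp), with t_eff = r_fibre / omega_sat."""
    t_eff = (r_fibre_arcsec / 3600.0) / omega_deg_s   # seconds spent crossing the fibre
    return m_eff + 2.5 * math.log10(t_eff / t_exp)

# A contaminating spectrum detected at m_eff ~ 16 for the angular velocities quoted above.
for name, omega in [("Starlink, operational orbit", 0.66),
                    ("OneWeb", 0.30),
                    ("low parking orbit", 1.0)]:
    print(f"{name}: apparent magnitude ~ {apparent_from_effective(16.0, omega):.1f}")
```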
In the days of the earliest Starlink launches, they could be as bright as mag ∼ 0. Since then, the operator has modified the attitude of the satellites so that they are much dimmer, in the 1–3 range most of the time[Although very bright (up to V∼ 0 mag) and short (∼1 sec) flashes are possible. First author saw them several times.]. A batch of satellites launched with one rocket consist typically of 60 satellites. In order to test whether such a train of recently launched satellites could have crossed our field of view, Two Line Elements (TLEs), the orbital elements of the satellites, were retrieved for the date of the observations using CelesTrack [<https://celestrak.org/NORAD/archives/request.php>]. Using the skyfield[<https://rhodesmill.org/skyfield/>] package, the visibility of the satellite was verified, from LAMOST for the time of the observations. It appears that a series of Starlink satellites from the 2022 Feb. 21 launch^<ref> crossed the sky during the exposure. While their tracks, as computed by us, are in the general vicinity of our observation, they does not cross the field of view. However, the TLEs are notoriously not very accurate –especially at a phase when the operator frequently adjust the orbit, and our method to compute the satellite position is not verified. At that time, the satellites were at an altitude of 350km, with a magnitude in the 1–2 range. The apparent angular velocity of these satellites was ω∼ 1.0 s^-1, which leads to effective magnitudes m_ eff∼ 16–17, i.e. in the range of the contamination. Therefore, we suggest that the observations can be theoretically, "photobombed" by a train of Starlink satellites on their low, parking orbit, although contamination by clouds is more likely. In the future, the number of satellites in mega-constellations is likely to grow significantly. Assuming 65 000 satellites (as in <cit.>), this would result in a typical 1200s exposure being crossed by about 200 satellite trails, potentially resulting in ∼ 260 fibres contaminated per exposure taken during long twilight (3% of the fibres). However, the limiting magnitude of the LAMOST-MRS instrument for 1200s exposure is V∼ 15 (5σ). Converting the apparent magnitudes of the satellites (using the crude photometric model described in <cit.>) into effective magnitudes, these will be in the 18 to 23 range (depending on the satellite's orbit and altitude and azimuth), well below the limit of LAMOST-MRS, even accounting for a possible 1 mag error on the photometric model. As usual, it is important to note that once the sun dips far enough under the horizon, most of the satellites fall in the shadow of the Earth. This problem is therefore only critical during the first and last hours of the night. While the satellites on operational orbits will not be a major concern for LAMOST, the compact trains of very low satellites can affect the observations. The probability of such a train crossing a telescope field of view is low, but considering that constellations will need to be regularly replenished, new satellites will need to be continuously launched. Considering 100 000 satellites with a life-time of 5 years, this would result in about one launch per day (each with 60 satellites). If the satellites stay one month in low orbit, this would result in about 60 trains in orbit, at various stage of dispersion. It is therefore important that the satellite operators also keep the brightness of the satellites to the absolute minimum possible during their stay on transit orbit. 
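The TLE-based visibility check described earlier in this section can be reproduced with the skyfield package. The sketch below is schematic rather than the verification code used here: it downloads present-day Starlink elements from CelesTrak as a demonstration (for the actual analysis one would substitute the archival elements for the night of the observations), and the LAMOST (Xinglong) site coordinates are approximate.

```python
from skyfield.api import load, wgs84

ts = load.timescale()
eph = load("de421.bsp")        # solar-system ephemeris, needed for the sunlit test

# Current Starlink elements as a demonstration; replace with archival CelesTrak elements.
url = "https://celestrak.org/NORAD/elements/gp.php?GROUP=starlink&FORMAT=tle"
satellites = load.tle_file(url)

site = wgs84.latlon(40.396, 117.577, elevation_m=960)   # approximate Xinglong site
t = ts.utc(2022, 4, 16, 20, 10, 0)                      # an instant inside the exposure (UTC)

visible = []
for sat in satellites:
    alt, az, distance = (sat - site).at(t).altaz()
    if alt.degrees > 30 and sat.at(t).is_sunlit(eph):
        visible.append((sat.name, round(alt.degrees, 1), round(distance.km)))

print(f"{len(visible)} satellites above 30 deg altitude and sunlit at {t.utc_iso()}")
print(visible[:3])
```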
The changes of satellite attitude implemented by Starlink illustrate the improvements than can be made. § CONCLUSIONS We successfully detected false-positive SB2 candidates in the LAMOST-MRS spectra. The secondary component in all these binaries have near zero radial velocity and solar-like spectral lines. Highly likely this is light from the semi-transparent clouds illuminated by the full Moon. However we also suspect that partially this contamination can be a solar light reflected from the surface of low-orbital artificial satellites launched in the beginning of 2022. We found several possible contaminant candidates using archival orbital data from CelesTrack and STP web service. Unfortunately results presented in this paper cannot definitely confirm satellites as contaminant, as other sources like clouds and problem with sky subtraction will have similar effect on the spectral observations. To identify and remove such contamination we recommend analysis of all spectra taken during twilight, assuming a binary spectrum model, where one component has solar-like spectrum with radial velocity in the range =-10,+10. Also the short exposures should be carefully checked prior the co-addition to avoid the production of false double-lined spectra with contaminated exposures. During the scheduling of the observation one should consider possibility of the contamination by the bright "train" of newly launched satellites and avoid observations near the twilight if possible. Also we recommend to take additional image of the observed field, to reliably identify possible satellite tracks. § ACKNOWLEDGEMENTS MK is grateful to his parents, Yuri Kovalev and Yulia Kovaleva, for their full support in making this research possible. We thank Hans Bähr for his careful proof-reading of the manuscript. We thank Zhang Haotong and Luo A-Li for useful discussions. We thank Dr. Nikolay Emelyanov for providing the link to Minor Planet Center Checker. We thank Monika Kamińska for providing sky positions for satellites from STP. We are grateful to Dr. T.S. Kelso for development and maintaining of the CelesTrack. This work is supported by National Key R&D Program of China (Grant No. 2021YFA1600401/3), and by the Natural Science Foundation of China (Nos. 12090040/3, 12125303, 11733008). Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. The authors gratefully acknowledge the “PHOENIX Supercomputing Platform” jointly operated by the Binary Population Synthesis Group and the Stellar Astrophysics Group at Yunnan Observatories, Chinese Academy of Sciences. This research has made use of NASA’s Astrophysics Data System. It also made use of TOPCAT, an interactive graphical viewer and editor for tabular data <cit.>. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. LAMOST-MRS spectra are downloaded from <www.lamost.org>. 
§ REFERENCES
Bassa C. G., Hainaut O. R., Galadí-Enríquez D., 2022, A&A, 657, A75, doi:10.1051/0004-6361/202142101
Cui X.-Q., et al., 2012, Research in Astronomy and Astrophysics, 12, 1197, doi:10.1088/1674-4527/12/9/003
Czesla S., Schröter S., Schneider C. P., Huber K. F., Pfeifer F., Andreasen D. T., Zechmeister M., 2019, PyA: Python astronomy-related packages, ascl:1906.010
El-Badry K., et al., 2018, MNRAS, 476, 528, doi:10.1093/mnras/sty240
Kovalev M., Li Z., Zhang X., Li J., Chen X., Han Z., 2022a, MNRAS, 513, 4295, doi:10.1093/mnras/stac1177
Kovalev M., Chen X., Han Z., 2022b, MNRAS, 517, 356, doi:10.1093/mnras/stac2513
Liu C., et al., 2020, arXiv e-prints, arXiv:2005.07210
Mallama A., 2020, arXiv e-prints, arXiv:2012.05100
Mallama A., 2021a, arXiv e-prints, arXiv:2101.00374
Mallama A., 2021b, arXiv e-prints, arXiv:2111.09735
Taylor M. B., 2005, in Shopbell P., Britton M., Ebert R., eds, Astronomical Society of the Pacific Conference Series Vol. 347, Astronomical Data Analysis Software and Systems XIV, p. 29
Zhao G., Zhao Y.-H., Chu Y.-Q., Jing Y.-P., Deng L.-C., 2012, Research in Astronomy and Astrophysics, 12, 723, doi:10.1088/1674-4527/12/7/002
de Jong R. S., et al., 2019, The Messenger, 175, 3, doi:10.18727/0722-6691/5117
http://arxiv.org/abs/2307.06124v1
20230712122503
Enhancing Portuguese Sign Language Animation with Dynamic Timing and Mouthing
[ "Inês Lacerda", "Hugo Nicolau", "Luisa Coheur" ]
cs.CL
[ "cs.CL", "ACM-class: I.6, I.7, J.6" ]
Current signing avatars are often described as unnatural as they cannot accurately reproduce all the subtleties of synchronized body behaviors of a human signer. In this paper, we propose a new dynamic approach for transitions between signs, focusing on mouthing animations for Portuguese Sign Language. Although native signers preferred animations with dynamic transitions, we did not find significant differences in comprehension and perceived naturalness scores. On the other hand, we show that including mouthing behaviors improved comprehension and perceived naturalness for novice sign language learners. Results have implications in computational linguistics, human-computer interaction, and synthetic animation of signing avatars. § INTRODUCTION Spoken/written language and sign language are different: one is an audio-oral language while the other is a spatial-visual language. Vocabulary and grammatical rules are also quite different. These differences lead to a language barrier between Deaf[Deaf with a capital refers to people who identify with the deaf culture and have been deaf before they started to learn a language. They are pre-lingually deaf.] and hearing people. In this paper, we contribute to breaking down the language barrier between European Portuguese and Portuguese Sign Language (Língua Gestual Portuguesa, LGP). Sign language translators typically require two components: a language translator and a signing avatar. The translator converts written text (or speech) into a sequence of glosses (i.e., lexical units representing each gesture or sign); then, the avatar displays the synthesized glosses and additional linguistic components as signing animations. Signing avatars must account not only for multiple co-occurring linguistic processes but also for the naturalness of the movements. However, most avatars are described as unnatural, emotionless, and stiff <cit.> because they cannot accurately reproduce a human signer's subtleties. Therefore, one of the goals of an automatic sign language translator is to perform secondary movements based on human kinematics, so that animations are understandable and perceived as natural. Planning and scripting a signing avatar's facial and body movements is difficult. Minor variations in timing and speed parameters can lead to significant differences in the quality and understandability of sign animations <cit.>. In the case of sign languages, transitions between signs rely heavily on the phonology of the previous and following signs and determine the movement fluidity that allows sign streams to be intelligible. Therefore, transitions can impact the comprehension and naturalness of sign animations. In this paper, we introduce a new approach for interpolating signs, dynamic transitions, which change according to the previous and following signs. We aimed to answer the following research question: RQ1: Do dynamic transitions improve linguistic comprehension, naturalness, and preference of sign language animations? We conducted a user study with 11 native signers to understand the effect of dynamic transitions and of the transition speed. Overall, results show that dynamic transitions are equivalent to constant transitions in each of these dimensions (comprehension, naturalness, and preference). However, dynamic transitions enhance linguistic comprehension for signs that comprise one sole meaning (i.e., composite utterances and negatives) and require faster transitions.
In addition to transitions between signs, the avatar's secondary movements – those that are added to improve the naturalness of the avatar and are not part of the morphosyntactic structure of sign languages – can also greatly impact the signing quality and the way the thought or feeling is conveyed. Secondary movements include eye blink, mouthing, and facial and corporal movements. In this paper, we study the effect of adding mouthing – the production of visual morphemes or syllables that derive from spoken language (Figure <ref>) – to an existing avatar. Some believe that mouthing is incorporated into the morphosyntactic structures of sign languages, and some believe they are not <cit.>. Nevertheless, it is interesting that mouthing can be combined with manual signs to create complex signs with a composite meaning. For example, the manual sign “mouse” in BSL can be accompanied by the mouth action “baby”, forming the composite meaning “baby mouse” <cit.>. Notice that mouthing should only be reproduced when there are no phonological facial expressions that contain the mouth (e.g., cheeks puffed, tongue touching the chin, morphemes). Thus, our second research question was: RQ2: Does mouthing impact linguistic comprehension, naturalness, and preference of sign language animations? Results show that the avatar with mouthing achieves better results in terms of comprehension, naturalness, and preference. To the best of our knowledge, research in the field has not yet been published on whether mouthing can improve comprehension; therefore, this user study can give valuable input into this topic. The paper is organized as follows: in Section <ref> we present related work, in Section <ref> we present our models and in Section <ref> we evaluate them. Then, in Section <ref>, we present the main conclusions and future work. § STATE OF THE ART Research regarding hand signs and facial expressions in Sign Language animations is scarce, and in a synthetic context, the blending of the two is still an open challenge. Existing solutions rely on Sign Language Annotations <cit.>, Keyframe Animations <cit.>, and Motion Capture methods <cit.>. Each approach provides advantages and disadvantages but all require a balance between quality and cost. The more accurate and natural the animations are, the more costly they are to be generated. Some work regarding modeling timing and pausing parameters for manual components has been done for ASL <cit.>. However, to the best of our knowledge, no study proposes any kind of dynamic transition that rely on the phonology of the previous and following signs for sign language animations. Prior research has been dedicated to mouthing. Mouthing or lip-sync appeared first in the 1920s with the advent of sound cartoons <cit.>. Speech can be discretized as a sequence of sounds known as phonemes. Each phoneme is associated with a facial pose; however, not all vocal articulations are visible, and some are irrelevant in the visual domain. For instance, nasality and voicing <cit.>. Phonemes usually have many-to-one relationships with visemes (i.e., facial and oral poses of phonemes) because different phonemes can have the same facial pose. It is also important to note that visual speech cannot be directly generated by concatenating visemes because it will over-articulate the produced animation. Mouthing animations can be produced manually or automatically <cit.>. 
A manual approach requires animators to draw each viseme by hand and later use an interpolation scheme that concatenates the visemes according to the animated utterances. This method is time-consuming since it requires animators to manually select the viseme and its timing. These limitations led to the development of automatic techniques that synchronize audio with visemes. In automation approaches, visemes are collections of 3D data and artists can rely on muscle-based systems or blend shapes expressed as polygon meshes to model avatars' lip positions to depict visemes. Automated techniques depend on the source of dialog to generate animation. If it is a pre-recorded voice track, a speech recognition system can be used; otherwise, if it is a text containing a dialog, a text-to-speech system must be used. Both techniques require the same process: detecting the phonemes and then selecting the corresponding visemes that can be interpolated between keyframes in the avatar. No matter the technique, the best mapping between phonemes and visemes is still a debatable issue. Many studies have been developed to understand the best Phoneme-Viseme mappings. The work presented in <cit.> examined 120 mappings and analyzed their effect on visual lip reading using hidden Markov model (HMM) recognizers. Although phoneme-viseme mappings are not universal among languages or within a language, some phoneme-viseme mappings have overlapping sets. For instance, similarly to English, Amazon Polly's Phoneme-Viseme mapping[<https://docs.aws.amazon.com/polly/latest/dg/ph-table-portuguese.html>] for European Portuguese also contains the viseme [/p/ /b/ /m/] as a set, even though these are two different languages. Some projects have been exploring the possibility of incorporating mouthings in Sign Language animations. The ViSiCAST project <cit.> developed the Signing Gesture Markup Language, which is an XML-compliant representation of signs based on HamNoSys[<http://www.sign-lang.uni-hamburg.de/dgs-korpus/index.php/hamnosys-97.html>]. In this project, phonemes are described based on the International Phonetic Alphabet (IPA) transcription and then visemes are mapped using the Speech Assessment Methods Phonetic Alphabet (SAMPA) encoding conventions <cit.>. Some work in the area of visual speech animation has been done for ASL and Swiss German Sign Language <cit.>. The work described in <cit.> is the first automatic visual speech system for EP based on viseme concatenations. This project used two phoneme-viseme mappings: one mapping with 14 different viseme classes and another mapping with 10 different viseme classes. Both Phoneme-Viseme mappings resulted in slightly different vowel classifications, but the number of vocalic viseme classes remained unchanged. Each viseme class was then created in the avatar by an experienced digital artist. After creating the mappings and the visemes, the system was divided into two main components: a speech processing component and a 3D animation engine. The speech process component processed the data (e.g., text, audio, or both) and obtained the phonetic transcriptions using an EP phonetic lexicon developed by Microsoft together with Microsoft Speech API (SAPI)[<https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ms723627(v=vs.85)>] as an automatic speech recognition (ASR) model. The SAPI system used recognition events to detect the different utterances from the audio and stored a list of words and their corresponding IPA formats. 
The SAPI does not provide the phonemes' duration and timing, therefore, the EP phonemes' duration was gathered from a database of 100 hours of Portuguese speech provided by Microsoft. The 3D animation engine encapsulated the data obtained from the speech process component and translated it into the 3D animation. The cartoon character relied on a bone-based rig, and each viseme was interpolated using the timing obtained by the speech process component and the animation curves defined by the animator. Finally, we highlight the work developed in <cit.>, where the authors studied the mouth actions from a cross-linguistic perspective for three European Sign Languages: Sign Language of the Netherlands (NGT), British Sign Language (BSL), and Swedish Sign Language (SSL). Considering all the possible mouth action, the authors concluded that mouthing is the mouth action with the highest values, almost in all three languages (57% in SSL, 51% in BSL, and 39% in NGT). In this study, the authors also confirmed the hypothesis that mouthings spread analogously to “native” mouth gestures; thus, mouthings have indeed a grammatical function in sign languages. Based on all results gathered from the study, mouthing occurs for all three sign languages, and without it, a signing avatar would look unnatural and could omit important information, thus, resulting in incomprehensible utterances. Therefore, we can conclude that an avatar capable of producing mouthing is an essential part of any automatic written/spoken to sign translation system. § SYNTHESIS OF SIGN LANGUAGE ANIMATIONS We used an existing text-to-sign language translator <cit.> to evaluate our animations (overall architecture in Figure <ref>). This system is divided into two main modules. The first module performs a translation process, consisting of the translation of text from Portuguese into LGP, in which the LGP sentence is represented by a sequence of glosses and additional morphosyntactic information. The second module consists of an avatar that animates the LGP translated message received from the first module, by using a database with synthesized signs (i.e., animations). In the following sections, we describe the dynamic transitions and how we have implemented the mouthing process. §.§ Dynamic transitions Considering that transitions between signs rely heavily on the phonology of the previous and following signs and determine the movement fluidity that allows sign streams to be intelligible, we propose dynamic transitions, which interpolate signs through transitions that change according to the previous/following signs. (Equation <ref>). RightHandPosDiff = (CurrentSign[firstFrame] - perviousSign[lastFrame]).sqrMagnitude LeftHandPosDiff = (CurrentSign[firstFrame] - perviousSign[lastFrame]).sqrMagnitude While we iterate over each gloss in run-time, the differences between hand positions in the last keyframe of the previous sign and the first keyframe of the following sign are calculated and then the squared magnitude of these vectors is computed[The hand positions are obtained from the bones of the skeleton. While creating the database that contains the synthesized sign animations, we also created a JSON file that contains, for each sign and in each keyframe, the facial expressions used, and the hand positions based on the bones of the skeleton. In runtime, we use this information to obtain the hand positions to generate the dynamic transitions.]. 
Calculating the squared magnitude of a vector is much faster than using the magnitude property since it does not require a slow square root operation that makes the magnitude property take longer to execute[<https://docs.unity3d.com/ScriptReference/Vector3-sqrMagnitude.html>]. These squared magnitude values are then converted to percentages by defining a scale. To decide this scale, we checked all signs created to find two signs that have the closest hand position differences (e.g., signs “EU” and “TER”) and two signs that have the furthest hand position differences (e.g., signs “ELE” and “TER”). Based on our findings, we defined two scales: one that includes both hands (if the left hand has movement), and another that only considers the right hand (if the left hand has no movement). Using these scales, the squared magnitude values are converted to percentages that range between 0% and 100%. (Equation <ref>). if (BothHands) percentage = (RightHandPosDiff + LeftHandPosDiff) - 0.01 (CurrentSign[firstFrame] - perviousSign[lastFrame]).sqrMagnitude LeftHandPosDiff = (CurrentSign[firstFrame] - perviousSign[lastFrame]).sqrMagnitude Finally, to find the duration value used in the transition between signs, we use the percentage calculated to linearly interpolate between two duration values. These two duration values correspond to the lowest and highest values that the duration of transitions can take. We defined these values by analyzing the lowest and highest transition duration in multiple videos of an LGP corpus[<https://portallgp.ics.lisboa.ucp.pt/corpus_lgp/>]. Furthermore, two empirical studies developed by Sedeeq <cit.> found that ASL signers prefer slower transitions than the timing of human signers and that they prefer animations with an average transition time of 0.5 seconds. Based on the analysis of our LGP corpus, we decided that the duration of transitions would range between 0.3 seconds and 1.1 seconds because this range would include 0.5 seconds as the average transition time and these are slightly slower than the human signing transitions in our LGP corpus. Using the calculated duration values in the process previously described, we created an interpolation between the current sign and the next sign using dynamic transitions by defining a duration value and an offset value[The offset was introduced as a workaround due to the limitations of the Unity animation engine because the animation blending of Unity requires overlap between the animations. Another way to do it would be by using the UnityEditor package which provides much more animation resources to manipulate the avatar’s animation in runtime. However, these scripts cannot be included in the WebGL build which we needed because we want to have the translator deployed in a website.]. The first keyframe of every sign in the database starts at 1 second, which is what allows transitions between signs to be executed without cutting the signs shorter because without it the transition would overlap the beginning of each sign. Using the offset value, we can adjust the timing until the first keyframe matches the transition duration time; therefore, the offset value is 1 second minus the transition duration value. Transitions must be seen as a continuous stream of motion without being too paused because co-articulation, similarly to oral languages, also constitutes an important part of sign languages. 
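Before the offset refinement described next, the duration logic above can be summarised compactly. The sketch below is a Python rendition of the Unity/C#-side routine, using the 0.3–1.1 s bounds taken from the corpus analysis; the normalisation scales (the squared-magnitude differences mapped to 100%) and the capping at 100% are placeholder assumptions, since the exact scales were tuned empirically on the sign database.

```python
import numpy as np

MIN_TRANSITION = 0.3   # seconds, fastest transition adopted from the LGP corpus
MAX_TRANSITION = 1.1   # seconds, slowest transition adopted from the LGP corpus
SCALE_BOTH = 2.0       # placeholder: squared-magnitude difference mapped to 100% (both hands)
SCALE_RIGHT = 1.0      # placeholder: same, right hand only

def transition_duration(prev_last, curr_first, left_hand_moves):
    """Duration of the transition between two consecutive signs.

    prev_last / curr_first: dicts with 'right' and 'left' hand bone positions
    (3-vectors) at the last keyframe of the previous sign and the first
    keyframe of the current sign, as stored in the sign database.
    """
    right_diff = np.sum((np.asarray(curr_first["right"]) - np.asarray(prev_last["right"])) ** 2)
    if left_hand_moves:
        left_diff = np.sum((np.asarray(curr_first["left"]) - np.asarray(prev_last["left"])) ** 2)
        percentage = min((right_diff + left_diff) / SCALE_BOTH, 1.0)
    else:
        percentage = min(right_diff / SCALE_RIGHT, 1.0)
    # Linear interpolation between the two duration bounds.
    return MIN_TRANSITION + percentage * (MAX_TRANSITION - MIN_TRANSITION)
```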
To create transitions that are fluid and not too paused between signs, we decided to define the offset value as 1.2 seconds minus the transition value rather than 1 second. Thus, signs will be slightly overlapped and transitions more fluid. Another aspect taken into consideration was the phonological assimilation processes of composite utterances. Composite utterances are utterances that have meanings derived from the composition of multiple signs (e.g., “VERMELHO” + “MELÃO” means “MELÂNCIA” (“RED” + “MELON” means “WATERMELON”). Since multiple signs can be combined for one sole meaning, the transitions between these must be smaller than transitions between signs that have separate meanings. This is another reason why dynamic transitions are so important. These can have an impact on the perception of composite utterances if the phonological assimilation processes are not taken into consideration. Based on empirical experiments, we defined 0.2 seconds as the transition duration in-between all signs that comprise a composite utterance. Using the indices that define composite utterances obtained from the translation process, we transition between signs that comprise composite utterances with a transition value of 0.2 seconds, making the transitions for composite utterances faster than transitions for other signs. §.§ Mouthing Mouthing is an essential part of any automatic written-to-sign translation system and without it, a signing avatar would look unnatural and could omit important information. The existing translator was extended to create mouthing animations. Notice that mouthing should be done with words in Portuguese and not their glosses. For instance, verbs are not conjugated while signing, but these should be conjugated while mouthing. Therefore, we extended the system to gather all words in Portuguese and, afterwards, combine them into a sentence so that we consider the assimilation between words when executing the phonetic transcription. (Figure <ref>) The phonetic transcription (Step 1 in Figure <ref>) is done by employing the phonemizer tool[<https://github.com/bootphon/phonemizer>], where the speak backend is used to produce phoneme sequences described based on the International Phonetic Alphabet transcription. After this, normalization (Step 2 in Figure <ref>) is done by encoding non-ASCII to ASCII, words are separated into their corresponding syllables (Step 3 in Figure <ref>) using syllabification rules and, then, each phoneme is mapped into one viseme (Step 4 in Figure <ref>) using the phoneme-viseme mapping we created. While mapping visemes, we need to be careful not to over-articulate as it would generate unnatural mouthing animations. We prevented the over-articulation problem by removing visemes that are irrelevant in the visual domain. For instance, we remove visemes that have equal consecutive visemes, and we remove 'C' viseme consonants (i.e., in a visual domain, represent a slight open mouth) that are at the end of a syllable (e.g., 'r' is removed from 'per' syllable in Figure <ref>). The avatar contains 7 visemes – A, B, C, E, F, O, and U – to animate the 33 phonemes of the Portuguese language. To create visemes as close as possible to human visemes, animations for each viseme were created by adjusting the weights of blend shapes. In the translation process, words are translated into phonemes, separated into syllables, and then mapped into visemes. 
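The text-to-viseme steps of the translation process can be prototyped in Python with the same phonemizer tool mentioned above (its espeak backend must be installed). The sketch below is illustrative only: the phoneme-to-viseme dictionary contains a handful of entries in the spirit of the mapping described in the text (the real mapping covers all 33 European Portuguese phonemes and the seven avatar visemes), the syllabification step used for timing is omitted, and multi-character IPA symbols are not handled.

```python
from phonemizer import phonemize

# Toy phoneme-to-viseme mapping; entries here are illustrative, not the full table.
PHONEME_TO_VISEME = {
    "p": "B", "b": "B", "m": "B",                 # bilabials share one viseme
    "f": "F", "v": "F",
    "a": "A", "ɐ": "A",
    "e": "E", "ɛ": "E", "i": "E",
    "o": "O", "ɔ": "O",
    "u": "U",
    "t": "C", "d": "C", "s": "C", "z": "C", "l": "C", "ɾ": "C",  # slight open mouth
}

def text_to_visemes(text):
    # Step 1: phonetic transcription of the whole Portuguese sentence (espeak backend),
    # so that assimilation between words is taken into account.
    ipa = phonemize(text, language="pt", backend="espeak", strip=True)
    visemes = []
    for ph in ipa.replace(" ", ""):
        v = PHONEME_TO_VISEME.get(ph)
        # Map to visemes and drop consecutive duplicates to avoid over-articulation.
        if v and (not visemes or visemes[-1] != v):
            visemes.append(v)
    return visemes

print(text_to_visemes("o rato persegue o gato"))
```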
In the animation process, when the manual signs are being animated, mouthing is animated by using an interpolation scheme that concatenates the visemes according to the animated signs. The duration value for the mouthing is defined based on the duration of the sign it is applied to and based on the number of syllables for that sign. The reason behind this is that we do not want mouthing to either overlap the duration of a sign or be too slow if the duration of a sign is too large. The synchronization between mouthing and signs is extremely important because studies <cit.> have reported that a mismatch between the duration of signs and their corresponding mouthings can provoke a disturbing oscillation of the user's visual focus from hands to face. Finally, we would like to highlight that the mouth movement is slightly anticipated with respect to the hands. § EVALUATION We have designed and conducted two experimental user studies[Both studies were approved by the Ethics Committee of our university, and, before participating in the studies, each participant was handed an informed consent that they had to sign to participate.]: we analyze the impact of both dynamic transitions and mouthing on: a) linguistic comprehension, b) naturalness, and c) preference of LGP animations. In the case of dynamic transitions, speed was also evaluated. All the signing animations were generated using the previously described sign language translator. §.§ Evaluating Dynamic Transitions We recruited 11 participants fluent in LGP. Participants had to fill in a questionnaire. In the first 10 sections of the questionnaire, participants had to visualize a video, write the understood content, and describe whether the sentence contained an error. As we wanted to evaluate the impact transitions could have on the phonology of signs, particularly on the phonological assimilation of composite sentences, the created sentences contained one or more composite sentences. Furthermore, for each sentence, they also had to evaluate the transitions' speed on a 1-5 Likert scale with one as too slow and five as too fast, and evaluate the avatar's naturalness on a 1-5 Likert scale with one as robotic and five as natural. We presented both conditions - dynamic transitions and constant conditions - to all participants in a counterbalanced order. The order of the first ten sections was randomized. In the next three sections of the questionnaire, participants had to select the preferred video between two side-by-side videos (one with dynamic transitions and one with constant transitions of 0.5 seconds). To mitigate experimental bias, the video position was randomized. We also asked participants “How the naturalness of the avatar could be improved”, whether they think “transitions between signs affect naturalness” and whether they think “transitions between signs affect comprehension”. §.§.§ Comprehension For each sentence in the questionnaire, we measured the percentage of content understood by calculating the number of glosses correctly described with 100% as all glosses correctly understood by a participant. This process had to be done manually as synonyms of signs also counted as correct. Overall, the average comprehension scores for all participants with both conditions was 81.56% (SD = 23.29). 
As shown in Figure <ref>, 7 participants had higher comprehension results in sentences with dynamic transitions, 3 participants had higher comprehension results with constant transitions and 1 participant had equal comprehension results in both transitions. According to a Shapiro-Wilk test, we retained the null hypothesis of population normality (p = 0.901, p = 0.722); therefore, we conducted a Paired samples T-test to compare differences in comprehension scores between our conditions. Based on the results, there was no significant difference (t(10) = -1.379, p = 0.198) in the scores for dynamic transitions (M = 82.97, SD = 9.43) and constant transitions (M = 80.15, SD = 11.55). In almost all cases, participants would either understand a sign or not, independently of the transition approach, which could be explained by the difference between the transition values of both approaches is not significant. However, there were 4 cases in the two-paired sentences (i.e., eight sentences) where the same sign was only perceived correctly with a dynamic approach. Moreover, there were no cases where a sign was only perceived correctly with a constant approach. Furthermore, seven participants believed transitions between signs impact comprehension, whereas only four believed they do not. §.§.§ Naturalness For each sentence in the questionnaires, we measured the percentage of naturalness by using the scores submitted on the Likert scale (i.e., one as robotic and five as natural), with 5 being 100%. Overall, the average naturalness scores for all participants with both conditions was 50.73% (SD = 22.78) and the average overall naturalness[The first percentage is the average naturalness scores for all sections in the questionnaire (each video where they had to write the sentence they comprehended). The second percentage is the overall naturalness asked at the end of the questionnaire.] given at the end of the questionnaire by all participants was 50.91% (SD = 25.87). The scores for naturalness were significantly lower than those for the other two measures, which is unsurprising because naturalness is the most demanding criterion of all. As shown in Figure <ref>, there were large discrepancies between naturalness scores throughout our participants with 20% as the lowest average score and 100% as the highest score. Furthermore, 3 participants had higher naturalness results in sentences with dynamic transitions, 2 participants had higher naturalness results with constant transitions, and 6 participants had equal naturalness results in both transitions. According to a Shapiro-Wilk test, we retained the null hypothesis of population normality (p = 0.548, p = 0.215); therefore, we conducted a Paired samples T-test to compare differences in naturalness scores between our conditions. Based on the results, there was no significant difference (t(10) = -0.820, p = 0.432) in the scores for dynamic transitions (M = 51.27, SD = 22.61) and constant transitions (M = 50.18, SD = 22.51). However, seven participants believed transitions between signs impact naturalness, whereas only four believed they do not. When asked “How the naturalness could be improved", One participant said: “Do not evaluate and correct naturalness only by the execution of signs, but also see the sentence as a whole and add small pauses or accelerations between signs, depending on the sentence and its meaning. Since LGP is a visual language, everything you see counts". 
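For reference, the normality check and the paired comparison used in this and the following subsections follow a standard recipe; a scipy sketch is shown below with made-up per-participant score arrays in place of the study data (the narrative of the participants' suggestions continues after the block).

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

# Hypothetical per-participant comprehension scores (%), one pair per participant.
dynamic  = np.array([90, 75, 85, 80, 95, 70, 88, 82, 78, 91, 79])
constant = np.array([85, 72, 86, 78, 90, 68, 84, 80, 75, 89, 74])

# Shapiro-Wilk normality test per condition; if normality is retained, use the
# paired t-test, otherwise fall back to the Wilcoxon signed-rank test.
normal = True
for name, sample in [("dynamic", dynamic), ("constant", constant)]:
    w, p = shapiro(sample)
    normal &= p > 0.05
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

if normal:
    t, p = ttest_rel(dynamic, constant)
    print(f"paired t-test: t({len(dynamic) - 1}) = {t:.3f}, p = {p:.3f}")
else:
    z, p = wilcoxon(dynamic, constant)
    print(f"Wilcoxon signed-rank: W = {z:.3f}, p = {p:.3f}")
```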
The answers of participants regarding suggestions to improve naturalness were unanimous and revolved on the following aspects: [(1)] * Add more facial expression throughout the sentence and not only when this is applied to signs individually * There is a lack of corporal movement, * Add small pauses in appropriate places (this step is linked to prosody and we were suggested to incorporate topicalization), * improve grammar and signs, * add more fluidity in the movements, * one participant commented that “the robotic appearance can be caused by the disproportional body of the avatar". The first aspect could be improved by adding emotions to the avatar. The second aspect could be improved by incorporating the secondary facial and body movements we described in Section <ref>, but unfortunately, we did not have time to conduct a fourth user study. Based on the comments made from participants, we conducted a Spearman's rank to assess the relationship between facial expressions and naturalness, and between comprehension and naturalness. There is a statistically significant bivariate association between quality of facial expressions and naturalness (r_s = 0.782, p = 0.004) with a strong magnitude and positive correlation at the 0.01 level. There is also a statistically significant bivariate association between comprehension and naturalness (r_s = 0.621, p = 0.042) with a strong magnitude and positive correlation at the 0.05 level. §.§.§ Preference We conducted a Chi-Square test to analyze which transition approach was preferred. We found a statistically significant relation between participants and the transition approach (X^2(1, N = 33) = 6.818, p = .009), as participants preferred dynamic transitions (N = 24) rather than constant transitions (N = 9). §.§.§ Transitions Speed For each sentence in the questionnaires, we measured the percentage of optimal transition speed by using the scores submitted in the Likert scale (i.e., one as too slow and five as too fast) with 3 being the optimal speed with 100% and decreasing the percentage value according to the closeness to the limits of the scale with 2 and 4 as 66.67% and 1 and 5 as 33.33%. Overall, the average optimal transition speed scores for all participants with both conditions was 83.64% (SD = 17.36), and the average overall quality of transitions given at the end of the questionnaire by all participants was 81.82% (SD = 17.41). As shown in Figure <ref>, three participants had higher optimal transition speed results in sentences with dynamic transitions, three participants had higher optimal transition speed results with constant transitions, and five participants had equal optimal transition speed results in both conditions. According to a Shapiro-Wilk test, we retained the null hypothesis of population normality (p = 0.283, p = 0.064); therefore, we conducted a Paired samples T-test to compare differences in optimal transition speed scores between our conditions. Based on the results, there was no significant difference (t(10) = -0.319, p = 0.756) in the scores for dynamic transitions (M = 83.64, SD = 11.68) and constant transitions (M = 83.032, SD = 13.45). However, three participants commented on the importance of faster transitions in-between signs that comprise one sole meaning and noted that constant transitions were too slow for composite utterances, and surprisingly, in negatives. 
The latter is one aspect we did not consider, but coincidentally our dynamic approach produced faster transitions between the negated verb and the “NÃO” (no) sign because the difference between hand locations of these signs is quite small. This difference allowed participants to note that constant transitions were too slow for transitions between the negated verb and the “NÃO” sign while our dynamic approach was optimal. We conclude that dynamic transitions might positively impact the optimal speed and that signs that comprise one sole meaning, as composite utterances and negation of verbs, should have faster transitions than other signs. §.§.§ Discussion Based on the previously reported findings, we can conclude that the null hypothesis could not be rejected in evaluating comprehension, transitions' speed, and naturalness. Nevertheless, we found particular cases where the same signs with the dynamic approach were perceived correctly and with the constant approach perceived incorrectly, but the opposite was not found. Therefore, dynamic transitions could enhance linguistic comprehension, particularly for signs that comprise one sole meaning (i.e., composite utterances and negatives) and require faster transitions. The dynamic transitions approach was also the most preferred approach by our participants, showing the positive impact they can have on animations. Regarding naturalness, neither approach had a significant impact, and this criterion is still the most demanding of all. It is interesting to note that participants tend to relate naturalness to the comprehension of animations. Furthermore, it is also interesting to note that naturalness is not only linked to comprehension but also to syntax, because sentences that were completely understood but were not correct in terms of grammar, also scored lower in naturalness. §.§ Mouthing Evaluation We recruited 20 participants that are learning LGP. Recruiting beginners for this user study was essential because we wanted people that had sufficient knowledge to understand some signs but not all, so that we could evaluate whether mouthing could indeed have an impact on comprehension. Again, we recurred to a questionnaire with thirteen sentences. For this user study, we removed all phonological facial expressions from signs, so that all signs could execute mouthing. The level of complexity and difficulty in this user study was lower than the previous one, but not too easy so that we could see the impact of mouthing. This experiment was similar to the previous one (questions should evaluate naturalness, comprehension and preference). We also evaluated general quality, signs quality, and facial expressions quality in the same Likert scales. Additionally, we also asked participants whether they think “mouthing affects naturalness", whether they think “mouthing affects comprehension", and “If yes, in which situations and why?". §.§.§ Comprehension We evaluated comprehension using a similar procedure to the previous study. Overall, the average comprehension scores for all participants with both conditions was 70.94% (SD = 37.88) which we found surprisingly high considering that participants were beginners and sentences had a level of complexity and difficulty higher than beginner level with some sentences composed by interrogatives, one composite utterance (i.e., sign “IRMÃ") and dactylology words comprised of numbers with two digits and names with seven letters. 
As shown in Figure <ref>, there were large discrepancies between comprehension scores among our participants with 33.33% as the lowest average score and 100% as the highest score. Furthermore, 10 participants had higher comprehension results in sentences with mouthing, three participants had higher comprehension results without mouthing, and seven participants had equal comprehension results in both. According to a Shapiro-Wilk test, we rejected the null hypothesis of population normality (p = 0.012, p = 0.050); therefore, we conducted a Wilcoxon signed-rank test to compare differences in comprehension scores between our conditions. Based on these results, the comprehension scores for sentences with mouthing were statistically significantly higher than for sentences without mouthing (Z = -2.029, p = 0.043). Furthermore, 16 participants believe mouthing does indeed have an impact on comprehension, whereas only 4 participants believe it does not. Additionally, many comments were made by participants throughout the questionnaires noting that mouthing makes it easier to understand the sentences. §.§.§ Naturalness Naturalness was also evaluated as in the previous experiment. Overall, the average naturalness scores for all participants with both conditions was 78.29% (SD = 16.91) and the average overall naturalness given at the end of the questionnaire by all participants was 78.95% (SD = 15.60). The scores for naturalness were significantly higher than scores given in the previous user study, which can be explained by the fact that participants in this study are still beginners and are still not sensible about all subtleties of sign languages; therefore, they did not notice aspects that might still be missing in our avatar. As shown in Figure <ref>, 11 participants had higher naturalness results in sentences with mouthing, five participants had higher naturalness results without mouthing, and four participants had equal naturalness results in both. According to a Shapiro-Wilk test, we retained the null hypothesis of population normality (p = 0.160, p = 0.793); therefore, we conducted a Paired samples T-test to compare differences in naturalness scores between our conditions. Based on the results, the naturalness scores for sentences with mouthing (M = 80.40, SD = 15.24) were statistically significantly higher (t(19) = -2.094, p = 0.050) than for sentences without mouthing (M = 76.10, SD = 13.11). Furthermore, 18 participants believe mouthing has an impact on naturalness, whereas only 2 participants believe it does not. §.§.§ Preference We conducted a Chi-Square test to analyze which animations were preferred on the three trials each participant had; therefore, there were 60 trials overall. There was a statistically significant relation between participants and the mouthing approach (X^2(1, N = 60) = 15, p < 0.001), as participants preferred more animations with mouthing (N = 45) than animations without (N = 15). One participant said that “mouthing can distract the participant from the signs” as being the reason for not choosing animations with mouthing. §.§.§ Discussion Based on the previously reported findings, we conclude that sentences that incorporated mouthing had higher comprehension and naturalness scores than sentences without mouthing. Therefore, our study suggests that mouthing can indeed enhance linguistic comprehension and naturalness, and participants prefer LGP animations with mouthing. 
Our user study demonstrates not only the impact mouthing has on signing animations but also that the quality of our mouthing approach was good enough to improve comprehension. § CONCLUSIONS AND FUTURE WORK We used an existing text-to-sign language translator to demonstrate how we improved LGP animations. We introduced a new way of performing transitions between signs – the dynamic transitions – and added to the translator's avatar the possibility of performing mouthing. The positive results indicate that the generated animations show great potential in the field of synthetic animation of signing avatars. Despite the positive results, some aspects must be improved and extended in future research to improve perceived naturalness. Suggestions include adding more facial expressions, corporal movements, appropriate pauses and accelerations between signs, and creating more fluid movements. plainnat
http://arxiv.org/abs/2307.07352v1
20230714140100
Studying quantum entanglement and quantum discord in the cavity QED models
[ "Miao Hui-hui", "Li Wang-shun" ]
quant-ph
[ "quant-ph" ]
Email address: [email protected] Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Vorobyovy Gory, Moscow, Russia Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Vorobyovy Gory, Moscow, Russia Based on the two-qubit Jaynes–Cummings model — a common cavity quantum electrodynamics model — and extending to a modification of the three-qubit Tavis–Cummings model, we investigate the quantum correlation between light and matter in bipartite quantum systems. By solving the quantum master equation, we are able to derive the dissipative dynamics in open systems. To gauge the degree of quantum entanglement in the two-qubit system, the von Neumann entropy and the concurrence are introduced. Quantum discord, which can properly measure the quantum correlation in both closed and open systems, is also introduced. In addition, consideration is given to the impact of the initial entanglement and of the dissipation strength on quantum discord. Finally, we discuss two different cases of nuclear motion: quantum and classical. § INTRODUCTION The Einstein–Podolsky–Rosen (EPR) paradox put forward by Einstein et al. <cit.> served as the inspiration for the idea of quantum entanglement <cit.>. The study of fundamental problems in quantum information theory has always focused heavily on quantum entanglement. Quantum information processing (QIP), which is related to entanglement, provides a larger variety of information manipulation techniques than the conventional approach. Lacking a classical counterpart, quantum entanglement is unique and has been seen as a key resource for QIP, including quantum computation <cit.>, quantum cryptography <cit.>, and other applications <cit.>. However, according to the current theory, quantum entanglement is merely a particular type of quantum correlation, which is a more fundamental idea than quantum entanglement <cit.>. Correlation that contains both classical and quantum components may be more extensive and basic than entanglement. Quantum discord is a novel and interesting measure of quantum correlation, which was first proposed by Ollivier and Zurek <cit.>, explained in detail by Vedral et al. <cit.>, and recently introduced into many fields of research <cit.>. Similarly to quantum entanglement, quantum correlation attenuates under the effect of external environmental noise; this is the process of quantum decoherence. The decoherence process is unavoidable since an actual quantum system is never an entirely closed ideal system. How to deal with decoherence, which is the major obstacle to the superiority of QIP, is the main issue. The cavity quantum electrodynamics (QED) model, which presents a distinct physical paradigm for examining the interaction between light and matter, is central to this paper. In this paradigm, cavity fields are coupled to impurity two- or multi-level systems, which are typically referred to as atoms. The most studied cavity QED model is the Jaynes–Cummings model (JCM) <cit.> and its generalizations and modifications <cit.>, which are typically easier to realize in experiment. Recently, a lot of research has been carried out in the area of cavity QED models <cit.>. Previous studies have rarely involved quantum correlation in cavity QED systems.
Sometimes quantum correlation can be used for certain quantum calculations. Thus, the study of quantum correlation dynamics in the cavity QED systems is very necessary. We investigate the quantum entanglement via von Neumann entropy and concurrence. Consideration is also given to the quantum discord in bipartite systems, based on the two- and three-qubit cavity QED models (JCM and Tavis–Cummings model (TCM), respectively). We can obtain the dissipative dynamics in open systems by solving the quantum master equation (QME). We study the quantum entanglement and compare it with quantum discord. We also investigate the impact of initial entanglement and dissipation strength on quantum discord. Finally we discussed the quantum and classical motion of nuclei. This paper is organized as follows. After introducing the basic principles of the quantum theory of information, and the definitions of von Neumann entropy, concurrence and quantum discord in Sec. <ref>, we introduce in succession two-qubit JCM and three-qubit TCM in Sec. <ref> and Sec. <ref>, respectively. We do the numerical simulations and get some results in Sec. <ref>. Some brief comments on our results and extension to future work in Sec. <ref> close out the paper. § QUANTUMNESS OF CORRELATION §.§ Quantum entanglement When a group of particles are created, interact, or share spatial proximity in a way that prevents each particle's quantum state from being independently described from the states of the others, this phenomenon known as quantum entanglement takes place. Quantum entanglement occurs even when the particles are separated by a great distance. The fundamental difference between classical and quantum physics is the issue of quantum entanglement, which is a key aspect of quantum mechanics that is absent from classical mechanics. The following singlet state of two-spin system is the most prevalent type of quantum entanglement |Ψ⟩=1/√(2)(|↑↓⟩-|↓↑⟩) where ↑ — spin up and ↓ — spin down. For a bipartite system, we set the observed subsystem as 𝒜 and the unobserved subsystem as ℬ. As shown in Fig. <ref>, it is more convenient to detect photons in the cavity QED model than to detect other particles. In simulations, we divide the measurement of quantum entanglement into two cases for discussion: closed and open quantum system. We introduce von Neumann entropy and concurrence to measure the degree of entanglement. §.§.§ von Neumann entropy When the quantum system is closed, the whole system is in pure state |pure⟩, then we can directly use the von Neumann entropy of reduced density matrix, which is used to measure quantum correlation, but it can be regarded as entropy of entanglement when system is closed. In this case, due to the Schmidt decomposition, the von Neumann entropy of each of two subsystems 𝒜 and ℬ can be used as a gauge of the system's entanglement <cit.> E=S(𝒜)=S(ℬ) where S(𝒜), S(ℬ) are corresponding to ρ_𝒜, ρ_ℬ, respectively. And ρ_𝒜(ℬ) — density matrix reduction of ρ_𝒜ℬ, which is the density matrix of whole system, has the following form ρ_𝒜 =Tr_ℬ(ρ_𝒜ℬ) =∑_b(I_𝒜⊗⟨ b|_ℬ)ρ_𝒜ℬ(I_𝒜⊗|b⟩_ℬ) ρ_ℬ =Tr_𝒜(ρ_𝒜ℬ) =∑_a(⟨ a|_𝒜⊗ I_ℬ)ρ_𝒜ℬ(|a⟩_𝒜⊗ I_ℬ) and ρ_𝒜ℬ=|pure⟩⟨ pure| In quantum theory, the definition of von Neumann entropy of reduced density matrix ρ_𝒜(ℬ) is as follows S(𝒜(ℬ))=-Tr(ρ_𝒜(ℬ) log_2ρ_𝒜(ℬ)) In terms of eigenvalues, we have S(𝒜(ℬ))=-∑_iλ_i^𝒜(ℬ)log_2λ_i^𝒜(ℬ) where λ_i^𝒜(ℬ) — eigenvalues of density matrix ρ_𝒜(ℬ). 0≤ S(𝒜(ℬ))≤ log_2N, N=2^n, N — Hilbert space dimension, n — number of qubits. 
S(𝒜(ℬ))=0 — separable state, S(𝒜(ℬ))>0 — entangled state, S(𝒜(ℬ))=log_2N — maximum entangled state. §.§.§ Concurrence Now we introduce concurrence into the two-qubit system, especially when the system is dissipative (at this point, the whole system is in a mixed state), to measure the quantum entanglement. The concurrence is defined as 𝒞=max{√(λ_3)-√(λ_2)-√(λ_1)-√(λ_0), 0} This equation is also called the Hill–Wootters equation <cit.>. Here λ_0, λ_1, λ_2 and λ_3 are the eigenvalues of the matrix ρ̃_𝒜ℬ, ordered as λ_3>λ_2>λ_1>λ_0≥ 0. The matrix ρ̃_𝒜ℬ has the following form ρ̃_𝒜ℬ=ρ_𝒜ℬ(σ_y⊗σ_y)ρ_𝒜ℬ^*(σ_y⊗σ_y)^† where ρ_𝒜ℬ^* — complex conjugation of the matrix ρ_𝒜ℬ, and σ_y=-i|0⟩⟨ 1|+i|1⟩⟨ 0|. 𝒞=0 — separable state, 𝒞>0 — entangled state, 𝒞=1 — maximum entangled state. §.§ Quantum discord Quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system used in quantum information theory. It also contains correlations resulting from quantum physical phenomena but not necessarily from quantum entanglement. Moreover, it serves as a gauge for the quantumness of correlations. §.§.§ Overall correlation First, we carry the concepts of Shannon's theory over to the field of quantum information. In classical information theory, information can be quantified by the Shannon entropy H(X)=-∑_xP_|X=xlnP_|X=x where P_|X=x is the probability that X takes the value x. For two random variables X and Y, the total correlation between them can be measured by the mutual information, which is defined as I(X:Y)=H(X)+H(Y)-H(X,Y) We extend the above equation to the field of quantum information and obtain the quantum mutual information ℐ(ℬ:𝒜) =S(𝒜)+S(ℬ)-S(𝒜ℬ) =ℐ(𝒜:ℬ) where S(𝒜ℬ) — von Neumann entropy for the system ρ_𝒜ℬ, S(𝒜) — entropy for the subsystem ρ_𝒜, S(ℬ) — entropy for the subsystem ρ_ℬ. ℐ(ℬ:𝒜) describes the overall correlation of the system. §.§.§ Classical correlation Then 𝒥(ℬ:𝒜), which describes the maximum classical correlation, is introduced. It has the following form 𝒥(ℬ:𝒜) ={Π^𝒜_k}max[S(ℬ)-S(ℬ|{Π_k^𝒜})] ={Π^𝒜_k}max[S(ℬ)-∑_kp_kS(ρ_k)] =S(ℬ)-{Π^𝒜_k}min∑_kp_kS(ρ_k) where {Π_k^𝒜} is a complete set of projectors for the von Neumann projective measurement (orthogonal projection basis) of subsystem 𝒜. The von Neumann projective measurement basis for a Hilbert space with dimension 2 (one qubit) is as follows {|b_k⟩}={cosθ|0⟩+sinθ|1⟩, sinθ|0⟩-cosθ|1⟩} where θ∈[0,π/2], and the projection operators are Π_0^𝒜 =cos^2θ|0⟩⟨0|+cosθ sinθ|0⟩⟨1| +cosθ sinθ|1⟩⟨0|+sin^2θ|1⟩⟨1| Π_1^𝒜 =sin^2θ|0⟩⟨0|-cosθ sinθ|0⟩⟨1| -cosθ sinθ|1⟩⟨0|+cos^2θ|1⟩⟨1| The von Neumann projective measurement basis in Eq. (<ref>) can be extended to {|b_k⟩}={cosθ|0⟩+sinθ e^iφ|1⟩,sinθ e^-iφ|0⟩-cosθ|1⟩} where φ∈[0,2π]. After constructing the projection operators of subsystem 𝒜, we can measure it and obtain ρ_k, which is defined as follows ρ_k=Tr_𝒜[(Π_k^𝒜⊗ I_ℬ)ρ_𝒜ℬ(Π_k^𝒜⊗ I_ℬ)^†]/p_k where p_k=Tr[(Π_k^𝒜⊗ I_ℬ)ρ_𝒜ℬ(Π_k^𝒜⊗ I_ℬ)^†] Here ρ_k is the state of the subsystem ℬ after a measurement of subsystem 𝒜 leading to an outcome k with probability p_k.
It should be noted that in most cases 𝒥(ℬ:𝒜) is not equal to 𝒥(𝒜:ℬ), described as follows 𝒥(𝒜:ℬ) ={Π^ℬ_k}max[S(𝒜)-S(A|{Π_k^ℬ})] ={Π^ℬ_k}max[S(𝒜)-∑_kp_k'S(ρ_k')] =S(𝒜)-{Π^ℬ_k}min∑_kp_k'S(ρ_k') where ρ_k'=Tr_ℬ[(I_𝒜⊗Π_k^ℬ)ρ_𝒜ℬ(I_𝒜⊗Π_k^ℬ)^†]/p_k' p_k'=Tr[(I_𝒜⊗Π_k^ℬ)ρ_𝒜ℬ(I_𝒜⊗Π_k^ℬ)^†] §.§.§ Quantum correlation Now we define that the quantum discord is the difference between the total correlation and the maximum of classical correlation 𝒟(ℬ:𝒜)=ℐ(ℬ:𝒜)-𝒥(ℬ:𝒜) where 0≤𝒟(ℬ:𝒜)<ℐ(ℬ:𝒜), 𝒟(ℬ:𝒜)≤ S(𝒜). As in the case of the maximum classical correlation, 𝒟(ℬ:𝒜) is not equal to 𝒟(𝒜:ℬ) in most cases. The physical meaning of discord is that the greater 𝒟(ℬ:𝒜), the greater the minimum loss of information in the subsystem ℬ after measurement of subsystem 𝒜. In other words, the greater the minimum disturbance caused to subsystem ℬ after measuring the subsystem 𝒜, the stronger the correlation between ℬ and 𝒜. The maximization in 𝒥(ℬ:𝒜) represents the most information obtained about the system ℬ as a result of perfect measurement {Π_k^𝒜}. It can be shown that the quantum discord is zero for states with only classical correlation and nonzero for states with quantum correlation. § TWO-QUBIT JAYNES–CUMMINGS MODEL In this paper, we investigate quantum entanglement and quantum discord dynamics on two cavity QED models. First, we introduce the two-qubit JCM. The JCM is a theoretical quantum optics model. It explains the interaction between a two-level atom and a quantized mode of an optical cavity. In order to better understand the phenomena of spontaneous emission and photon absorption in a cavity, it was initially designed to examine how atoms interact with the quantized electromagnetic field. Atomic physics, quantum optics, solid-state physics, and QIP all have a lot of theoretical and experimental interest in the JCM. The basic states of the JCM system is as follow |Ψ⟩=|p⟩_ph_𝒜|l⟩_at_ℬ where p=0 — no free photons, p=1 — one free photon, l=0 — electron in ground state, l=1 — electron in excited state. The photon state is partly viewed as the observed by photon detector. Interaction between atom and field is explained in detail in Fig. <ref>. In (a), an electron in the ground state absorbs a photon and transfer to the excited state, which is called excitation. In (b), an electron in the excited state transfer to the ground state after releasing a photon, this process is called de-excitation. (c), (e), (h) correspond respectively to three states for JCM: |01⟩, |10⟩, |00⟩. Excitation process is shown as (e) (f)⟶ (c) De-excitation process is shown as (c) (d)⟶ (e) If the optical cavity is ideal, then the entire system is closed, and the state of the system will oscillate between (c) and (e), since there is no photon leakage (dissipation). The quantum discord is currently greater than or equal to zero, and the system comprises of the entangled state of |01⟩ and |10⟩. The quantum discord is zero when one of the probabilities of |01⟩ and |10⟩ is zero. However, the optical cavity is not ideal in reality, known as open system, and photons will escape from the cavity to the external environment. Dissipation process is shown as (e) (g)⟶ (h) The atoms in the cavity will ultimately stabilize in the ground state |00⟩ due to photon leakage, which means that the system will no longer have quantum correlation and the quantum discord will be zero. 
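Before turning to the Hamiltonian, it is convenient to note how the measures introduced above — the von Neumann entropy of the reduced states, the concurrence of Eq. (<ref>), and the quantum discord 𝒟(ℬ:𝒜) — can be evaluated numerically for a two-qubit density matrix. The following NumPy sketch is ours (it is not code from the original study); the measurement is swept over the extended basis of Eq. (<ref>) on a finite grid whose resolution is an arbitrary choice.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy S(rho) = -Tr(rho log2 rho), computed from eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def reduced(rho_ab, keep):
    """Partial trace of a 4x4 two-qubit state rho_AB; keep='A' or 'B'."""
    r = rho_ab.reshape(2, 2, 2, 2)                    # indices (a, b, a', b')
    return np.einsum('abcb->ac', r) if keep == 'A' else np.einsum('abad->bd', r)

def concurrence(rho_ab):
    """Hill-Wootters concurrence: C = max(sqrt(l3)-sqrt(l2)-sqrt(l1)-sqrt(l0), 0)."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    rho_tilde = rho_ab @ yy @ rho_ab.conj() @ yy
    lam = np.sort(np.sqrt(np.clip(np.linalg.eigvals(rho_tilde).real, 0.0, None)))
    return float(max(lam[3] - lam[2] - lam[1] - lam[0], 0.0))

def discord_BA(rho_ab, n_grid=60):
    """Quantum discord D(B:A): projective measurement on subsystem A, grid over (theta, phi)."""
    S_A, S_B, S_AB = (entropy(reduced(rho_ab, 'A')),
                      entropy(reduced(rho_ab, 'B')), entropy(rho_ab))
    mutual = S_A + S_B - S_AB                         # total correlation I(B:A)
    best_cond = np.inf                                # min over bases of sum_k p_k S(rho_k)
    for th in np.linspace(0.0, np.pi / 2, n_grid):
        for ph in np.linspace(0.0, 2 * np.pi, n_grid):
            b0 = np.array([np.cos(th), np.sin(th) * np.exp(1j * ph)])
            b1 = np.array([np.sin(th) * np.exp(-1j * ph), -np.cos(th)])
            cond = 0.0
            for b in (b0, b1):
                proj = np.kron(np.outer(b, b.conj()), np.eye(2))   # Pi_k^A (x) I_B
                sub = proj @ rho_ab @ proj
                p_k = sub.trace().real
                if p_k > 1e-12:
                    cond += p_k * entropy(reduced(sub / p_k, 'B'))
            best_cond = min(best_cond, cond)
    classical = S_B - best_cond                       # J(B:A)
    return mutual - classical                         # D(B:A) = I(B:A) - J(B:A)

# check on the maximally entangled state (|01> + |10>)/sqrt(2): all measures equal 1
psi = np.zeros(4, dtype=complex); psi[1] = psi[2] = 1.0 / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())
print(entropy(reduced(rho, 'A')), concurrence(rho), discord_BA(rho))
```

These routines assume a 2⊗2 bipartition and can therefore be applied directly to the two-qubit dynamics studied below.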
§.§ Hamiltonian of JCM Before constructing the Hamiltonian, we first introduce rotating wave approximation (RWA) <cit.>, which is taken into account g/ħω_a≈g/ħω_c≪ 1 Usually, for convenience, we assume that the electron transition frequency ω_a and field frequency ω_c are equal, and ω=ω_a=ω_c. And Hamiltonian of JCM has following form H^RWA_JCM=ħω a^†a+ħωσ^†σ+g(a^†σ+aσ^†) where ħ is the reduced Planck constant, g is the coupling intensity between the photonic mode ω and the electron in the molecule. Here a is photon annihilation operator, a^† is photon creation operator, σ is electron excitation operator, and σ^† is electron relaxation operator. The coupling intensity of photon and the electron in the cavity takes the form: g=√(ħω b/V)dE(x) where ω is transition frequency, V is the effective volume of the cavity, d is the dipole moment of the transition between the ground and the perturbed states and E(x) describes the spatial arrangement of the atom in the cavity, which has the form E(x) = sin(π x/l), here l is the length of the cavity. To ensure the confinement of the photon in the cavity, l has to be chosen such that l = mλ/2 is a multiple of the photon wavelength λ. In experiments, m=1 is often chosen to decrease the effective volume of the cavity, which makes it possible to obtain dozens of Rabi oscillations <cit.>. §.§ Quantum master equation The quantum master equation (QME) in the Markovian approximation for the density operator ρ of the open system takes the following form iħρ̇=[H,ρ]+iL(ρ) where [H,ρ]=Hρ-ρ H is the commutator. L(ρ) is as follows L(ρ)=∑_kγ_k(A_kρ A_k^†-1/2{ρ, A_k^†A_k}) where {ρ, A_k^†A_k}=ρ A_k^†A_k + A_k^†A_kρ is the anticommutators, and A_k is jump operator. The term γ_k refers to the overall spontaneous emission rate for photons for k caused by photon leakage from the cavity to the external environment. § THREE-QUBIT TAVIS–CUMMINGS MODEL The OH^+ model within the framework of TCM is an extension of JCM. In the optical cavity exist two artificial two-level atoms, one of them is hydrogen atom, another is oxygen atom. And there is only one valence electron between them. The hybridization and de-hybridization of orbitals is shown in Fig. <ref> (a). Each atom has a structure similar to that in Sec. <ref>. The excitation and de-excitation are shown in Fig. <ref> (b) and (c). Electron in atomic ground orbital |0⟩_O (or |0⟩_H) is bound to the nucleus and not participating in the formation of chemical bonds. Only when the electron is in atomic excited state |1⟩_O (or |1⟩_H), atomic orbitals can be hybridized into molecular orbitals — the ground |Φ_0⟩ and excited |Φ_1⟩ ones |Φ_0⟩ =α|1⟩_O +β|1⟩_H |Φ_1⟩ =β|1⟩_O -α|1⟩_H where α and β are positive coefficients depend on depth of potential wells in atoms, α≥β, α^2+β^2=1. There is only one valence electron in the model, the basic state is denoted as follows: |ψ⟩=|p⟩_ph_𝒜|l⟩_mol|k⟩_n_ℬ where p — number of photons, n=0,1; l=0 — |Φ_0⟩, l=1 — |Φ_1⟩; k=0 means the nuclei are close from each other (at this point covalent bond is strong), and k=1 means far away (at this point covalent bond is weak). Fig. <ref> details the interactions and dissipations of the OH^+ model. (d) ∼ (i) correspond to six states for OH^+ model: |000⟩, |001⟩, |010⟩, |011⟩, |100⟩, |101⟩. 
Excitation processes are shown as (h) (k)⟶ (f), (i) (m)⟶ (g) De-excitation processes are shown as (f) (j)⟶ (h), (g) (l)⟶ (i) And dissipation processes are shown as (h) (n)⟶ (d), (i) (o)⟶ (e) §.§ Hamiltonian of the OH^+ model Hamiltonian of the OH^+ model has following form H^RWA_OH^+ =ħω_c a^† a+ħω_bσ_b^†σ_b+ħω_aσ_a^†σ_a +g_b(σ_b+σ_b^†)+g_a(a^†σ_a+aσ_a^†) where a, a^† — photon annihilation and creation operators; σ_b, σ_b^† — covalent bond breaking and formation operator, σ_a, σ_a^† — electron relaxation and excitation operator; ω_c — photon frequency in the cavity; ω_b — phonon frequency, corresponding to the energy of covalent bond; ω_a — atomic transition frequency, and ω_c=ω_a=ω; g_b — intensity of formation of covalent bond; g_a — intensity of interaction of a molecule with the field. When electron is in the excited state, the system will be more unstable than when in the ground state. This means that covalent bond is more easily broken or formed. At this time, g_b=g_b_1, which is large. When electron is in the ground state, the system will be more stable. It will be difficult to form or break covalent bonds. At this time, g_b=g_b_0, which is weak, and g_b_1≫ g_b_0. In addition, we stipulate that when the distance between two atoms is close, the covalent bond is strong (this means that the diatomic system tends towards the molecular form), at this time, g_a=g_a_0, which is large; when the distance is far, the covalent bond is weak (this means that the diatomic system tends towards the independent atoms form), at this time, g_a=g_a_1, which is weak. We assume g_a_1≪ g_a_0. The quantum master equation of the OH^+ model is similar to that in Subsec. <ref>. § SIMULATIONS AND RESULTS In this paper, quantum evolutions are schematically modelled by solving the QME with the Lindblad operators of photon leakage from the cavity to external environment. QME approach has been used to study the dynamics of quantum open system <cit.>, and it is consistent with the laws of quantum thermodynamics <cit.>. It is only applicable for Markovian approximation. The solution ρ(t) in Eq. (<ref>) may be approximately found as a sequence of two steps: in the first step we make one step in the solution of the unitary part of Eq. (<ref>) ρ̃(t+dt)=exp(-i/ħHdt)ρ(t)exp(i/ħHdt) and in the second step, make one step in the solution of Eq. (<ref>) with the commutator removed: ρ(t+dt)=ρ̃(t+dt)+1/ħL(ρ̃(t+dt))dt §.§ Two-qubit system In simulations for two-qubit system: ω=10^8, g=10^6. And the initial state is defined as follows |Ψ(0)⟩=cosα|01⟩+sinα|10⟩ where α∈[0,π/4]. §.§.§ Closed system In Fig. <ref>, the initial state |Ψ(0)⟩=|01⟩ (α=0), and (a) ∼ (c) are the results of the closed quantum system (γ=0). When the system is closed, the quantum evolution presents oscillation of |01⟩ and |10⟩ in Fig. <ref> (a). At this time, the |00⟩ is always 0, because there is no photon leakage, thus the |00⟩ cannot be obtained through dissipation. We set the state of the entire system as |Ψ⟩=a|01⟩+b|10⟩ where a^2+b^2=1. In Fig. <ref> (b), von Neumann entropy of entire system S(𝒜ℬ) is always equal to 0, because whole system is pure state. At this time, for a bipartite system, S(𝒜) is always equal to S(ℬ). When coefficients a and b from Eq. (<ref>) are equal, and |Ψ⟩=1/√(2)|01⟩+1/√(2)|10⟩. The system reaches the maximum entanglement. We get that S(𝒜) (S(ℬ)) is equal to 1 at this time, and the maximum value is obtained (S_max(𝒜)=log_2N, where N=2^n=2, because subsystem consists of a qubit (n=1), thus S_max(𝒜)=1). 
When a=1, b=0 (or a=0, b=1), the entanglement of system is 0. We get S(𝒜)=S(ℬ)=0. In Fig. <ref> (c), concurrence, defined in Eq. (<ref>), is introduced to measure quantum entanglement. Quantum discord, defined in Eq. (<ref>), is introduced to measure quantum correlation. Like von Neumann entropy, both concurrence and quantum discord reach the maximum value 1 when the system is at the maximum entanglement degree, and reach 0 when quantum entanglement disappears. Moreover, concurrence and quantum discord are completely synchronized with von Neumann entropy. Comparing the Fig. <ref> (c) and Fig. <ref> (a), (b) and (c), it can be found that when the initial entanglement degree is larger (α is larger), the lower limit of quantum discord and concurrence is larger. When α is 0, the initial entanglement degree is 0, and the lower limit is also 0 at this time, that is, the curves of quantum discord and concurrence oscillate between 0 and 1. As α becomes larger, the curves are gradually compressed towards the 1 boundary. Until α is π/4, the initial entanglement is the largest, and at this time the curves become horizontal lines that is always 1. §.§.§ Open system Now we focus on the open quantum system (γ=g). At this time, the whole system is no longer in the pure state. In Fig. <ref> (a), due to the presence of photon leakage, the oscillation between the |01⟩ and |10⟩ not exists. Instead, the probability of |00⟩ gradually increases from 0 to 1. Now we set the state of the entire system as |Ψ⟩=a|01⟩+b|10⟩+c|00⟩ where a^2+b^2+c^2=1. According to Fig. <ref> (a), the probability of |00⟩ is always non-zero when the evolution begins. And a|01⟩ + c|00⟩=(a+c)|0⟩(a|1⟩+c|0⟩), which shows that it is not entangled. Similarly, b|10⟩ + c|00⟩=(b|1⟩+c|0⟩)(b+c)|0⟩, which are also not entangled. So the entanglement property of the whole system is still determined by |01⟩ and |10⟩. In Fig. <ref> (b), S(𝒜) and S(ℬ) are no longer coincident. This is because S(𝒜) is synchronized with |10⟩ and S(ℬ) is synchronized with state |01⟩. Therefore, when the system is closed, states |10⟩ and |01⟩ are synchronized, so S(𝒜) and S(ℬ) can coincide. But when the system is open, these two states are not synchronized, then S(𝒜) and S(ℬ) cannot coincide. Thus, von Neumann entropy cannot be used to accurately measure quantum entanglement. But we can try to multiply S(𝒜) and S(ℬ). At this time, S(𝒜)S(ℬ) combines S(𝒜) (only synchronized with |10⟩) and S(ℬ) (only synchronized with |01⟩), which can be used to simply describe the entanglement of both closed (see gray curve in Fig. <ref> (b)) and open quantum system (see gray curve in Fig. <ref> (b)). In addition, S(𝒜)S(ℬ) also synchronizes with quantum discord and concurrence in Fig. <ref> (c). According to quantum discord in Fig. <ref>, when there is photon leakage, the quantum correlation of the system will gradually weaken to 0 (at this point, the system only exists the state of |00⟩). Comparing the Fig. <ref> (c), Fig. <ref> (c) and Fig. <ref> (a), (b), (c), it can be found that when the γ is larger, the oscillation of quantum discord and concurrence fade faster. Only when γ is 0, the oscillation always exists. §.§ Three-qubit system In simulations for three-qubit system: ω_c=ω_a=10^9, ω_b=10^8, g_b_0=10^4, g_b_1=100g_b_0, g_a_1=2g_b_1, g_a_0=100g_a_1. And the initial state is defined as follows |Ψ(0)⟩=|100⟩ In the initial condition, an electron is at ground-level, a photon is present in the cavity, and two atoms are far away from one another. 
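To make the numerical procedure explicit, the following is a minimal sketch (ours, with ħ set to 1 and an integration step chosen by us rather than taken from the text) of the two-step scheme described at the beginning of this section, applied to the dissipative two-qubit model with the parameters quoted above; the three-qubit model is handled in the same way with the Hamiltonian of Eq. (<ref>).

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                       # work in units with hbar = 1
omega, g = 1e8, 1e6              # frequency and coupling used in the two-qubit simulations
gamma = g                        # photon leakage rate of the open-system runs
alpha = 0.0                      # initial state cos(alpha)|01> + sin(alpha)|10>
dt, n_steps = 1e-10, 50_000      # integration step and length (our choices)

low = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # |0><1|
I2 = np.eye(2, dtype=complex)
a = np.kron(low, I2)             # photon annihilation (0/1 photon subspace)
s = np.kron(I2, low)             # atomic lowering operator

H = hbar * omega * (a.conj().T @ a + s.conj().T @ s) + g * (a.conj().T @ s + a @ s.conj().T)
U = expm(-1j * H * dt / hbar)    # one unitary step

ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)
psi0 = np.cos(alpha) * np.kron(ket0, ket1) + np.sin(alpha) * np.kron(ket1, ket0)
rho = np.outer(psi0, psi0.conj())

num = a.conj().T @ a
for _ in range(n_steps):
    rho = U @ rho @ U.conj().T                                   # unitary part of the step
    rho = rho + (gamma / hbar) * dt * (a @ rho @ a.conj().T
                                       - 0.5 * (rho @ num + num @ rho))  # Lindblad part
# populations of |00>, |01>, |10>: most of the weight has leaked into |00>
print(rho[0, 0].real, rho[1, 1].real, rho[2, 2].real)
```

The density matrix obtained at each step can be fed into the entropy, concurrence and discord routines sketched in Sec. <ref> to produce curves of the type discussed in this section.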
§.§.§ Quantum motion In this simulation, g_a_0 and g_a_1 are both constants. g_b_0, and g_b_1 are also constants due to the quantum motion of nuclei. The quantum movement of nuclei refers to the tunneling effect, that is, two atomic nuclei move from the close place to the far away place instantaneously. At this time, the intensities of covalent bond formation or breaking are constants. Likewise, movement of nuclei from the far away place to the close place is also instantaneous. In Fig. <ref>, the inner figure shows quantum evolution of the OH^+ model with the dissipation of photon, associated only with electron transition. The evolution presents Rabi oscillation of states |000⟩ and |001⟩, other states quickly decay to 0. And the outer figure shows the quantum discord dynamics of this system. We can set the final state of entire system as Ψ=a|000⟩+b|001⟩ where a^2+b^2=1. And a|000⟩+b|001⟩=(a+b)|0⟩(a|00⟩+b|01⟩), which shows that it's not entangled between subsystems 𝒜 and ℬ, defined in Eq. (<ref>). Therefore, the quantum discord rises to 1 from the beginning, and then decays to 0 (at this time, only the oscillations of |000⟩ and |001⟩ remain in the system) in violent oscillations. §.§.§ Classical motion In this simulation, g_a_0 and g_a_1 are also both constants, but g_b_0 and g_b_1 decrease gradually from maximum value to 0, at the same time, the movement of atomic nuclei gradually stopped. The biggest difference between the classical and quantum motion of atomic nuclei is that the classical motion refers to the slow movement of the nucleus from near to far, while the quantum motion refers to the repeated jumping of nuclei between the far away place and the close place due to the quantum tunneling effect. From Fig. <ref>, we can see that as the number of iterations increases, g_b_0 and g_b_1 are getting smaller and smaller, the two nuclei are farther and farther apart, and the oscillation between |000⟩ and |001⟩ begins to deform (from vigorous to slow, in other words, the oscillation period becomes larger and larger). If the moving distance of the nuclei from near to far is constant, then the larger the superior limit of the number of iterations, the slower the nucleus moves. Comparing (a), (b) and (c), it can be found that the slower the movement, the more times the Rabi oscillations. Now we focus on the quantum discord dynamics in Fig. <ref> (d), (e) and (f). Whether it is quantum motion or classical motion, quantum dynamics is almost the same. This is because the classic motion only makes |000⟩ and |001⟩ deform significantly, and the oscillation of |000⟩ and |001⟩ cannot cause the entanglement between subsystems 𝒜 and ℬ. At the same time, the deformation of other states is not obvious by classical motion. § CONCLUDING DISCUSSION AND FUTURE WORK In this paper, we simulate the quantum entanglement and quantum discord dynamics in the cavity QED models. We have derived some analytical results of it: * In Sec. <ref>, we study two-qubit closed system of JCM. We found that quantum discord is synchronized with quantum entanglement (von Neumann entropy and concurrence). At the same time, it is found that the greater the initial entanglement degree, the higher the lower limit of the quantum discord and concurrence, preferably as the initial entanglement reaches the maximum, the quantum discord and concurrence becomes a horizontal line. * In Sec. <ref>, we study two-qubit open system of JCM. 
We found that the von Neumann entropies alone cannot be used to describe the entanglement of the open system, but concurrence can. In addition, we define the product S(𝒜)S(ℬ), which may serve as a good measure of the degree of entanglement in the two-qubit open system. Quantum discord is synchronized with quantum entanglement (concurrence). At the same time, it is found that the greater the dissipation strength, the faster the quantum discord and concurrence decay. * In Sec. <ref>, we have studied the quantum motion of atomic nuclei and found that the quantum discord between subsystems 𝒜 and ℬ vanishes once the final state of the system reduces to the oscillation of |000⟩ and |001⟩. * In Sec. <ref>, we have studied the classical motion of atomic nuclei. According to our findings, although classical motion significantly deforms the oscillations, its impact on the quantum discord dynamics is negligible. Although we only studied the quantum entanglement and quantum discord dynamics in the simplest cavity QED models via a bipartite measurement, the results found here will serve as a basis for extending the research to more complex modifications of the cavity QED model in the future. The reported study was funded by China Scholarship Council, project numbers 202108090327, 202108090483.
http://arxiv.org/abs/2307.04204v1
20230709151645
Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory
[ "Minhak Song", "Chulhee Yun" ]
cs.LG
[ "cs.LG", "math.OC", "stat.ML" ]
Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory Minhak Song, Chulhee Yun August 12, 2023 ========================================================================================== <cit.> empirically study the evolution of the largest eigenvalue of the loss Hessian, also known as sharpness, along the gradient descent (GD) trajectory and observe the Edge of Stability (EoS) phenomenon. The sharpness increases at the early phase of training (referred to as progressive sharpening), and eventually saturates close to the threshold of 2 / (step size). In this paper, we start by demonstrating through empirical studies that when the EoS phenomenon occurs, different GD trajectories (after a proper reparameterization) align on a specific bifurcation diagram independent of initialization. We then rigorously prove this trajectory alignment phenomenon for a two-layer fully-connected linear network and a single-neuron nonlinear network trained with a single data point. Our trajectory alignment analysis establishes both progressive sharpening and EoS phenomena, encompassing and extending recent findings in the literature. § INTRODUCTION It is widely believed that implicit bias or regularization of gradient-based methods plays a key role in generalization of deep learning <cit.>. There is a growing literature <cit.> studying how the choice of optimization methods induces an implicit bias towards specific solutions among the many global minima in overparameterized settings. <cit.> identify a surprising implicit bias of gradient descent (GD) towards global minima with a certain sharpness[Throughout this paper, the term “sharpness” means the maximum eigenvalue of the training loss Hessian.] value depending on the step size η.
Specifically, for reasonable choices of η, (a) the sharpness of the loss at the GD iterate gradually increases throughout training until it reaches the stability threshold[For quadratic loss, GD becomes unstable if the sharpness is larger than a threshold of 2/(step size).] of 2/η (known as progressive sharpening), and then (b) the sharpness saturates close to or above the threshold for the remainder of training (known as Edge of Stability (EoS)). These findings have sparked a surge of research aimed at developing a theoretical understanding of the progressive sharpening and EoS phenomena <cit.>. In this paper, we study these phenomena through the lens of bifurcation theory, both empirically and theoretically. Motivating observations: Figure <ref> illustrates the GD trajectories with different initializations and fixed step sizes trained on three types of two-dimensional functions: (Figure:toy_a) log(cosh(xy)), (Figure:toy_b) 1/2(tanh(x)y)^2, and (Figure:toy_c) 1/2(ELU(x)y)^2, where x and y are scalars. The functions ℒ:^2→ have sharpness y^2 at the global minimum (0, y) for all three models. These toy models can be viewed as examples of single-neuron models, where (Figure:toy_a) represents a linear network with log-cosh loss, while (Figure:toy_b) and (Figure:toy_c) represent nonlinear networks with squared loss. These simple models can capture some interesting aspects of neural network training in the EoS regime, which are summarized below: * EoS phenomenon: GD converges to a global minimum near the point (0,√(2/η)) with sharpness close to 2/η. During the convergence phase, the training dynamics exhibit period-2 oscillations. * For different initializations, GD trajectories for a given step size align on the same curve. For example, Figure <ref> shows that GD trajectories with different initializations closely follow a specific U-shaped curve until convergence. We call this phenomenon trajectory alignment. * In Figures <ref> and <ref>, GD trajectories are aligned on a curve with a fractal structure that qualitatively resembles the bifurcation diagram of a typical polynomial map, such as the logistic map. Particularly, Figure <ref> demonstrates a period-halving phase transition in the GD dynamics, shifting from period-4 oscillation to period-2 oscillation. * Surprisingly, the curve that GD trajectories approach and follow coincides with the bifurcation diagram of a one-dimensional map x ↦ x - η∂/∂ xℒ(x, y) with a fixed “control parameter” y. The stability of its fixed point x=0 changes at the bifurcation point (x, y) = (0, √(2/η)), where period-doubling bifurcation occurs. Note that this point is a global minimum with sharpness 2/η. Interestingly, such striking behaviors can also be observed in more complex models, up to a proper reparameterization, as we outline in the next subsection. §.§ Our contributions In this paper, we discover and study the trajectory alignment behavior of (reparameterized) GD dynamics in the EoS regime. To our best knowledge, we are the first to identify such an alignment with a specific bifurcation diagram solely determined by the loss. Our empirical findings are rigorously proven for both two-layer fully-connected networks and single-neuron nonlinear networks. Our main contributions are summarized below: * In Section <ref>, we introduce a novel canonical reparameterization of training parameters, which incorporates the data, network, and GD step size. This reparameterization allows us to study the trajectory alignment phenomenon in a unified framework. 
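As a concrete reference point, the first toy experiment can be reproduced with a few lines of plain gradient descent on ℒ(x,y)=log(cosh(xy)); the step size and initialization ranges below are illustrative choices of ours, not the exact settings behind the figures.

```python
import numpy as np

eta = 0.05                                   # step size (illustrative)
rng = np.random.default_rng(0)

def grad(x, y):
    # L(x, y) = log(cosh(x*y));  dL/dx = y*tanh(x*y),  dL/dy = x*tanh(x*y)
    t = np.tanh(x * y)
    return y * t, x * t

finals = []
for _ in range(5):                           # several random initializations
    x = rng.uniform(0.1, 1.0) * rng.choice([-1.0, 1.0])
    y = rng.uniform(7.0, 10.0)               # start with y^2 > 2/eta, above the stability threshold
    for _ in range(2000):
        gx, gy = grad(x, y)
        x, y = x - eta * gx, y - eta * gy    # x oscillates with period 2 while y drifts down
    finals.append((x, y))

# every run ends near the global minimum (0, sqrt(2/eta)), whose sharpness is ~ 2/eta
print(np.sqrt(2.0 / eta), finals)
```

Storing the iterates (x_t, y_t) along the way and plotting them reproduces the U-shaped alignment described above.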
Through empirical study, Section <ref> demonstrates that the alignment of GD trajectories on a bifurcation diagram is not limited to toy models but also occurs in wide and deep networks. Remarkably, these bifurcation diagrams are exclusively determined by the loss and independent of initialization. Furthermore, we find that the alignment trend becomes more pronounced as the network width increases. * In Section <ref>, we use our canonical reparameterization to establish the trajectory alignment phenomenon for two-layer fully-connected linear networks trained with a single data point. Our theoretical analysis rigorously proves both progressive sharpening and the EoS phenomenon, extending the work of <cit.> to a much broader class of networks and also providing more accurate bounds on the limiting sharpness. * Our empirical and theoretical analyses up to Section <ref> are applicable to convex Lipschitz losses, hence missing the popular squared loss. In Section <ref>, we take a step towards handling the squared loss. Employing an alternative reparameterization, we prove the same set of theorems as Section <ref> for a single-neuron nonlinear network trained with a single data point under squared loss. §.§ Related works The Edge of Stability (EoS) phenomenon has been extensively studied in recent years, with many works seeking to provide a deeper understanding of the evolution of sharpness and the oscillating dynamics of GD. <cit.> first formalize EoS through empirical study, and subsequent works have built on their findings. <cit.> analyze EoS through experiments and identify the relations between the behavior of loss, iterates, and sharpness. <cit.> suggest that subquadratic growth of the loss landscape is the key factor of oscillating dynamics. <cit.> show that (normalized) GD enters the EoS regime, by verifying the convergence to some limiting flow on the manifold of global minimizers. <cit.> divide GD trajectory into four phases and explain progressive sharpening and EoS by using the norm of output layer weight as an indicator of sharpness. <cit.> use the third-order Taylor approximation of the loss to theoretically analyze EoS, assuming the existence of progressive sharpening. <cit.> prove that normalization layers encourage GD to reduce sharpness. Concurrent to our work, <cit.> study the logistic regression problem with separable dataset and establish that GD exhibits an implicit bias toward the max-margin solution in the EoS regime, extending prior findings in the small step size regime <cit.>. Some recent works rigorously analyze the full GD dynamics for some toy cases and prove that the limiting sharpness is close to 2/η. <cit.> study the loss (x,y) ↦1/4(x^2y^2-1)^2 and prove that the sharpness converges close to 2/η with a local convergence guarantee. Notably, <cit.> study the function (x,y) ↦ℓ(xy) where ℓ is convex, even, and Lipschitz, and provide a global convergence guarantee. The authors prove that when ℓ is log-cosh loss or square root loss, the limiting sharpness in the EoS regime is between 2/η - 𝒪(η) and 2/η. Our theoretical results extend their results on a single-neuron linear network to two-layer fully-connected linear networks and provide an improved characterization on the limiting sharpness, tightening the gap between upper and lower bounds to only 𝒪(η^3). The trajectory alignment phenomenon is closely related to <cit.> which shows empirical evidence of bifurcation-like oscillation in deep neural networks trained on real-world data. 
However, their empirical results do not show the alignment property of GD trajectory. In comparison, we observe that GD trajectories align on the same bifurcation diagram, independent of initialization. Very recently, <cit.> observe a similar trajectory alignment phenomenon for scalar linear networks, employing a reparameterization based on the sharpness of the gradient flow solution. However, their empirical findings on trajectory alignment are confined to scalar linear networks, and do not provide a theoretical explanation. In contrast, our work employs a novel canonical reparameterization and offers empirical evidence for the alignment phenomenon across a wide range of networks. Moreover, we provide theoretical proofs for two-layer linear networks and single-neuron nonlinear networks. § PRELIMINARIES Notations. For vectors and , we denote the ℓ_p norm of by _p , their tensor product as ⊗, and ⊗ by ^⊗ 2. For a matrix , we denote the spectral norm by _2. Given a function ℒ and a parameter Θ, we use λ_max(Θ)λ_max (∇_Θ^2 ℒ(Θ)) to denote the sharpness (i.e., the maximum eigenvalue of the loss Hessian) at Θ. We use asymptotic notations with subscripts (e.g., 𝒪_ℓ(·), 𝒪_δ, ℓ(·)) in order to hide constants that depend on the parameters or functions written as subscripts. §.§ Problem settings We study the optimization of neural network f( · ; Θ) : ^d→ parameterized by Θ. We focus on a simple over-parameterized setting trained on a single data point {(, y)}, where ∈^d and y∈. We consider the problem of minimizing the empirical risk ℒ(Θ) = ℓ(f(; Θ) - y), where ℓ is convex, even, and twice-differentiable with ℓ”(0)=1. We minimize ℒ using GD with step size η: Θ_t+1 = Θ_t - η∇_Θℒ(Θ_t). The gradient and the Hessian of the function are given by ∇_Θℒ (Θ) = ℓ'(f(; Θ) - y) ∇_Θ f(; Θ), ∇_Θ^2 ℒ (Θ) = ℓ”(f(; Θ) - y) (∇_Θ f(; Θ))^⊗ 2 + ℓ'(f(; Θ) - y) ∇_Θ^2f(; Θ). Suppose that Θ^* be a global minimum of ℒ, i.e., f(; Θ^*) = y. In this case, the loss Hessian and the sharpness at Θ^* are simply characterized as ∇_Θ^2 ℒ (Θ^*) = (∇_Θ f(; Θ^*))^⊗ 2 , and λ_max(Θ^*) = ∇_Θ f(; Θ^*)_2^2. §.§ Canonical reparameterization For given step size η, the canonical reparameterization of Θ is defined as (p, q) ( f(; Θ) - y, 2/η∇_Θ f(; Θ)_2^2). Under the canonical reparameterization, p=0 represents global minima, and Eq. (<ref>) implies that the point (p,q)=(0,1) is a global minimum with sharpness 2/η. The update of p can be written as p_t+1 = f(x; Θ_t+1) - y = f(x; Θ_t - ηℓ'(f(; Θ_t) - y) ∇_Θ f(; Θ_t)) - y ≈ f(x; Θ_t) - ∇_Θ f(; Θ_t)^⊤(ηℓ'(f(; Θ_t) - y) ∇_Θ f(; Θ_t)) - y = (f(x; Θ_t) - y) - ηℓ'(f(; Θ_t) - y) ∇_Θ f(; Θ)_2^2 = p_t - 2 ℓ'(p_t)/q_t, which can be obtained by first-order Taylor approximation on f for small step size η.[The approximation is used just to motivate Lemma <ref>; in our theorems, we analyze the exact dynamics.] §.§ Bifurcation analysis Motivated from the approximated 1-step update rule given by Eq. (<ref>), we conduct the bifurcation analysis on this one-dimensional map, considering q_t as a control parameter. We first review some basic notions used in bifurcation theory <cit.>. Let z_0 be a fixed point of a differentiable map f:→, i.e., f(z_0)=z_0. We say z_0 is the stable fixed point of f if f'(z)<1, and we say z_0 is the unstable fixed point of f if f'(z)>1. A point z_0 is called a period-p point of a map f:→ if z_0 is the fixed point of f^p and f^j(z_0) z_0 for any 1≤ j ≤ p-1. The orbit of z_0, given by { z_j=f^j(z_0) | j=0,1,…,p-1} is called the period-p orbit of f. 
A period-p orbit is stable (unstable) if its elements are stable (unstable) fixed points of f^p, i.e., ∏_j=0^p-1f'(z_j) < 1 (> 1). Now we analyze the bifurcation of the one-parameter family of mappings f_q: → given by f_q(p) p(1-2r(p)/q), where q is a control parameter and r is a differentiable function satisfying Assumption <ref> below. A function r:→ is even, continuously differentiable, r(0)=1, r'(0)=0, r'(p)<0 for any p>0, and lim_p→∞r(p) = lim_p→ -∞r(p)= 0. In other words, r is a smooth, symmetric bell-shaped function with the maximum value r(0)=1. We note that Eq. (<ref>) can be rewritten as p_t+1 = f_q_t(p_t) if we define r by r(p) ℓ'(p)/p for p 0 and r(0) 1. Below are examples of ℓ for which the corresponding r's satisfy Assumption <ref>. These loss functions were previously studied by <cit.> to explain EoS for (x,y)↦ℓ(xy). * log-cosh loss: ℓ_log-cosh(p) log(cosh(p)). Note ℓ'_log-cosh(p) = tanh(p). * square-root loss: ℓ_sqrt(p) √(1 + p^2). Note ℓ'_sqrt(p) = p/√(1+p^2). If r satisfies Assumption <ref>, then for any 0<q≤ 1, there exists a nonnegative number p such that r(p)=q, and the solution is unique which we denote by r̂(q). In particular, r̂: (0,1]→_≥0 is a function satisfying r(r̂(q))=r(-r̂(q))=q for any q∈ (0,1]. [period-doubling bifurcation of f_q]lemmaLemmaBifurcation Suppose that r is a function satisfying Assumption <ref>. Let p^* = sup{p≥ 0 |xr'(x)/r(x) > -1 for any x≤ p} and c = r(p^*). If p^*=∞, we choose c=0. Then, the one-parameter family of mappings f_q: → given by Eq. (<ref>) satisfies * If q>1, p=0 is the stable fixed point. * If q ∈ (c,1), p=0 is the unstable fixed point and {±r̂(p)} is the stable period-2 orbit. The map f_q has the unique fixed point p=0 for any q>0. Since f_q'(0) = 1 - 2/q, p=0 is a stable fixed point if q>1 and p=0 is an unstable fixed point if 0<q<1. Now suppose that q∈ (c,1). Then, we have f_q(r̂(q)) = -r̂(q) and f_q(-r̂(q)) = r̂(q), which implies that {±r̂(q)} is a period-2 orbit of f_q. Then, f_q'(r̂(q)) = f_q'(-r̂(q)) = | 1+2r̂(q)r'(r̂(q))/q| < 1 implies that {±r̂(q)} is a stable period-2 orbit. According to Lemma <ref>, the stability of the fixed point p=0 undergoes a change at q=1, resulting in the emergence of a stable period-2 orbit. The point (p,q)=(0,1) is referred to as the bifurcation point, where a period-doubling bifurcation occurs. A bifurcation diagram illustrates the points asymptotically approached by a system as a function of a control parameter. In the case of the map f_q, the corresponding bifurcation diagram is represented by p=0 for q≥ 1 and p=±r̂(q) (or equivalently, q = r(p)) for q∈ (c,1). It is worth noting that the period-2 orbit {±r̂(p)} becomes unstable for q∈(0,c). If we choose r to be r(p) = ℓ'(p)/p for p 0 and r(0)=1, then 1 + pr'(p)/r(p) = ℓ”(p)/r(p) > 0 for all p, assuming ℓ is convex. Consequently, for log-cosh loss and square root loss we have c=0, indicating that the period-2 orbit of f_q remains stable for all q∈ (0,1). However, in Section <ref>, we will consider r with c>0, which may lead to additional bifurcations. § TRAJECTORY ALIGNMENT OF GD: AN EMPIRICAL STUDY In this section, we conduct experimental studies on the trajectory alignment phenomenon in GD dynamics under the canonical reparameterization proposed in Section <ref>. We consider a fully-connected L-layer neural network f( · ; Θ): ^d → written as f(; Θ) = _L^⊤ϕ(_L-1ϕ(⋯ϕ(_2 ϕ(_1 ))⋯)), where ϕ is an activation function, _1∈^m× d, _l∈^m × m for 2≤ l ≤ L-1, and _L∈^m. All L layers have the same width of m. 
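To fix the implementation details, the following PyTorch sketch (ours, not the released code of the paper) builds the network above and evaluates the canonical reparameterization (p, q) of Definition <ref> along a GD run; the width, depth, gain, loss and data values mirror representative settings used in this section.

```python
import torch
import torch.nn as nn

d, m, L = 10, 256, 3                 # data dimension, width, depth
eta, alpha, y = 0.01, 4.0, 1.0       # step size, init gain, label
x = torch.zeros(d); x[0] = 1.0       # x = e_1

def make_net():
    layers, dims = [], [d] + [m] * (L - 1)
    for i in range(L - 1):
        layers += [nn.Linear(dims[i], dims[i + 1], bias=False), nn.Tanh()]
    layers += [nn.Linear(m, 1, bias=False)]
    net = nn.Sequential(*layers)
    for w in net.parameters():
        nn.init.xavier_normal_(w, gain=alpha)      # Xavier init rescaled by the gain alpha
    return net

def canonical_pq(net):
    """p = f(x; Theta) - y,  q = 2 / (eta * ||grad_Theta f(x; Theta)||^2)."""
    out = net(x).squeeze()
    grads = torch.autograd.grad(out, list(net.parameters()))
    sq_norm = sum(g.pow(2).sum() for g in grads)
    return (out - y).item(), (2.0 / (eta * sq_norm)).item()

net = make_net()
for t in range(1000):                # plain full-batch GD with the log-cosh loss
    p, q = canonical_pq(net)         # record (p_t, q_t) to draw the reparameterized trajectory
    loss = torch.log(torch.cosh(net(x).squeeze() - y))
    grads = torch.autograd.grad(loss, list(net.parameters()))
    with torch.no_grad():
        for w, g in zip(net.parameters(), grads):
            w -= eta * g
print(p, q)                          # in the EoS regime (q_0 < 1), q_t approaches 1
```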
We minimize the empirical risk ℒ(Θ) = ℓ(f(; Θ)-y). We visualize GD trajectories under the canonical parameterization, where each plot shows five different randomly initialized weights using Xavier initialization multiplied with a rescaling factor of α. For this analysis, we fix the training data point and hyperparameters as = _1 = (1,0,…,0), y=1, η=0.01, d=10, and focus on the log-cosh loss for ℓ, with either ϕ(t) = t (linear) or ϕ(t) = tanh(t). We note that the trajectory alignment phenomenon is consistently observed in other settings, including square root loss, different activations (e.g., ELU), and various hyperparameters, in particular for sufficiently wide networks (additional experimental results are provided in Appendix <ref>). The effect of initialization scale. In Figures <ref> and <ref>, we examine the effect of the initialization scale α on GD trajectories in a two-layer fully-connected linear network with a width of m=256. In Figure <ref>, when the weights are initialized with a smaller scale (α=5), the initial value of q is greater than 1, and it converges towards the minimum with only a small change in q_t until convergence. In this case, the limiting sharpness is relatively smaller than 2/η, and the EoS phenomenon does not occur. This case is referred to as the gradient flow regime <cit.>. On the other hand, in Figure <ref>, when the weights are initialized with a larger scale (α=10), the initial value of q is less than 1, and we observe convergence towards the point (close to) (p,q) = (0,1). This case is referred to as the EoS regime. We note that choosing larger-than-standard scale α is not a necessity for observing EoS; we note that even with α = 1, we observe the EoS regime when η is larger. Trajectory alignment on the bifurcation diagram. In order to investigate the trajectory alignment phenomenon on the bifurcation diagram, we plot the bifurcation diagram q = ℓ'(p)/p and observe that GD trajectories tend to align with this curve, which depends solely on ℓ. Figure <ref> clearly demonstrates this alignment phenomenon. Additionally, we analyze the evolution of q_t/r(p_t) and p_t in Figure <ref>. We observe that the evolution of q_t/r(p_t) follows two phases. In Phase 1, q_t/r(p_t) approaches to 1 quickly. In Phase 2, the ratio remains close to 1. Notably, the convergence speed of q_t/r(p_t) towards 1 is much faster than the convergence speed of p_t towards 0. In Sections <ref> and <ref>, we will provide a rigorous analysis of this behavior, focusing on the separation between Phase 1 and Phase 2. The effect of width and depth. In Figure <ref>, we present the GD trajectories of tanh-activated networks with different widths and depths (α = 5). All three cases belong to the EoS regime, where GD converges to a point close to (p,q)=(0,1), resulting in a limiting sharpness near 2/η. However, when comparing Figures <ref> and <ref>, we observe that the trajectory alignment phenomenon is not observed for the narrower network with m=64, whereas the GD trajectories for the wider network with m=256 are clearly aligned on the bifurcation diagram. This suggests that network width plays a role in the trajectory alignment phenomenon. Furthermore, we note that the trajectory alignment phenomenon is also observed for a deeper network with L=10, as depicted in Figure <ref>. Multiple training data points. In our trajectory alignment analysis, we have primarily focused on training with a single data point. 
However, it is important to explore the extension of this phenomenon to scenarios with multiple training data points. To investigate this, we train a neural network on a dataset {(_i, y_i)}_i=1^n, where _i∈^d and y_i∈, by minimizing the empirical risk ℒ(Θ) 1/n∑_i=1^nℓ(f(_i; Θ) - y_i). Defining ∈^n× d as the data matrix and ∈^n as the label vector, we introduce a generalized canonical reparameterization: (p, q)( P(f(; Θ) - ) , 2n/η‖∑_i=1^n (∇_Θ f(_i; Θ))^⊗ 2‖_2), where P: ^n→ can be a function such as a mean value or a specific vector norm. In Figure <ref>, we consider training on a 50 example subset of CIFAR-10 and vary the network architecture. We use three-layer fully-connected network with tanh activation and convolutional network (CNN) with tanh activation. Under the generalized canonical reparameterization (<ref>) for various choices of P, including the mean and the ℓ_2 norm, we observe the trajectory alignment phenomenon throughout all settings, indicating a common alignment property of the GD trajectories. However, unlike the single data point case, the alignment does not happen on the curve q = ℓ'(p)/p. The precise characterization of the coinciding curve is an interesting direction for future research. § TRAJECTORY ALIGNMENT OF GD: A THEORETICAL STUDY In this section, we study a two-layer fully-connected linear network defined as f(x; Θ) ^⊤, where ∈^d× m, ∈^m, and Θ denote the collection of all parameters (, ). We consider training this network with a single data point {(, 0)}, where ∈^d and _2=1. We run GD with step size η on the empirical risk ℒ(Θ) ℓ(f(; Θ) - 0) = ℓ(^⊤), where ℓ is a loss function satisfying Assumption <ref>. We note that our assumptions on ℓ is motivated from the single-neuron linear network analysis (d=m=1) by <cit.>. The loss ℓ is convex, even, 1-Lipschitz, and twice differentiable with ℓ”(0) = 1. The canonical reparameterization (Definition <ref>) of Θ = (, ) is given by (p, q) (^⊤, 2/η (_2^2 + _2^2)). Under the canonical reparameterization, the 1-step update rule of GD can be written as p_t+1 = [ 1 - 2r(p_t)/q_t + η^2 p_t^2 r(p_t)^2 ] p_t, q_t+1 = [ 1 - η^2 p_t^2 r(p_t) ( 2q_t - r(p_t)) ]^-1 q_t , where we define the function r by r(p)ℓ'(p)/p for p 0 and r(0) 1. Note that the sequence (q_t)_t=0^∞ is monotonically increasing if q_0 ≥1/2, which is the case our analysis will focus on. We have an additional assumption on ℓ as below, motivated from Lemma <ref>. The function r(p) = ℓ'(p)/p corresponding to the loss ℓ satisfies Assumption <ref>. We now present our theoretical results on this setting, and defer the proofs to Appendix <ref>. §.§ Gradient flow regime We first consider the gradient flow regime, where q is initialized with q_0 > 1. [gradient flow regime]theoremTheoremGFdiag Let η∈ (0, 2/33) be a fixed step size and ℓ be a loss function satisfying Assumptions <ref> and <ref>. Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0 ∈(2/2-η, min{1/16η, r(1)/2η}). Consider the GD trajectory characterized in Eq. (<ref>). Then, the GD iterations (p_t, q_t) converge to the point (0, q^*) such that q_0 ≤ q^* ≤exp ( C η^2) q_0 ≤ 2q_0, where C = 8 q_0 [min{2(q_0-1)q_0, r(1)2q_0}]^-1 > 0. Theorem <ref> implies that in gradient flow regime, GD with initialization Θ_0 = (_0, _0) and step size η converges to Θ^* which has the sharpness bounded by: (1 - C η^2) (_0 _2^2 + _0_2^2) ≤λ_max(Θ^*) ≤ (_0 _2^2 + _0_2^2). 
Hence, for small step size η, if the initialization satisfies _0 _2^2 + _0_2^2 < 2/η-1, then the limiting sharpness is slightly below _0 _2^2 + _0_2^2. Note that we assumed the bound p_0≤ 1 for simplicity, but our proof also works with the assumption p_0≤ K for any positive constant K modulo some changes in numerical constants. Moreover, our assumption on the upper bound of q_0 is 1/η up to a constant factor, which covers most realistic choices of initialization. §.§ EoS regime We now provide rigorous results in the EoS regime, where the GD trajectory aligns on the bifurcation diagram q = r(p). To establish these results, we introduce additional assumptions on the loss ℓ. The function r(z) = ℓ'(z)/z is C^4 on and satisfies * z↦r'(z)/r(z)^2 is decreasing on , * z↦zr'(z)/r(z) is decreasing on z>0 and increasing on z<0, * z↦zr(z)/r'(z) is decreasing on z>0 and increasing on z<0. We note that both the log-cosh loss and the square root loss satisfy Assumptions <ref>, <ref>, and <ref>. [EoS regime, Phase 1]theoremTheoremEoSearlylinear Let η be a small enough step size and ℓ be a loss function satisfying Assumptions <ref>, <ref>, and <ref>. Let z_0 sup_z{zr'(z)/r(z)≥ -1/2} and c_0 max{r(z_0), 1/2}. Let δ∈ (0, 1-c_0) be any given constant. Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0∈ (c_0, 1-δ). Consider the reparameterized GD trajectory characterized in Eq. (<ref>). We assume that for all t≥ 0 such that q_t<1, we have p_t 0. Then, there exists a time step t_a = 𝒪_δ, ℓ(log(η^-1)), such that for any t≥ t_a, q_t/r(p_t) = 1 + h(p_t)η^2 + 𝒪_δ, ℓ(η^4), where h(p) - 1/2( p r(p)^3/r'(p) + p^2 r(p)^2 ) for p0 and h(p) - 1/2r”(0) for p = 0. One can check that for log-cosh and square-root losses, the ranges of h are (0,3/4] and (0,1/2], respectively. Theorem <ref> implies that in the early phase of training (t≤ t_a = 𝒪(log(η^-1))), GD iterates (p_t, q_t) approach closely to the bifurcation diagram r(p)=q, which we called Phase 1 in Section <ref>. In Phase 2, GD trajectory aligns on this curve in the remaining of the training (t≥ t_a). Theorem <ref> provides an analysis on Phase 2 stated as below. [EoS regime, Phase 2]theoremTheoremEoSlatediag Under the same settings as in Theorem <ref>, there exists a time step t_b = Ω((1-q_0)η^-2) such that q_t_b≤ 1 and q_t > 1 for any t > t_b. Moreover, the GD iterates (p_t, q_t) converge to the point (0, q^*) such that q^* = 1 - η^2/2r”(0) + 𝒪_δ, ℓ(η^4). Theorem <ref> implies that in EoS regime, GD with step size η converges to Θ^* with sharpness λ_max(Θ^*) = 2/η - η/r”(0) + 𝒪_δ, ℓ(η^3). Note that <cit.> study the special case d=m=1 and prove that the limiting sharpness is between 2/η - 𝒪(η) and 2/η. Theorem <ref> provides tighter analysis on the limiting sharpness in more general settings, up to only 𝒪(η^3). Also, our result is the first to prove that the limiting sharpness in the EoS regime is bounded away from 2/η by a nontrivial margin. We also study the evolution of sharpness along the GD trajectory and prove that progressive sharpening (i.e., sharpness increases) occurs during Phase 2. [progressive sharpening]theoremTheoremPSdiag Under the same setting as in Theorem <ref>, let t_a denote the obtained iteration. Define the function λ̃: _>0→ given by λ̃(q) (1 + r̂(q) r'(r̂(q))/q) 2/η if q≤ 1, and 2/η otherwise. Then, the sequence (λ̃(q_t))_t=0^∞ is monotonically increasing. Moreover, for any t≥ t_a, the sharpness at GD iterate Θ_t closely follows the sequence (λ̃(q_t))_t=0^∞ by satisfying |λ_max(Θ_t) - λ̃(q_t) |≤ 1 + 𝒪_ℓ(η). 
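As a quick illustration of the theorems above, the exact reparameterized update (<ref>) can be iterated directly for the log-cosh loss, with r(p)=tanh(p)/p, alongside the sharpness proxy λ̃(q); the initialization (p_0, q_0) and the iteration count below are our own illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

eta = 0.01
r  = lambda p: np.tanh(p) / p if p != 0 else 1.0                       # r(p) = l'(p)/p for log-cosh
dr = lambda p: (p / np.cosh(p) ** 2 - np.tanh(p)) / p ** 2 if p != 0 else 0.0

def r_hat(q):
    """the unique p >= 0 with r(p) = q, for q in (0, 1]."""
    return 0.0 if q >= 1.0 else brentq(lambda p: r(p) - q, 1e-12, 50.0)

def lam_tilde(q):
    """sharpness proxy of the progressive-sharpening theorem."""
    if q > 1.0:
        return 2.0 / eta
    p = r_hat(q)
    return (1.0 + p * dr(p) / q) * 2.0 / eta

p, q = 0.5, 0.8                      # EoS-regime initialization (q_0 < 1)
for _ in range(100_000):
    rp = r(p)
    p, q = (1.0 - 2.0 * rp / q + eta**2 * p**2 * rp**2) * p, \
           q / (1.0 - eta**2 * p**2 * rp * (2.0 * q - rp))
print(q, 2.0 / eta, lam_tilde(q))    # q_t increases toward ~1; compare 2/(eta*q) with lam_tilde(q)
```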
The gap between λ_max(Θ_t) and λ̃(q_t) is bounded by a numerical constant, which becomes negligible compared to 2/η for small η. In Figure <ref>, we perform numerical experiments on a single-neuron case and observe that λ̃(q_t) closely approximates the sharpness. § EOS IN SQUARED LOSS: SINGLE-NEURON NONLINEAR NETWORK Our canonical parameterization has a limitation in explaining the EoS phenomenon under squared loss ℓ(p)=1/2p^2, as the function r(p) = ℓ'(p)/p = 1 does not satisfy Assumption <ref>. However, empirical studies by <cit.> have observed the EoS phenomenon in GD training with squared loss. In this section, we analyze a simple toy model to gain insight into the EoS phenomenon and trajectory alignment of GD under squared loss. We study the GD dynamics on a two-dimensional function ℒ(x,y) 1/2(ϕ(x)y)^2, where x, y are scalars and ϕ is a nonlinear activation satisfying Assumption <ref> below. [sigmoidal activation] The activation function ϕ: → is odd, increasing, 1-Lipschitz and twice continuously differentiable. Moreover, ϕ(0)=0, ϕ'(0)=1, lim_x→∞ϕ(x) = 1, and lim_x→-∞ϕ(x) = -1. One good example of ϕ satisfying Assumption <ref> is tanh. For this section, we use an alternative reparameterization defined as below. For given step size η, the (p,q) reparameterization of (x,y)∈^2 is defined as (p, q) ( x, 2/η y^2). Under the reparameterization, the 1-step update rule can be written as p_t+1 = ( 1 - 2r(p_t)/q_t) p_t, q_t+1 = (1- ηϕ(p_t)^2)^-2 q_t, where the function r is given by r(z)ϕ(z)ϕ'(z)/z for z 0 and r(0) 1. We can observe a notable resemblance between Eq. (<ref>) and Eq. (<ref>). Indeed, our theoretical findings for a single-neuron nonlinear network closely mirror those of the two-layer linear network discussed in Section <ref>. Due to lack of space, we summarize our theorems in this setting as the following: Under suitable assumptions on ϕ, step size, and initialization, GD trained on the squared loss ℒ(x,y) 1/2(ϕ(x)y)^2 exhibits the same gradient flow, EoS (Phase 1, 2), and progressive sharpening phenomena as shown in Section <ref>. In Theorem <ref>, we prove that in the EoS regime, the limiting sharpness is 2/η - 2/r”(0) + 𝒪(η). For formal statements of the theorems and the proofs, we refer the reader to Appendix <ref>. § CONCLUSION In this paper, we provide empirical evidence and rigorous analysis to demonstrate the remarkable phenomenon of GD trajectory alignment in the EoS regime. Importantly, we show that different GD trajectories, under the canonical parameterization, align on a bifurcation diagram determined by the loss function and independent of initialization. Our theoretical analysis not only characterizes the behavior of limiting sharpness but also establishes progressive sharpening of GD. Our findings raise intriguing questions for future research: How can we understand the trajectory alignment when trained on multiple data points? How can we theoretically explain the impact of network width on the presence of this phenomenon? How can we extend our analysis to encompass squared loss for general neural network, going beyond the toy single-neuron example? §.§.§ Acknowledgments This paper was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)) funded by the Korea government (MSIT), two National Research Foundation of Korea (NRF) grants (No. 
NRF-2019R1A5A1028324, RS-2023-00211352) funded by the Korea government (MSIT), and a grant funded by Samsung Electronics Co., Ltd. abbrvnat § ADDITIONAL EXPERIMENTS In this section, we present additional empirical evidence demonstrating the phenomenon of trajectory alignment, which supports the findings discussed in Section <ref> of our main paper. §.§ Experimental Setup Objective function. We run gradient descent (GD) to minimize the objective function defined as ℒ(Θ) = ℓ(f(; Θ) - y), where Θ represents the parameters, ℓ:→ is a loss function, f:^d→ is a neural network, and {(,y)} denotes a single data point with ∈^d and y∈. We also consider training on multiple data points {(_i,y_i)}_i=1^n with _i∈^d and y_i∈ for each 1≤ i≤ n, where we minimize the objective function ℒ(Θ) 1/n∑_i=1^nℓ(f(_i; Θ) - y_i). In our experiments, we primarily focus on the log-cosh loss function ℓ_log-cosh(p) = log(cosh(p)), but we also investigate the square root loss ℓ_sqrt(p) = √(1+p^2). Model architecture. We train a fully-connected L-layer neural network, denoted as f( · ; Θ): ^d →. The network is defined as follows: f(; Θ) = _L^⊤ϕ(_L-1ϕ(⋯ϕ(_2 ϕ(_1 ))⋯)), where ϕ:→ is an activation function applied entry-wise, _1∈^m× d, _l∈^m × m for 2≤ l ≤ L-1, and _L∈^m. All L layers have the same width of m, and the biases of all layers are fixed to 0.[While we fix the biases of all layers to 0 to maintain consistency with our theory, the trajectory alignment phenomenon is consistently observed even for neural networks with bias. In Appendix <ref>, we consider training networks with bias.] We consider three activations: hyperbolic tangent ϕ(t) = tanh(t), exponential linear unit ϕ(t) = ELU(t), and linear ϕ(t) = t. Weight initialization. We perform gradient descent (GD) using five different randomly initialized sets of weights. The weights are initialized using Xavier initialization, and each layer is multiplied by a rescaling factor (gain) of α. In the plots presented throughout this section, we mark the initialization points with an `x' to distinguish them from other points on the trajectories. Canonical reparameterization. We plot GD trajectories after applying the canonical reparameterization introduced in Definition <ref>: (p, q) ( f(; Θ) - y, 2/η∇_Θ f(; Θ)_2^2), where η denotes the step size. For training on multiple data points, we employ the generalized canonical reparameterization as defined in Eq. (<ref>): (p, q)( P(f(; Θ) - ) , 2n/η‖∑_i=1^n (∇_Θ f(_i; Θ))^⊗ 2‖_2), where P: ^n→ can represent the mean value or vector norms. Specifically, we mainly focus on the mean value P() = 1/n∑_i=1^n z_i, but we also examine vector norms such as P() = _1, P() = _2, and P() = _∞, where = (z_1, z_2, …, z_n)∈^n. For large networks, explicitly calculating the ℓ_2 matrix norm ‖∑_i=1^n (∇_Θ f(_i; Θ))^⊗ 2‖_2 is infeasible. Therefore, we adopt a fast and efficient matrix-free method based on power iteration, as proposed by <cit.>. This method allows us to numerically compute the ℓ_2 norm of large-scale symmetric matrices. §.§ Training on a Single Data Point Training data. We conduct experiments on a synthetic single data point (, y), where = _1 = (1,0,…,0)∈^d and y ∈. Throughout the experiments in this subsection, we keep the data dimension fixed at d=10, the data label at y=1, and the step size at η=0.01. The effect of initialization scale. We investigate the impact of the initialization scale α while keeping other hyperparameters fixed. We vary the initialization scale across {0.5, 1.0, 2.0, 4.0}. 
Specifically, in Figure <ref>, we train a 3-layer fully connected neural network with tanh activation. We observe that smaller initialization scales (α = 0.5, 1.0) lead to trajectories in the gradient flow regime, while larger initialization scales (α = 2.0, 4.0) result in trajectories in the EoS regime. This behavior is primarily due to the initial value of q being smaller than 1 for larger initialization scales, which causes the trajectory to fall into the EoS regime. The effect of network width. We investigate the impact of network width on the trajectory alignment phenomenon. While keeping other hyperparameters fixed, we control the width m with values of {64, 128, 256, 512}. In Figure <ref>, we train 3-layer fully connected neural networks with tanh activation and an initialization scale of α=4. Additionally, in Figure <ref>, we examine the same setting but with different depth, training 10-layer fully connected neural networks. It is commonly observed that the alignment trend becomes more pronounced as the network width increases. In Figures <ref> and <ref>, all trajectories are in the EoS regime. However, narrower networks (m=64, 128) do not exhibit the trajectory alignment phenomenon, while wider networks (m=256, 512) clearly demonstrate this behavior. These results indicate that network width plays a significant role in the trajectory alignment property of GD. The effect of loss and activation functions. The trajectory alignment phenomenon is consistently observed in various settings, including those with square root loss and different activation functions. In Figure <ref>, we investigate a 3-layer fully connected neural network with a width of m=256 and an initialization scale of α=2. We explore different activation functions, including tanh, ELU, and linear, and consider both log-cosh loss and square-root loss. Across all these settings, we observe the trajectory alignment phenomenon, where the GD trajectories align on a curve q = ℓ'(p)/p. §.§ Training on Multiple Synthetic Data Points Training data. We consider training on a synthetic dataset consisting of n data points, denoted as (_i, y_i)_i=1^n. The input vectors _i are sampled from a standard Gaussian distribution 𝒩(0, ), where _i∈^d, and the corresponding target values y_i are sampled from a Gaussian distribution 𝒩(0,1), where y_i∈. Throughout our experiments in this subsection, we use a fixed data dimension of d=10 and a step size of η=0.01. The effect of function P. To investigate the impact of different choices of the function P, we train a 3-layer fully connected neural network with tanh activation. The network has a width of m=256 and is initialized with a scale of α=4. The training is performed on a dataset consisting of n=10 data points. Figure <ref> displays the trajectories of GD trajectories under the generalized canonical reparameterization defined in Eq. (<ref>) for various choices of the function P. These choices include the mean, ℓ_1 norm, ℓ_2 norm, and ℓ_∞ norm. We observe that GD trajectories exhibit alignment behavior across different choices of the function P. Notably, when P is selected as the mean, the trajectories align on the curve q = ℓ'(p)/p. However, when P is based on vector norms, the alignment occurs on different curves. The precise characterization of these curves remains as an interesting open question for further exploration. The effect of the number of data points. We examine how the size of the training dataset, denoted by n, influences the trajectory alignment behavior of GD. 
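In all of the multi-point ablations here and below, the plotted coordinates (p, q) are computed from per-example gradients as in Eq. (<ref>). A minimal sketch is given next; the helper name is an assumption, and for the small n used here we replace the matrix-free power iteration mentioned in the setup with an exact eigensolve of the n × n Gram matrix of per-example gradients, which has the same top eigenvalue as ∑_i (∇_Θ f(_i; Θ))^⊗ 2.

```python
import torch

def reparameterize(net, xs, ys, eta, P=lambda r: r.mean()):
    """Return (p, q) for a batch: p = P(per-example residuals),
    q = (2n/eta) * || sum_i grad f(x_i) grad f(x_i)^T ||_2."""
    params = [w for w in net.parameters() if w.requires_grad]
    outputs = net(xs).squeeze(-1)                      # f(x_i; Theta), shape (n,)
    n = outputs.shape[0]
    per_example_grads = []
    for i in range(n):
        g = torch.autograd.grad(outputs[i], params, retain_graph=True)
        per_example_grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    J = torch.stack(per_example_grads)                 # rows are grad f(x_i; Theta)
    gram = J @ J.T                                     # n x n, same top eigenvalue as J^T J
    q = (2.0 * n / eta) * torch.linalg.eigvalsh(gram)[-1]
    p = P((outputs - ys).detach())
    return p.item(), q.item()
```

Passing, e.g., `lambda r: r.norm(p=1)` or `lambda r: r.abs().max()` in place of the mean reproduces the ℓ_1 and ℓ_∞ variants of P considered above.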
While keeping other hyperparameters constant, we vary n over {2, 4, 8, 16, 32, 64, 128, 512, 1024}. In Figure <ref>, we train a 3-layer fully connected neural network with tanh activation, a width of m=256, and an initialization scale of α=4. Additionally, in Figure <ref>, we investigate the same setting but with a different activation function, training ELU-activated fully connected neural networks. The GD trajectories are plotted under the generalized canonical reparameterization using the mean function P()=1/n∑_i=1^n z_i. We observe a consistent trajectory alignment phenomenon across different choices of the number of data points. Interestingly, for small values of n, the trajectories clearly align on the curve q = ℓ'(p)/p. However, as the number of data points n increases, the trajectories no longer align on this curve but on different, “narrower” curves. Understanding the underlying reasons for this phenomenon poses an intriguing open question. §.§ Training on Real-World Dataset Training data. In this subsection, we investigate a binary classification problem using a subset of the CIFAR-10 image classification dataset. Our dataset consists of 50 samples, with 25 samples from class 0 (airplane) and 25 samples from class 1 (automobile). We assign a label of +1 to samples from class 0 and a label of -1 to samples from class 1. This dataset was used in the experimental setup by <cit.>. Architectures. In Figure <ref>, we examine the training of two types of network architectures: (top row) a fully-connected tanh network and (bottom row) a convolutional tanh network, both implemented in PyTorch; illustrative sketches of the two architectures are given below. Note that we consider networks with bias. We use a step size of η = 0.01 for the fully-connected network and η = 0.001 for the CNN. The default PyTorch initialization <cit.> is applied to all these networks. In this subsection, we further explore the (reparameterized) GD trajectories of fully-connected networks with different activation functions, network widths, and the choice of function P in (<ref>). The effect of function P. We investigate the impact of different choices of the function P on the GD trajectories. We train a 3-layer fully connected neural network with ELU activation, a width of m=256, and an initialization scale of α=1. Figure <ref> illustrates the GD trajectories under the generalized canonical reparameterization defined in Eq. (<ref>) for various choices of the function P, including the mean, ℓ_1 norm, ℓ_2 norm, and ℓ_∞ norm. We observe that the GD trajectories exhibit alignment behavior, which is more pronounced when P is chosen to be the mean or ℓ_1 norm, but less evident for the ℓ_∞ norm. Unlike in Figure <ref>, the trajectories do not align on the curve q = ℓ'(p)/p when P is selected as the mean P()=1/n∑_i=1^n z_i. The effect of network width. We investigate how the width of the network influences the trajectory alignment phenomenon. We vary the width m over {64, 128, 256, 512} while keeping other hyperparameters constant. In Figure <ref>, we train 3-layer fully connected neural networks with tanh activation and an initialization scale of α=1. Similarly, in Figure <ref>, we conduct experiments using the same configuration but with ELU activation, training ELU-activated fully connected neural networks.
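For reference, the two architectures described above for the CIFAR-10 experiments can be sketched in PyTorch as follows; these are illustrative stand-ins rather than the exact configurations, so the hidden width, channel counts, and pooling choices are assumptions.

```python
import torch.nn as nn

# Fully-connected tanh network (hidden width of 200 is an assumption).
fc_tanh = nn.Sequential(
    nn.Flatten(),                                   # 3x32x32 CIFAR-10 image -> 3072
    nn.Linear(3072, 200), nn.Tanh(),
    nn.Linear(200, 200), nn.Tanh(),
    nn.Linear(200, 1),                              # scalar output trained on +/-1 labels
)

# Convolutional tanh network (channel counts and average pooling are assumptions).
conv_tanh = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.Tanh(),
    nn.AvgPool2d(2),                                # 32x32 -> 16x16
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.Tanh(),
    nn.AvgPool2d(2),                                # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 1),
)
```

Both sketches keep the default bias terms and the scalar output convention used throughout this section.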
Consistent with the observations from Figures <ref> and <ref> in the single data point setting, we commonly find that as the network width increases, the alignment trend becomes more pronounced. In both Figure <ref> and Figure <ref>, all trajectories fall within the EoS regime. However, narrower networks (m=64) show less evidence of the trajectory alignment phenomenon, while wider networks (m=256, 512) clearly demonstrate this behavior. These findings emphasize the significant impact of network width on the trajectory alignment property of GD. The effect of data label. In our previous experiments, we assigned labels of +1 and -1 to the dataset. However, in this particular experiment, we investigate the training process on a dataset with zero labels. This means that all samples in the dataset are labeled as zero (y_i = 0 for all 1 ≤ i ≤ n). Figure <ref> visualizes the training of 3-layer fully connected neural networks with tanh activation and an initialization scale of α=1. The network widths m are varied from {256, 512, 1024}. Interestingly, the GD trajectories align with the curve q = ℓ'(p)/p, in contrast to our observations in Figures <ref> and <ref>. These results suggest that the data label distribution also influences the alignment curve of GD trajectories. As a future research direction, it would be intriguing to investigate why setting the labels as zero leads to alignment towards the curve q = ℓ'(p)/p, which aligns with our theoretical findings in the single data point setting. § PROOFS FOR THE TWO-LAYER FULLY-CONNECTED LINEAR NETWORK §.§ Proof of Theorem <ref> We give the proof of Theorem <ref>, restated below for the sake of readability. * We note that in the given interval (2/2-η, min{1/16η, r(1)/2η}), it is possible that the interval is empty, depending on the value of r(1). However, this does not impact the correctness of the theorem. By Proposition <ref>, p_t converges to 0 as t→∞, and for all t≥ 0, we have q_0 ≤ q_t ≤exp(Cη^2) q_0. Since the sequence (q_t)_t=0^∞ is monotonic increasing and bounded, it converges. Suppose that q_t→ q^* as t→∞. Then, we can obtain the inequality q_0 ≤ q^* ≤exp(Cη^2) q_0, as desired. Suppose that η∈ (0, 2/33), p_0≤ 1 , and q_0 ∈(2/2-η, min{1/16η, r(1)/2η}). Then for any t≥ 0, we have p_t≤[ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}]^t ≤ 1, and q_0 ≤ q_t ≤exp( 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1) q_0 ≤ 2 q_0. We give the proof by induction; namely, if p_t≤[ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}]^t, q_0 ≤ q_t ≤exp( 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1) q_0 ≤ 2 q_0 are satisfied for time steps 0≤ t≤ k for some k, then the inequalities are also satisfied for the next time step k+1. For the base case, the inequalities are satisfied for t=0 by assumptions. For the induction step, we assume that the inequalities hold for any 0≤ t≤ k. We will prove that the inequalities are also satisfied for t=k+1. By induction assumptions, q_0≤ q_k≤ 2q_0 and p_k≤ 1, so that we have 1 - 2/q_0≤p_k+1/p_k = 1 - 2 r(p_k)/q_k + η^2 p_k^2r(p_k)^2 ≤ 1 - r(1)/q_0 + η^2, where we used r(1) ≤ r(p_k) ≤ r(0) = 1. Since q_0 ≤r(1)/2η≤r(1)/2η^2, we have |p_k+1/p_k| ≤max{2/q_0-1, 1 - r(1)/q_0 + η^2 } = 1 - min{2(q_0-1)/q_0, r(1)/q_0 - η^2 } ≤ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}. This implies that p_k+1≤( 1 - min{2(q_0-1)/q_0, r(1)/2q_0}) p_k≤[ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}]^k+1, which is the desired bound for p_k+1. 
For any t≤ k, we also have 1 - 4η^2 p_t^2 q_0 ≤q_t/q_t+1 = 1 - η^2 p_t^2 r(p_t) (2q_t-r(p_t)) ≤ 1, where we used the induction assumptions to deduce 4 q_0 ≥ 2q_t - r(p_t) ≥ 2q_0 - 1 ≥4/2-η-1 > 0. This gives q_k+1≥ q_k ≥ q_0. Furthermore, note that 4η^2 p_t^2 q_0 ≤ 4 η^2 ·1/16η≤1/4, so the ratio q_t/q_t+1∈ [3/4,1]. From this, we have |log( q_0/q_k+1) |≤∑_t=0^k|log(q_t/q_t+1) | ≤ 2 ∑_t=0^k|q_t/q_t+1 - 1 | ≤ 8η^2 q_0 ∑_t=0^k p_t^2 ≤ 8η^2 q_0 ∑_t=0^k[ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}]^2t ≤ 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1, where the second inequality holds since log (1 + z)≤ 2 z if z≤1/2. Moreover, this implies that q_k+1≤exp( 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1) q_0. Since q_0 ∈(2/2-η, r(1)/2η), we have min{2(q_0-1)/q_0, r(1)/2q_0}≥η. Therefore, since q_0 ≤1/16η, we can conclude that q_0 ≤ q_k+1≤exp( 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1) q_0 ≤exp(8η q_0)q_0 ≤ 2 q_0, the desired bounds for q_k+1. §.§ Proof of Theorem <ref> In this subsection, we prove Theorem <ref>. From here onwards, we use the following notation: s_tq_t/r(p_t). All the lemmas in this subsection are stated in the context of Theorem <ref>. Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0∈ (c_0, 1-δ). Then for any t≥ 0 such that q_t ≤ 1, it holds that p_t≤ 4, and q_t≤ q_t+1≤ (1 + 𝒪(η^2)) q_t. We prove by induction. We assume that for some t≥ 0, it holds that p_t≤ 4 and 1/2≤ q_t ≤ 1. We will prove that p_t+1≤ 4 and 1/2≤ q_t≤ q_t+1≤ (1 + 𝒪(η^2)) q_t. For the base case, p_0≤ 1 ≤ 4 and 1/2≤ c_0 < q_t ≤ 1 holds by the assumptions on the initialization. Now suppose that for some t≥ 0, it holds that p_t≤ 4 and 1/2≤ q_t ≤ 1. Then for small step size η, we have |2ℓ'(p_t)/q_t|≥ 2ℓ'(p_t)≥1/2ℓ'(p_t)^2p_t≥η^2 ℓ'(p_t)^2p_t. Consequently, by Eq. (<ref>), p_t+1 = | (1 + η^2 ℓ'(p_t)^2)p_t - 2ℓ'(p_t)/q_t|≤max{p_t, 2/q_t}≤ 4. where we used 1-Lipshitzness of ℓ. Moreover, 1 - 8η^2 ≤ 1 - 2p_tη^2 ≤q_t/q_t+1 = 1 - η^2 p_t^2 r(p_t) (2q_t - r(p_t))≤ 1, where we used q_t∈ [1/2, 1] and |p_t r(p_t)| = |ℓ'(p_t)| ≤ 1 from 1-Lipschitzness of ℓ. Hence, q_t≤ q_t+1≤ (1 + 𝒪(η^2)) q_t, as desired. Lemma <ref> implies that p_t is bounded by a constant throughout the iterations, and q_t monotonically increases slowly, where the increment for each step is 𝒪(η^2). Hence, there exists a time step T=Ω(δη^-2) = Ω_δ(η^-2) such that for any t≤ T, it holds that q_t ≤ 1 - δ/2. Through out this subsection, we focus on these T early time steps. Note that for all 0≤ t ≤ T, it holds that q_t ∈ (c_0, 1-δ/2). Intuition on Theorem <ref>. Before we dive in to the rigorous proofs, we provide an intuition on Theorem <ref>. Lemma <ref> establishes that p_t is bounded and q_t monotonically increases slowly, with an increment of 𝒪(η^2) per step. Lemma <ref> shows that the map f_q_t(p) = (1 - 2r(p_t)/q_t)p_t has a stable 2-period orbit {±r̂(q_t)} when q_t ∈ (0,1). Consequently, when q_t is treated as a fixed value, p_t converges to the orbit {±r̂(q_t)}, leading to s_t converging to 1. In the (early) short-term dynamics, q_t is nearly fixed for small step size η, and hence s_t converges to 1. In long-term dynamics perspective, q_t gradually increases and at the same time, s_t stays near the value 1. In Theorem <ref>, we prove that it takes only t_a=𝒪_δ, ℓ(log(η^-1)) time steps for s_t to converge close to 1 (Phase 1, t≤ t_a), and after that, s_t stay close to 1 for the remaining iterations (Phase 2, t > t_a). We informally summarize the lemmas used in the proof of Theorem <ref>. 
Lemma <ref> states that in the early phase of training, there exists a time step t_0 where s_t_0 becomes smaller or equal to 2/2-r(1), which is smaller than 2(1+η^2)^-1. Lemma <ref> demonstrates that if s_t is smaller than 2(1+η^2)^-1 and p_t≥r̂(1-δ/4), then s_t-1 decreases exponentially. For the case where p_t < r̂(1-δ/4), Lemma <ref> proves that p_t increases at an exponential rate. Moreover, Lemma <ref> shows that if s_t < 1 at some time step, then s_t+1 is upper bounded by 1+𝒪(η^2). Combining these findings, Proposition <ref> establishes that in the early phase of training, there exists a time step t_a^* such that s_t_a^* = 1 + 𝒪_δ, ℓ(η^2). Lastly, Lemma <ref> demonstrates that if s_t = 1 + 𝒪_δ, ℓ(η^2), then s_t - 1 - h(p_t) η^2 decreases exponentially. Now we prove Theorem <ref>, starting with the lemma below. There exists a time step t_0 = 𝒪_δ, ℓ(1) such that s_t_0≤2/2-r(1). We start by proving the following statement: for any 0≤ t≤ T, if 2/2-r(1) < s_t < 2r(1)^-1, then s_t+1 < 2r(1)^-1 and p_t+1≤ (1-r(1)/2)p_t. Suppose that 2/2-r(1) < s_t < 2r(1)^-1. Then from Eq. (<ref>), it holds that |p_t+1/p_t| = | 1 - 2/s_t + η^2p_t^2r(p_t)^2|≤| 1 - 2/s_t| + η^2 ≤ 1-r(1) + η^2 ≤ 1 - r(1)/2, for small step size η. Hence, p_t+1≤ (1-r(1)/2)p_t. Now we prove s_t+1< 2r(1)^-1. Assume the contrary that s_t+1≥ 2r(1)^-1. Then, r(p_t+1) = q_t+1/s_t+1 < q_t+1 < 1-δ/2 so that p_t+1≥r̂(1-δ/2). By Mean Value Theorem, there exists p_t^* ∈ (p_t+1, p_t) such that (recall that r'(p)/r(p)^2 < 0 for p > 0) 1/r(p_t+1) = 1/r(p_t - (p_t-p_t+1)) = 1/r(p_t) + r'(p_t^*)/r(p_t^*)^2 (p_t-p_t+1) ≤1/r(p_t) + r'(p_t+1)/r(p_t+1)^2(r(1)p_t/2) ≤1/r(p_t) - r'(r̂(1-δ/2))/(1-δ/2)^2(r(1)r̂(1-δ/2)/2) = 1/r(p_t) - Ω_δ,ℓ(1), where we used Assumption <ref> <ref> and r̂(1-δ/2) ≤p_t+1≤ (1-r(1)/2)p_t≤p_t. Consequently, s_t+1 = q_t+1/r(p_t+1) = (1+𝒪(η^2))q_t (1/r(p_t) - Ω_δ,ℓ(1)) ≤q_t/r(p_t) = s_t < 2r(1)^-1, for small step size η. This gives a contradiction to our assumption that s_t+1≥ 2r(1)^-1. Hence, we can conclude that s_t+1 < 2r(1)^-1, as desired. We proved that for any 0≤ t≤ T, if 2/2-r(1) < s_t < 2r(1)^-1, it holds that s_t+1 < 2r(1)^-1 and p_t+1≤ (1-r(1)/2)p_t. At initialization, p_0≤ 1 and q_0 < 1, so that s_0 < r(1)^-1. If s_0 ≤2/2-r(1), then t_0=0 is the desired time step. Suppose that s_0 > 2/2-r(1). Then, we have s_1 < 2r(1)^-1 and p_1≤ (1-r(1)/2)p_0≤ 1-r(1)/2. Then we have either s_1 ≤2/2-r(1), or 2/2-r(1) < s_1 < 2r(1)^-1. In the previous case, t_0=1 is the desired time step. In the latter case, we can repeat the same argument and obtain s_2 < 2r(1)^-1 and p_2≤ (1-r(1)/2)^2. By inductively repeating the same argument, we can obtain a time step t_0≤log(r̂(1-δ/2)) / log(1-r(1)/2) = 𝒪_δ, ℓ(1) such that either s_t_0≤2/2-r(1), or p_t_0≤r̂(1-δ/2). In the latter case, r(p_t_0) ≥ 1-δ/2 > q_t_0, and hence s_t_0 < 1 < 2/2-r(1). Therefore, t_0 = 𝒪_δ, ℓ(1) is the desired time step satisfying s_t_0≤2/2-r(1). According to Lemma <ref>, there exists a time step t_0=𝒪_δ,ℓ(1) such that s_t_0≤2/2-r(1) < 2(1+η^2)^-1 for small step size η. Now we prove the lemma below. Suppose that s_t≤ 1. Then, it holds that s_t+1≤ 1 + 𝒪(η^2). For any p∈ (0, r̂(q_t/2)), we have r(p)≥q_t/2 so that f_q_t(p) = (-1+2r(p)/q_t)p. Hence, ∂/∂ pf_q_t(p) = 2 r(p)/q_t(1 + p r'(p)/r(p)) - 1, for any p∈ (0, r̂(q_t/2)). By Assumption <ref> <ref> and convexity of ℓ, both r(p) and 1+pr'(p)/r(p) = ℓ”(p)/r(p) are positive, decreasing function on (0, r̂(q_t/2)). Consequently, ∂/∂ pf_q_t(p) is a decreasing function on (0, r̂(q_t/2)). 
Now note that q_t/2 < q_t < 1, which means r̂(1) = 0 < r̂(q_t) < r̂(q_t/2) by the definition of r̂. Note that ∂/∂ pf_q_t(p) at p = r̂(q_t) evaluates to ∂/∂ pf_q_t(r̂(q_t)) = 1 + 2 r̂(q_t) r'(r̂(q_t))/r(r̂(q_t))≥ 1 + 2 r̂(c_0)r'(r̂(c_0))/r(r̂(c_0))≥ 0, where the first inequality used Assumption <ref> <ref> and r̂(q_t) < r̂(c_0), which comes from q_t > c_0 max{r(z_0), 1/2}. The second inequality holds because q_t > c_0 ≥ r(z_0) where z_0 := sup_z{zr'(z)/r(z)≥ -1/2}, from the statement of Theorem <ref>. Therefore, since ∂/∂ pf_q_t(p) is decreasing on (0, r̂(q_t/2)) and is nonnegative at r̂(q_t), for any p∈ (0, r̂(q_t)), it holds that ∂/∂ pf_q_t(p)≥ 0. In other words, f_q_t(p) is an increasing function on (0, r̂(q_t)). Since 0≤ s_t≤ 1, we have p_t≤r̂(q_t) and it holds that p_t+1 = (-1 + 2/s_t - η^2 p_t^2 r(p_t)^2 ) p_t≤(-1 + 2/s_t) p_t = f_q_t(p_t)≤f_q_t(r̂(q_t)) = r̂(q_t). Therefore, with this inequality and Lemma <ref>, we can conclude that s_t+1 = q_t+1/r(p_t+1) = q_t/r(p_t+1) (1 + 𝒪(η^2)) ≤q_t/r(r̂(q_t)) (1 + 𝒪(η^2)) = 1 + 𝒪(η^2). Using Lemma <ref>, we prove the following lemma. For any 0≤ t≤ T, if s_t < 2(1+η^2)^-1, s_t-1 > η^2/2, and r(p_t)≤ 1 - δ/4, then s_t+1-1≤ (1-d)s_t-1 + 𝒪(η^2), where d∈ (0,1/2] is a constant which depends on δ and ℓ. By Eq. (<ref>) and 1-Lipschitzness of ℓ, p_t+1/p_t = 1 - 2/s_t + η^2 p_t^2 r(p_t)^2 < 1 - (1+η^2) + η^2 = 0, so that p_t and p_t+1 have opposite signs. By Mean Value Theorem, there exists θ_t between -1 and (1-2/s_t+η^2p_t^2r(p_t)^2) satisfying 1/r(p_t+1) = 1/r(-p_t + (2(s_t-1)/s_t+η^2 p_t^2 r(p_t^2))p_t) = 1/r(-p_t) - r'(θ_t p_t)/r(θ_t p_t)^2(2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t = 1/r(p_t) - r'(θ_t p_t)/r(θ_t p_t)^2(2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t, where the last equality used the fact that p_t and θ_t p_t have opposite signs and r'(z) and z have opposite signs. Note that θ_t p_t is between p_t and p_t+1. Consequently, the value r'(θ_t p_t)/r(θ_t p_t)^2 is between r'(p_t)/r(p_t)^2 and r'(p_t+1)/r(p_t+1)^2 by Assumption <ref> <ref>. We will prove the current lemma based on Eq. (<ref>). We divide into following three cases: (1) s_t≥ 1 and s_t+1≥ 1, (2) s_t≥ 1 and s_t < 1, and (3) s_t < 1. Case 1. Suppose that s_t≥ 1 and s_t+1≥ 1. Here, we have p_t≥r̂(q_t)≥r̂(1-δ/2) and similarly p_t+1≥r̂(1-δ/2). By Assumption <ref> <ref>, r'(θ_t p_t)/r(θ_t p_t)^2≥r'(r̂(1-δ/2))/(1-δ/2)^2. Hence, Eq. (<ref>) gives 1/r(p_t+1)≤1/r(p_t) - r'(r̂(1-δ/2))/(1-δ/2)^2(2(s_t-1)/s_t) r̂(1-δ/2). Consequently, by Lemma <ref>, s_t+1 = q_t (1 + 𝒪(η^2))/r(p_t+1) = q_t/r(p_t+1) + 𝒪(η^2) ≤ s_t - r'(r̂(1-δ/2))/(1-δ/2)^2(2(s_t-1)/s_t) r̂(1-δ/2) q_t + 𝒪(η^2) ≤ s_t - r'(r̂(1-δ/2))/(1-δ/2)^2 (s_t-1) r̂(1-δ/2) 1/2 + 𝒪(η^2) ≤ s_t - r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2 (s_t-1) + 𝒪(η^2), where we used q_t > c_0 ≥1/2 and s_t < 2(1+η^2)^-1 < 2. Therefore, we can obtain the following inequality: 0 ≤ s_t+1-1 ≤( 1 - r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2) (s_t-1) + 𝒪(η^2). Case 2. Suppose that s_t≥ 1 and s_t+1< 1. Here, we have r(p_t+1) > q_t+1≥ q_t ≥ r(p_t), so that p_t+1 < p_t. Consequently, r'(θ_t p_t)/r(θ_t p_t)^2≤r'(p_t)/r(p_t)^2 by Assumption <ref> <ref>. Hence, we can deduce from Eq. (<ref>) that 1/r(p_t+1) ≥1/r(p_t) - r'(p_t)/r(p_t)^2( 2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2 ) p_t = 1/r(p_t) - 2p_tr'(p_t)/r(p_t)q_t (s_t-1) - η^2 p_t^3 r'(p_t) ≥1/r(p_t) - 2p_tr'(p_t)/r(p_t)q_t (s_t-1) - η^2 p_t^2 r(p_t) = 1/r(p_t) + 2p_tr'(p_t)/r(p_t)q_t (s_t-1) - 𝒪(η^2), where we used p_tr'(p_t)≤ r(p_t) since 1+p_tr'(p_t)/r(p_t) = ℓ”(p_t)/r(p_t) > 0 and p_t≤ 4 by Lemma <ref>. 
Consequently, by Lemma <ref> (q_t ≤ q_t+1) and Assumption <ref> <ref>, s_t+1≥q_t/r(p_t+1)≥ s_t + 2p_tr'(p_t)/r(p_t) (s_t-1) - 𝒪(η^2) ≥ s_t + 8r'(4)/r(4) (s_t-1) - 𝒪(η^2). Note that 1>1 + 4r'(4)/r(4) = ℓ”(4)/r(4) > 0 holds by convexity of ℓ. Therefore, we can obtain the following inequality: 0 ≤ 1 - s_t+1≤ - (1 + 8r'(4)/r(4)) (s_t-1) + 𝒪(η^2), where -1<1+8r'(4)/r(4) < 1. Case 3. Suppose that s_t < 1. By Lemma <ref>, it holds that s_t+1≤ 1+𝒪(η^2). Moreover, we assumed r(p_t) ≤ 1 - δ/4, so that p_t≥r̂(1-δ/4). We also have p_t+1 = (-1+2/s_t-η^2p_t^2r(p_t^2))p_t≥(-1 + 2/1-η^2/2 - η^2) p_t > p_t≥r̂(1-δ/4), where we used the assumption s_t-1 > η^2/2, and p r(p) = ℓ'(p)≤ 1 due to 1-Lipschitzness of ℓ. Consequently, by Assumption <ref> <ref>, it holds that r'(θ_t p_t)/r(θ_t p_t)^2≥r'(r̂(1 - δ/4))/(1 - δ/4)^2. Hence, by Eq. (<ref>), we have 1/r(p_t+1) ≥1/r(p_t) + r'(r̂(1 - δ/4))/(1 - δ/4)^2(2(1-s_t)/s_t) r̂(1-δ/4) ≥1/r(p_t) + r'(r̂(1 - δ/4))/(1 - δ/4)^2 2(1-s_t) r̂(1-δ/4) = 1/r(p_t) + 2 r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2 (1-s_t), and hence, by Lemma <ref> (q_t ≤ q_t+1) and q_t > c_0 ≥1/2, we get s_t+1≥q_t/r(p_t+1)≥ s_t + r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2 (1-s_t). Therefore, we can obtain the following inequality: -𝒪(η^2) ≤ 1-s_t+1≤(1 - r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2) (1-s_t), where we used Lemma <ref> to obtain the first inequality. Combining the three cases, we can finally conclude that if we choose d min{1/2, r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2, 2 (1 + 4r'(4)/r(4)), r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2}∈(0,1/2], then s_t+1-1≤ (1-d)s_t-1 + 𝒪(η^2). Lemma <ref> implies that if s_t<2(1+η^2)^-1 and p_t≥r̂(1-δ/4), then s_t-1 exponentially decreases. We prove Lemma <ref> to handle the regime p_t < r̂(1-δ/4), which is stated below. For any 0≤ t ≤ T, if r(p_t)≥ 1 - δ/4, it holds that |p_t+1/p_t|≥4/4-δ. If r(p_t)≥ 1 - δ/4, then s_t = q_t/r(p_t) < 1-δ/2/1-δ/4 = 4-2δ/4-δ, where we used q_t < 1-δ/2 for any 0≤ t≤ T. Consequently, |p_t+1/p_t| = 2/s_t - 1 - η^2 p_t^2 r(p_t^2) ≥2(4-δ)/4-2δ - 1 - η^2 = 2/2-δ - η^2 ≥4/4-δ, for small step size η. Now we prove Proposition <ref>, which proves that s_t reaches close to 1 with error bound of 𝒪(η^2). There exists a time step t_a^* = 𝒪_δ,ℓ(log(η^-1)) satisfying s_t_a^* = 1 + 𝒪_δ,ℓ(η^2). By Lemma <ref>, there exists a time step t_0 = 𝒪_δ,ℓ(1) such that s_t_0≤2/2-r(1). Here, we divide into two possible cases: (1) s_t_0<1, and (2) 1≤ s_t_0≤2/2-r(1). Case 1. Suppose that s_t_0<1. By Lemma <ref>, if r(p_t_0)≥ 1-δ/4 (or equivalently, p_t_0≤r̂(1-δ/4)), then there exists a time step t_1 ≤ t_0 + log(r̂(1-δ/4)/p_t_0) / log(4/4-δ) = 𝒪_δ, ℓ (1) such that p_t_1≥r̂(1-δ/4). We denote the first time step satisfying p_t_1≥r̂(1-δ/4) and t_1≥ t_0 by t_1 = 𝒪_δ, ℓ(1). By Lemma <ref>, it holds that s_t_1≤ 1+𝒪(η^2) since s_t_1-1<1. Consequently, if s_t_1≥ 1-η^2/2, then s_t_1-1≤𝒪(η^2) so that t_a^* = t_1 is the desired time step. Hence, it suffices to consider the case when s_t_1<1-η^2/2. Here, we can apply Lemma <ref> which implies that s_t_1+1-1≤ (1-d) s_t_1-1 + 𝒪(η^2), where d is a constant which depends on δ and ℓ. Then, there are two possible cases: either s_t_1-1≤𝒪(η^2 d^-1), or s_t_1+1-1≤ (1-d/2)s_t_1-1. It suffices to consider the latter case, suppose that s_t_1+1-1≤ (1-d/2)s_t_1-1. Since we are considering the case s_t_1<1-η^2/2, again by Lemma <ref>, we have s_t_1+1≤ 1 + 𝒪(η^2). Since p_t_1+1/p_t_1 = 2/s_t_1-1-𝒪(η^2), p_t_1+1≥p_t_1≥r̂(1-δ/4) must be satisfied unless s_t_1 = 1 + 𝒪(η^2) already holds. 
If s_t_1+1≥ 1-η^2/2, then s_t_1+1-1≤𝒪(η^2) so that t_a^* = t_1+1 is the desired time step; if not, we can again apply Lemma <ref> and repeat the analogous argument. Hence, there exists a time step t_2 ≤ t_1 + log(η^2/1-s_t_1) / log (1-d/2) = 𝒪_δ, ℓ(log(η^-1)), such that s_t_2-1≤𝒪(η^2d^-1) = 𝒪_δ,ℓ(η^2). Case 2. Suppose that 1≤ s_t_0≤2/2-r(1). Then, r(p_t_0) ≤ q_t_0≤ 1-δ/2, so we can apply Lemma <ref>. There are two possible cases: either s_t_0+1-1≤𝒪(η^2d^-1) = 𝒪_δ,ℓ(η^2), or s_t_0+1-1≤ (1-d/2)s_t_0-1. It suffices to consider the latter case. If s_t_0+1≥ 1, we can again apply Lemma <ref> and repeat the analogous argument. Hence, we can obtain a time step t_0' ≤ t_0 + log(η^2/1-s_t_0) / log(1-d/2) = 𝒪_δ,ℓ(log(η^-1)) such that either s_t_0' < 1 or s_t_0'-1 = 𝒪_δ,ℓ (η^2) is satisfied. If s_t_0' < 1, we proved in Case 1 that there exists a time step t_2' = t_0'+𝒪_δ,ℓ(log(η^-1)) such that s_t_2'-1≤𝒪_δ,ℓ(η^2), and this is the desired bound. Now we carefully handle the error term 𝒪(η^2) obtained in Proposition <ref> and a provide tighter bound on s_t by proving Lemma <ref> stated below. If s_t-1 = 𝒪_δ, ℓ(η^2), then it holds that s_t+1 - 1 - h(p_t+1) η^2≤( 1 + 2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η^2 + 𝒪_δ, ℓ(η^4 p_t^2), where h(p) - 1/2( p r(p)^3/r'(p) + p^2 r(p)^2 ) for p0 and h(p) - 1/2r”(0) for p = 0. Suppose that s_t = 1+ 𝒪_δ, ℓ(η^2). Then, p_t+1 = | 1 - 2/s_t + η^2 p_t^2 r(p_t)^2 |·p_t= (1 + 𝒪_δ, ℓ(η^2)) p_t. By Eq. (<ref>) proved in Lemma <ref>, there exists ϵ_t = 𝒪_δ, ℓ(η^2) which satisfies the following: 1/r(p_t+1) = 1/r(p_t) + r'((1+ϵ_t)p_t)/r((1+ϵ_t)p_t)^2(2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t = 1/r(p_t) + (r'(p_t)/r(p_t)^2 + 𝒪_δ, ℓ(η^2p_t)) (2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t = 1/r(p_t) + r'(p_t)/r(p_t)^2(2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t + 𝒪_δ, ℓ(η^4 p_t^2), where we used the Taylor expansion on r'(p)/r(p)^2 with the fact that d/dp(r'(p)/r(p)^2) is bounded on [-4, 4] and that p_t≤ 4 to obtain the second equality. Note that q_t+1 = (1-η^2 p_t^2 r(p_t) (2q_t-r(p_t)))^-1q_t by Eq. (<ref>). Consequently, s_t+1 = (1 - η^2 p_t^2 r(p_t)(2q_t - r(p_t)))^-1(s_t + 2p_t r'(p_t)/r(p_t)(s_t-1) + η^2 p_t^3 r'(p_t) q_t ) + 𝒪_δ, ℓ(η^4 p_t^2) = (1 + η^2 p_t^2 r(p_t)(2q_t - r(p_t))) s_t + 2p_t r'(p_t)/r(p_t)(s_t-1) + η^2 p_t^3 r'(p_t) q_t + 𝒪_δ, ℓ(η^4 p_t^2) = 1 + (1 + 2p_tr'(p_t)/r(p_t)) (s_t-1) + η^2 p_t^2 r(p_t)(2q_t - r(p_t))s_t + η^2 p_t^3 r'(p_t) q_t + 𝒪_δ, ℓ(η^4p_t^2). Here, since s_t = 1 + 𝒪_δ, ℓ(η^2), we can rewrite η^2 p_t^2 r(p_t)(2q_t - r(p_t))s_t + η^2 p_t^3 r'(p_t) q_t =  η^2 p_t^2 r(p_t)^2 (2s_t - 1)s_t + η^2 p_t^3 r'(p_t) r(p_t) s_t =  η^2 p_t^2 r(p_t)^2 + η^2 p_t^3 r'(p_t) r(p_t) + 𝒪_δ, ℓ(η^4 p_t^2), which results in s_t+1 = 1 + (1 + 2p_tr'(p_t)/r(p_t)) (s_t-1) + η^2 p_t^2 r(p_t)^2 + η^2 p_t^3 r(p_t) r'(p_t) + 𝒪_δ, ℓ(η^4p_t^2). Note that h is even, and twice continuously differentiable function by Lemma <ref>. Consequently, h'(0) = 0 and h'(p) = 𝒪_ℓ(p), since h” is bounded on closed interval. Consequently, h(p_t+1) = h((1+𝒪_δ, ℓ(η^2))p_t) = h(p_t) + 𝒪_δ, ℓ(η^2 p_t^2). Hence, we can obtain the following: s_t+1 - 1 - h(p_t+1) η^2 = s_t+1 - 1 -h(p_t) η^2 + 𝒪_δ, ℓ(η^4 p_t^2) = s_t+1 - 1 + 1/2(p_t r(p_t)^3/r'(p_t) + p_t^2 r(p_t)^2) η^2 + 𝒪_δ, ℓ(η^4 p_t^2) = (1 + 2p_t r'(p_t)/r(p_t)) (s_t - 1 + 1/2(p_t r(p_t)^3/r'(p_t) + p_t^2 r(p_t)^2)η^2) + 𝒪_δ, ℓ(η^4 p_t^2) = (1 + 2p_tr'(p_t)/r(p_t)) (s_t-1-h(p_t)η^2) + 𝒪_δ, ℓ (η^4 p_t^2). Note that r(p_t) = (1 + 𝒪_δ, ℓ (η^2)) q_t ≥ (1 + 𝒪_δ, ℓ (η^2)) q_0 ≥ c_0 ≥ r(z_0) for small step size η, where z_0 = sup{z r'(z)/r(z)≥ -1/2}. 
Consequently, it holds that 1 + 2p_t r'(p_t)/r(p_t)≥ 0. Therefore, we have the desired inequality: s_t+1-1-h(p_t+1)η^2≤(1 + 2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η^2 + 𝒪_δ,ℓ(η^4 p_t^2). We now provide the proof of Theorem <ref>, restated below for the sake of readability. * By Proposition <ref>, there exists a time step t_a^* = 𝒪_δ, ℓ(log(η^-1)) which satisfies: s_t_a^*-1 = |q_t_a^*/r(p_t_a^*) - 1 | = 𝒪_δ, ℓ(η^2). By Lemma <ref>, there exists a constant D>0 which depends on δ, ℓ such that if s_t-1 = 𝒪_δ,ℓ(η^2), then s_t+1-1-h(p_t+1)η^2≤( 1 + 2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η^2 + D η^4 p_t^2. Hence, if s_t-1 = 𝒪_δ,ℓ(η^2) and s_t-1-h(p_t) η^2≥(-p_t r(p_t)/r'(p_t))Dη^4, then s_t+1 - 1 - h(p_t+1)η^2≤( 1 + p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η^2. For any t≤ T, we have q_t < 1-δ/2 so that if s_t-1 = 𝒪_δ,ℓ(η^2), then r(p_t)≤ (1+𝒪_δ,ℓ(η^2))q_t < 1-δ/4 for small step size η. From Eq. (<ref>) with t=t_a^*, we have either s_t_a^* - 1 - h(p_t_a^*)η^2 < (-p_t_a^* r(p_t_a^*)/r'(p_t_a^*))Dη^4, or s_t_a^*+1 - 1 - h(p_t_a^*+1)η^2≤( 1 + r̂(1-δ/4) r'(r̂(1-δ/4))/(1-δ/4)) s_t_a^* - 1 - h(p_t_a^*)η^2, where we used Assumption <ref> <ref> and p_t > r̂(1-δ/4). In the later case, s_t_a^*+1-1 = 𝒪_δ,ℓ(η^2) continues to hold and we can again use Eq. (<ref>) with t=t_a^*+1. By repeating the analogous arguments, we can obtain the time step t_a ≤ t_a^* + log( -Dη^4/r”(0) s_t_a^* - 1 - h(p_t_a^*)η^2)/log( 1 + r̂(1-δ/4) r'(r̂(1-δ/4))/(1-δ/4)) = 𝒪_δ, ℓ (log(η^-1)), which satisfies: either s_t_a - 1 - h(p_t_a)η^2 < (-p_t_a r(p_t_a)/r'(p_t_a))Dη^4, or s_t_a - 1 - h(p_t_a) η^2≤(-1/r”(0)) Dη^4 ≤(- p_t_a r(p_t_a)/r'(p_t_a)) Dη^4≤(- 4 r(4)/r'(4)) Dη^4, where we used p_t≤ 4 from Lemma <ref> and -zr(z)/r'(z)≥ -1/r”(0) for any z by Assumption <ref> <ref>. By Eq. (<ref>), if s_t - 1 - h(p_t) η^2≤(- 4 r(4)/r'(4)) Dη^4 is satisfied for any time step t, then s_t+1 - 1 - h(p_t+1) η^2≤( 1 + 2p_t r'(p_t)/r(p_t)) (- 4 r(4)/r'(4)) Dη^4 + D η^4 p_t^2 ≤(- 4 r(4)/r'(4)) Dη^4, by p_t≤ 4 from Lemma <ref> and Assumption <ref> <ref>. Hence, by induction, we have the desired bound as following: for any t≥ t_a, s_t - 1 - h(p_t) η^2≤(- 4 r(4)/r'(4)) Dη^4 = 𝒪_δ, ℓ(η^4), by p_t≤ 4 from Lemma <ref> and Assumption <ref> <ref>. §.§ Proof of Theorem <ref> In this subsection, we prove Theorem <ref>. We start by proving Lemma <ref> which provides a useful property of h defined in Theorem <ref>. Consider the function h defined in Theorem <ref>, given by h(p) - 1/2( p r(p)^3/r'(p) + p^2 r(p)^2 ) if p 0, and - 1/2r”(0) if p=0. Then, h is a positive, even, and bounded twice continuously differentiable function. It is clear that h is even. We first prove that h is positive. For any p 0, it holds that h(p) = -pr(p)^3/2r'(p)( 1 + pr'(p)/r(p))>0, since pr(p)/r'(p) < 0 and 1 + pr'(p)/r(p) = ℓ”(p)/r(p)> 0 by Assumption <ref> and convexity of ℓ. The function h is continuous since lim_p→ 0 h(p) = h(0). Continuous function on a compact domain is bounded, so h is bounded on the closed interval [-1, 1]. We can rewrite h as h(p) = 1/2 p^2r(p)^2 ( - r(p)/pr'(p) - 1 ). Note that p^2r(p)^2 = ℓ'(p)^2 ≤ 1, and (-r(p)/pr'(p)-1) is positive, decreasing function on p>0 by Assumption <ref> <ref>. Hence, h is bounded on [1, ∞). Since h is even, h is bounded on (-∞, 1]. Therefore, h is a bounded function on . We finally prove that h is twice continuously differentiable. Since r is even and C^4 on , we can check that h'(p) - 1/2[ r(p)^3 (r'(p) - pr”(p))/r'(p)^2 + pr(p) (5r(p) + 2pr'(p)) ] if p 0, and 0 if p=0. 
Moreover, for any p 0, h”(p) = - 1/2( 2r(p)^2 r”(p) (pr”(p) - r'(p))/r'(p)^3 - pr(p)^3 r^(3)(p)/r'(p)^2 - 3pr(p)^2 r”(p)/r'(p)) - 4r(p)^2 - 7pr(p)r'(p) - p^2 (r(p)r”(p) + r'(p)^2), and h”(0) = r^(4)(0)/6r”(0)^2 - 5/2. Since lim_p→ 0 h”(p) = h”(0), we can conclude that h is a twice continuously differentiable function. We now give the proof of Theorem <ref>, restated below for the sake of readability. * We first prove that there exists a time step t_b≥ 0 such that q_t_b > 1. Assume the contrary that q_t ≤ 1 for all t≥ 0. Let t_a be the time step obtained in Theorem <ref>. Then for any t≥ t_a, we have r(p_t) = (1 - h(p_t)η^2 + 𝒪_δ, ℓ(η^4)) q_t ≤ 1 - h(p_t) η^2/2, for small step size η. The function g(p) r(p) - 1 + h(p)η^2/2 is even, continuous, and has the function value g(0) = η^2/4r”(0) > 0. Consequently, there exists a positive constant ϵ>0 such that g(p) > 0 for all p∈ (-ϵ, ϵ). Then, we have p_t≥ϵ for all t≥ t_a, since g(p_t) ≤ 0. Moreover, s_t ≥3/4 for any t≥ t_a by Theorem <ref> for small step size η. This implies that for any t≥ t_a, q_t/q_t+1 = 1 - η^2 p_t^2 r(p_t)^2 (2s_t - 1) ≤ 1 - 1/2η^2 ℓ'(p_t)^2 ≤ 1 - 1/2η^2ℓ'(ϵ)^2, so q_t grows exponentially, which results in the existence of a time step t_b'≥ t_a such that q_t_b' > 1, a contradiction. Therefore, there exists a time step t_b such that q_t_b≤ 1 and q_t > 1 for any t > t_b, i.e., q_t jumps across the value 1. This holds since the sequence (q_t) is monotonically increasing. For any t≤ t_b, we have q_t+1≤ q_t + 𝒪(η^2) by Lemma <ref>, and this implies that t_b ≥Ω ((1-q_0)η^-2), as desired. Lastly, we prove the convergence of GD iterates (p_t, q_t). Let t > t_b be given. Then, q_t ≥ q_t_b+1 > 1 and it holds that |p_t+1/p_t| = 2r(p_t)/q_t - 1 -η^2 p_t^2 r(p_t)^2 ≤2/q_t_b+1 - 1 < 1. Hence, p_t is exponentially decreasing for t>t_b. Therefore, p_t converges to 0 as t→∞. Since the sequence (q_t)_t=0^∞ is monotonically increasing and bounded (due to Theorem <ref>), it converges. Suppose that (p_t, q_t) converges to the point (0, q^*). By Theorem <ref>, we can conclude that | q^* - 1 + η^2/2r”(0)| = 𝒪_δ,ℓ(η^4), which is the desired bound. §.§ Proof of Theorem <ref> In this subsection, we prove Theorem <ref>. We first prove a useful lemma which bounds the Hessian of the function (, ) ↦^⊤, stated below. For any Θ = (, ) with ∈^m× d, ∈^m, and ∈^d with _2 = 1, the following equality holds: ‖∇_(, )^2 (^⊤) ‖_2 ≤ 1. Moreover, if λ is an eigenvalue of ∇_(, )^2 (^⊤), then -λ is also an eigenvalue of ∇_(, )^2 (^⊤). We first define the notations. We use the operator ⊗ to represent tensor product, or Kronecker product between matrices. For example, for any given two matrices A = (a_ij) ∈^m× n and B, we define A⊗ B by A ⊗ B = [ a_11B … a_1nB; ⋮ ⋱ ⋮; a_m1B … a_mnB ]. We use 0_m× n to denote a m by n matrix with all entries filled with zero, and I_n denotes n by n identity matrix. Now we provide the proof of the original problem. Let = (u_ij)∈^m× d, = (v_i)∈^m, and = (x_j) ∈^d be given. Then, ^⊤ = ∑_i, j v_i U_ij x_j. We vectorize the parameter Θ = (, ) by (v_1,…,v_m,U_11,…,U_m1,…,U_1d,…, U_md)∈^m+md. Then, we can represent the Hessian as ∇_(, )^2 (^⊤) = ([ 0 ^⊤; 0_d× d ]) ⊗I_m. For any given c∈ and ∈^d with c^2 + _2^2 = 1, we have ‖([ 0 ^⊤; 0_d× d ]) ([ c; ]) ‖_2 = ‖([ ^⊤; c ]) ‖_2 ≤_2 = 1. Hence, by definition of matrix operator norm, we have ‖([ 0 ^⊤; 0_d× d ]) ‖_2 ≤ 1. Therefore, we can conclude that ‖∇_(, )^2 (^⊤)‖_2 = ‖([ 0 ^⊤; 0_d× d ]) ⊗I_m‖_2 = ‖([ 0 ^⊤; 0_d× d ]) ‖_2 ≤ 1. Now suppose that λ is an eigenvalue of ∇_(, )^2 (^⊤). 
We note that for any given matrices and , if λ_a is an eigenvalue of with the corresponding eigenvector _a and λ_b is an eigenvalue of with the corresponding eigenvector _b, then λ_a λ_b is an eigenvalue of ⊗ with the corresponding eigenvector _a ⊗_b. Moreover, any eigenvalue of ⊗ arises as such a product of eigenvalues of and . Hence, using Eq. (<ref>), we have λ is an eigenvalue of the matrix ([ 0 ^⊤; 0_d× d ]). We denote the corresponding eigenvector by (c, ^⊤)^⊤ where c∈ and ∈^d, i.e., it holds that ([ 0 ^⊤; 0_d× d ]) ([ c; ]) = ([ ^⊤; c ]) = λ([ c; ]). Consequently, we have ([ 0 ^⊤; 0_d× d ]) ([ -c; ]) = ([ ^⊤; -c ]) = - λ([ -c; ]), and this implies that -λ is an eigenvalue of the matrix ([ 0 ^⊤; 0_d× d ]). Therefore, by Eq. (<ref>), -λ is an eigenvalue of ∇_(, )^2 (^⊤). Using Lemma <ref>, we prove an important bound on the sharpness value provided by the Proposition <ref> stated below. For any Θ = (, ) with ∈^m× d, ∈^m, and ∈^d with _2 = 1, the following bound holds: |λ_max(Θ) - ℓ”(^⊤) (_2^2 + _2^2) |≤ 1 The loss Hessian at Θ = (, ) can be characterized as: ∇^2_Θℒ(Θ) = ℓ”(^⊤) (∇_Θ (^⊤))^⊗ 2 + ℓ'(^⊤) ∇_Θ^2 (^⊤). We first prove that λ_max(∇^2_Θℒ(Θ)) = ‖∇^2_Θℒ(Θ) ‖_2. Note that the largest absolute value of the eigenvalue of a symmetric matrix equals to its spectral norm. Hence, ‖∇^2_Θℒ(Θ) ‖_2 = max{λ_max(∇^2_Θℒ(Θ)), -λ_min(∇^2_Θℒ(Θ)) }, so it suffices to prove that λ_max(∇^2_Θℒ(Θ)) ≥ -λ_min(∇^2_Θℒ(Θ)). Let denote the eigenvector of ∇^2_Θℒ(Θ) corresponding to the smallest eigenvalue λ_min(∇^2_Θℒ(Θ)) with _2 = 1. Then, using Eq. (<ref>), we have λ_min (∇^2_Θℒ(Θ)) = ^⊤∇^2_Θℒ(Θ) = ℓ”(^⊤) ^⊤(∇_Θ (^⊤))^⊗ 2 + ℓ'(^⊤) ^⊤∇_Θ^2 (^⊤) ≥ℓ'(^⊤) ^⊤∇_Θ^2 (^⊤) ≥ - ℓ'(^⊤)∇_Θ^2 (^⊤)_2, where we used Lemma <ref> to obtain the last inequality. Note that the matrix ℓ”(^⊤) (∇_Θ (^⊤))^⊗ 2 is PSD, so that λ_max (∇^2_Θℒ(Θ)) ≥λ_max (ℓ'(^⊤) ∇_Θ^2 (^⊤)) = ℓ'(^⊤)∇_Θ^2 (^⊤)_2 ≥ - λ_min (∇^2_Θℒ(Θ)). Therefore, λ_max(∇^2_Θℒ(Θ)) = ‖∇^2_Θℒ(Θ) ‖_2. Now, we have the following triangle inequality: |λ_max(Θ) - ℓ”(^⊤) (_2^2 + _2^2) | = |‖∇^2_Θℒ(Θ)‖_2 - ‖ℓ”(^⊤) (∇_Θ (^⊤))^⊗ 2‖_2 | ≤‖ℓ'(^⊤) ∇_Θ^2 (^⊤) ‖_2 = |ℓ'(^⊤) |‖∇_(, )^2 (^⊤) ‖_2 ≤ 1, where the last inequality holds by Lemma <ref> and 1-Lipschitzness of ℓ. We now give the proof of Theorem <ref>, restated below for the sake of readability. * By Proposition <ref>, we can bound the sharpness λ_max(Θ_t) at time step t by |λ_max(Θ_t) - 2ℓ”(p_t)/η q_t|≤ 1. Since ℓ”(z) = r(z) + zr'(z), we can rewrite as following: |λ_max(Θ_t) - (s_t^-1 + p_t r'(p_t)/q_t) 2/η|≤ 1. By Theorem <ref> and since h is a bounded function by Lemma <ref>, we have s_t = 1+𝒪_ℓ(η^2) for any t≥ t_a. Consequently, s_t^-1 - 1 = 𝒪_ℓ(η^2) and r(p_t) - q_t =𝒪_ℓ(η^2). Moreover, for any 0<q<1, d/dq(r̂(q)r'(r̂(q))/q) = r̂'(q) (r'(r̂(q)) + r̂(q) r”(r̂(q)))/q - r̂(q) r'(r̂(q))/q^2 = 1/q( 1 + r̂(q) r”(r̂(q))/r'(r̂(q))) - r̂(q) r'(r̂(q))/q^2, so that lim_q→ 1^-(d/dq(r̂(q)r'(r̂(q))/q)) = lim_p→ 0^+(1 + pr”(p)/r'(p)) = 2. Therefore, d/dq(r̂(q)r'(r̂(q))/q) is bounded on [1/4, 1) and Taylor's theorem gives |p_t r'(p_t)/r(p_t) - r̂(q_t) r'(r̂(q_t))/q_t| = 𝒪_ℓ(| r(p_t) - q_t |) = 𝒪_ℓ(η^2), for any time step t with q_t < 1. Hence, if q_t<1, we have the following bound: |λ̃(q_t) - (s_t^-1 + p_t r'(p_t)/q_t) 2/η|≤| 1 - s_t^-1 + r̂(q_t) r'(r̂(q_t))/q_t - p_t r'(p_t)/r(p_t)|2/η + 𝒪_ℓ(η)= 𝒪_ℓ(η), where we used p_t r'(p_t)/q_t = p_t r'(p_t)/r(p_t) (1+𝒪_ℓ(η^2)) = p_t r'(p_t)/r(p_t) + 𝒪_ℓ(η^2), since 1 + p_t r'(p_t)/r(p_t) = ℓ”(p_t)/r(p_t) > 0 implies p_t r'(p_t)≤ r(p_t) ≤ 1. Now let t be any given time step with q_t ≥ 1. 
Then, r(p_t) = 1 - 𝒪_ℓ(η^2), and since r(z) = 1+r”(0)z^2 + 𝒪_ℓ(z^4) for small z, we have p_t = 𝒪_ℓ(η). Hence, |λ̃(q_t) - (s_t^-1 + p_t r'(p_t)/q_t) 2/η|≤| 1 - s_t^-1 - p_t r'(p_t)/r(p_t)|2/η + 𝒪_ℓ(η) = 𝒪_ℓ(η), for any t with q_t≥ 1. By Eqs. (<ref>), (<ref>), and (<ref>), we can conclude that for any t≥ t_a, we have |λ_max(Θ_t) - λ̃(q_t) |≤ 1 + 𝒪_ℓ(η). Finally, we can easily check that the sequence (λ̃(q_t))_t=0^∞ is monotonically increasing, since z↦zr'(z)/r(z) is a decreasing function by Assumption <ref> <ref> and the sequence (q_t) is monotonically increasing. § PROOFS FOR THE SINGLE-NEURON NONLINEAR NETWORK §.§ Formal statements of Theorem <ref> In this subsection, we provide the formal statements of Theorem <ref>. We study the GD dynamics on a two-dimensional function ℒ(x,y) 1/2(ϕ(x)y)^2, where x, y are scalars and ϕ is a nonlinear activation satisfying Assumption <ref>. We consider the reparameterization given by Definition <ref>, which is (p, q) ( x, 2/η y^2). We emphasize that the results we present in this subsection closely mirror those of the Section <ref>. In particular, * Assumption <ref> mirrors Assumption <ref>, * Assumption <ref> mirrors Assumption <ref>, * (gradient flow regime) Theorem <ref> mirrors Theorem <ref>. * (EoS regime, Phase 1) Theorem <ref> mirrors Theorem <ref>, * (EoS regime, Phase 2) Theorem <ref> mirrors Theorem <ref>, and * (progressive sharpening) Theorem <ref> mirrors Theorem <ref>. The proof strategies are also similar. This is mainly because the 1-step update rule Eq. (<ref>) resembles Eq. (<ref>) for small step size η. We now present our rigorous results contained in Theorem <ref>. Inspired by Lemma <ref>, we have an additional assumption on ϕ as below. Let r be a function defined by r(z)ϕ(z)ϕ'(z)/z for z 0 and r(0) 1. The function r satisfies Assumption <ref>. In contrast to the function r defined in Section <ref>, the expression 1+pr'(p)/r(p) can be negative, which implies that the constant c defined in Lemma <ref> is positive. As a result, the dynamics of p_t may exhibit a period-4 (or higher) oscillation or even chaotic behavior (as illustrated in Figure <ref>). We first state our results on the gradient flow regime. [gradient flow regime]theoremTheoremGF Let η∈ (0, r(1)/2(r(1)+2)) be a fixed step size and ϕ be a sigmoidal function satisfying Assumptions <ref> and <ref>. Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0 ∈(1/1-2η, r(1)/4η). Consider the reparameterized GD trajectory characterized in Eq. (<ref>). Then, the GD iterations (p_t, q_t) converge to the point (0, q^*) such that q_0 ≤ q^* ≤exp(2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) q_0 ≤ 2q_0. Theorem <ref> implies that in gradient flow regime, GD with initialization (x_0, y_0) and step size η converges to (0, y^*) which has the sharpness bounded by: (1 - 2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) y_0^2 ≤λ_max(∇^2 ℒ(0, y^*)) ≤ y_0^2. Now we provide our results on the EoS regime with an additional assumption below. Let r be a function defined in Assumption <ref>. Then r is C^4 on and satisfies: * z↦r'(z)/r(z)^2 is decreasing on , * z↦zr'(z)/r(z) is decreasing on z>0 and increasing on z<0, * z→zr(z)/r'(z) is decreasing on z>0 and increasing on z<0, and * r̂(1/2) r'(r̂(1/2))> -1/2. Note that the function r that arise from the activation ϕ = tanh satisfies Assumptions <ref>, <ref>, and <ref>. [EoS regime, Phase 1]theoremTheoremEoSearly Let η>0 be a small enough constant and ϕ be an activation function satisfying Assumptions <ref>, <ref>, and <ref>. 
Let z_0 sup_z{zr'(z)/r(z)≥ -1/2}, z_1 sup_z{zr'(z)/r(z)≥ -1}, and c_0 max{r(z_0), r(z_1)+1/2}∈ (1/2,1). Let δ∈ (0,1-c_0) be any given constant. Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0∈ (c_0, 1-δ). Consider the reparameterized GD trajectory characterized in Eq. (<ref>). We assume that for all t≥ 0 such that q_t<1, we have p_t 0. Then, there exists a time step t_a = 𝒪_δ, ϕ(log(η^-1)) such that for any t≥ t_a, q_t/r(p_t) = 1 + h(p_t) η + 𝒪_δ,ϕ(η^2) where h:→ is a function defined as h(p) -ϕ(p)^2 r(p)/p r'(p) if p 0, and -1/r”(0) if p=0. The main difference between Theorem <ref> and Theorem <ref> is the error term which is 𝒪(η^2) in the former and 𝒪(η^4) in the latter. This is because the 1-step update rule of q_t in Theorem <ref> is given by q_t+1 = (1+𝒪(η))q_t, while in Theorem <ref> we have q_t+1 = (1+𝒪(η^2))q_t. [EoS regime, Phase 2]theoremTheoremEoSlate Under the same settings as in Theorem <ref>, there exists a time step t_b = Ω((1-q_0)η^-1), such that q_t_b≤ 1 and q_t > 1 for any t> t_b. Moreover, the GD iterates (p_t, q_t) converge to the point (0, q^*) such that q^* = 1 - η/r”(0) + 𝒪_δ,ϕ(η^2). Theorem <ref> implies that in the EoS regime, GD with step size η converges to (0, y^*) which has the sharpness approximated as: λ_max(∇^2 ℒ(0, y^*)) = 2/η - 2/r”(0) + 𝒪_δ, ϕ(η). Theorem <ref> proves that progressive sharpening (i.e., sharpness increases) occurs during Phase 2. [progressive sharpening]theoremTheoremPS Under the same setting as in Theorem <ref>, let t_a denote the obtained time step. Define the function λ̃: _>0→ given by λ̃(q) (1 + r̂(q) r'(r̂(q))/q) 2/η if q≤ 1, and 2/η otherwise. Then, the sequence (λ̃(q_t))_t=0^∞ is monotonically increasing. For any t≥ t_a, the sharpness at GD iterate (x_t, y_t) closely follows the sequence (λ̃(q_t))_t=0^∞ satisfying the following: λ_max(∇^2 ℒ (x_t, y_t)) = λ̃(q_t) + 𝒪_ϕ(1). In Figure <ref>, we conduct numerical experiments on single neuron model with tanh-activation, demonstrating that λ̃(q_t) provides a close approximation of the sharpness. §.§ Proof of Theorem <ref> We give the proof of Theorem <ref>, restated below for the sake of readability. * Theorem <ref> directly follows from Proposition <ref> stated below. Suppose that η∈ (0, r(1)/2(r(1)+2)), p_0≤ 1 and q_0 ∈(1/1-2η, r(1)/4η). Then for any t≥ 0, we have p_t≤[ 1 - min{2(q_0 - 1)/q_0, r(1)/q_0}]^t ≤ 1, and q_0 ≤ q_t≤exp(2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) q_0 ≤ 2q_0. We give the proof by induction; namely, if p_t≤[ 1 - min{2(q_0 - 1)/q_0, r(1)/q_0}]^t, q_0 ≤ q_t≤exp(2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) q_0 ≤ 2q_0 are satisfied for time steps 0≤ t≤ k for some k, then the inequalities are also satisfied for the next time step k+1. For the base case, the inequalities are satisfied for t=0 by assumptions. For the induction step, we assume that the inequalities hold for any 0≤ t≤ k. We will prove that the inequalities are also satisfied for t=k+1. By induction assumptions, we have r(1)≤ r(p_k) ≤ 1 and q_0≤ q_k ≤ 2q_0. From Eq. (<ref>), we get |p_k+1/p_k| = | 1 - 2r(p_k)/q_k|≤max{ 1 - 2r(1)/2q_0, -1 + 2/q_0} = 1 - min{2(q_0-1), r(1)}/q_0. Due to the induction assumption, we obtain the desired bound on p_k+1 as following: p_k+1≤[ 1 - min{2(q_0 - 1)/q_0, r(1)/q_0}]^k+1. Moreover, for any 0≤ t≤ k, by Eq. (<ref>) we have 1 - 2η p_t^2 ≤ (1-η p_t^2)^2 ≤q_t/q_t+1 = (1-ηϕ(p_t)^2)^2 ≤ 1, where the second inequality comes from the fact that ϕ is 1-Lipschitz and ϕ(0) = 0 (Assumption <ref>). Hence, we have q_k+1≥ q_k ≥ q_0. Note that q_t/q_t+1∈ [1/2,1] for small η. 
Consequently, we have |log( q_0/q_k+1) |≤∑_t=0^k|log(q_t/q_t+1) | ≤ 2 ∑_t=0^k|q_t/q_t+1 - 1 | ≤ 2η∑_t=0^k p_t^2 ≤ 2η∑_t=0^k[ 1 - min{2(q_0 - 1)/q_0, r(1)/q_0}]^2t ≤ 2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1, where the second inequality holds since log (1 + z)≤ 2 z if z≤1/2. Therefore, we obtain the desired bound on q_k+1 as following: q_0 ≤ q_k+1≤exp(2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) q_0. Since q_0 ≥1/1-2η and q_0≤r(1)/4η, we have min{2(q_0 - 1)/q_0, r(1)/q_0}≥ 4η. This implies that q_k+1≤exp(1/2) q_0 ≤ 2q_0, as desired. §.§ Proof of Theorem <ref> In this subsection, we prove Theorem <ref>. We use the following notation: s_tq_t/r(p_t). All the lemmas in this subsection are stated in the context of Theorem <ref>. The proof structure resembles that of Theorem <ref>. We informally summarize the lemmas used in the proof of Theorem <ref>. Lemma <ref> proves that p_t is bounded by a constant and q_t increases monotonically with the increment bounded by 𝒪(η). Lemma <ref> states that in the early phase of training, there exists a time step t_0 where s_t_0 becomes smaller or equal to 2/2-r(1), which is smaller than 2. Lemma <ref> demonstrates that if s_t is smaller than 2 and p_t≥r̂(1-δ/4), then s_t-1 decreases exponentially. For the case where p_t < r̂(1-δ/4), Lemma <ref> proves that p_t increases at an exponential rate. Moreover, Lemma <ref> shows that if s_t < 1 at some time step, then s_t+1 is upper bounded by 1+𝒪(η). Combining these findings, Proposition <ref> establishes that in the early phase of training, there exists a time step t_a^* such that s_t_a^* = 1 + 𝒪_δ, ϕ(η). Lastly, Lemma <ref> demonstrates that if s_t = 1 + 𝒪_δ, ϕ(η), then s_t - 1 - h(p_t) η decreases exponentially. Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0∈ (c_0, 1-δ). Then for any t≥ 0 such that q_t ≤ 1, it holds that p_t≤ 4, and q_t≤ q_t+1≤ (1 + 𝒪(η))q_t. We prove by induction. We assume that for some t≥ 0, it holds that p_t≤ 4 and 1/2≤ q_t ≤ 1. We will prove that p_t+1≤ 4 and 1/2≤ q_t≤ q_t+1≤ (1+𝒪(η))q_t. For the base case, p_0≤ 1 ≤ 4 and 1/2≤ c_0 < q_t ≤ 1 holds by the assumptions on the initialization. Now suppose that for some t≥ 0, it holds that p_t≤ 4 and 1/2≤ q_t ≤ 1. By Eq. (<ref>), p_t+1 = | p_t - 2ϕ(p_t)ϕ'(p_t)/q_t|≤max{p_t, 2/q_t}≤ 4. where we used Assumption <ref> to bound ϕ(p_t)ϕ'(p_t)≤ 1. Moreover, 1-2η≤ (1-η)^2≤q_t/q_t+1 = (1-ηϕ(p_t)^2)^2≤ 1, since ϕ is bounded by 1. Hence, q_t≤ q_t+1≤ (1 + 𝒪(η))q_t, as desired. Lemma <ref> implies that p_t is bounded by a constant throughout the iterations, and q_t monotonically increases slowly, where the increment for each step is 𝒪(η). Hence, there exists a time step T=Ω(δη^-1) = Ω_δ(η^-1) such that for any t≤ T, it holds that q_t ≤ 1 - δ/2. Through out this subsection, we focus on these T early time steps. Note that for all 0≤ t ≤ T, it holds that q_t ∈ (c_0, 1-δ/2). There exists a time step t_0 = 𝒪_δ, ϕ(1) such that s_t_0≤2/2-r(1). We start by proving the following statement: for any 0≤ t≤ T, if 2/2-r(1) < s_t < 2r(1)^-1, then s_t+1 < 2r(1)^-1 and p_t+1≤ (1-r(1))p_t. Suppose that 2/2-r(1) < s_t < 2r(1)^-1. Then from Eq. (<ref>), it holds that |p_t+1/p_t| = | 1 - 2/s_t|≤ 1-r(1). Hence, p_t+1≤ (1-r(1))p_t. Now we prove s_t+1< 2r(1)^-1. Assume the contrary that s_t+1≥ 2r(1)^-1. Then, r(p_t+1) = q_t+1/s_t+1 < q_t+1 < 1-δ/2 so that p_t+1≥r̂(1-δ/2). 
By Mean Value Theorem, there exists p_t^* ∈ (p_t+1, p_t) such that 1/r(p_t+1) = 1/r(p_t - (p_t-p_t+1)) = 1/r(p_t) + r'(p_t^*)/r(p_t^*)^2 (p_t-p_t+1) ≤1/r(p_t) + r'(p_t+1)/r(p_t+1)^2(r(1)p_t) ≤1/r(p_t) - r'(r̂(1-δ/2))/(1-δ/2)^2(r(1)r̂(1-δ/2)) = 1/r(p_t) - Ω_δ,ϕ(1), where we used Assumption <ref> <ref> and r̂(1-δ/2) ≤p_t+1≤ (1-r(1))p_t. Consequently, s_t+1 = q_t+1/r(p_t+1) = (1+𝒪(η))q_t (1/r(p_t) - Ω_δ,ϕ(1)) ≤q_t/r(p_t) = s_t < 2r(1)^-1, for small step size η. This gives a contradiction to our assumption that s_t+1≥ 2r(1)^-1. Hence, we can conclude that s_t+1 < 2r(1)^-1, as desired. We proved that for any 0≤ t≤ T, if 2/2-r(1) < s_t < 2r(1)^-1, it holds that s_t+1 < 2r(1)^-1 and p_t+1≤ (1-r(1))p_t. At initialization, p_0≤ 1 and q_0 < 1, so that s_0 < r(1)^-1. If s_0 ≤2/2-r(1), then t_0=0 is the desired time step. Suppose that s_0 > 2/2-r(1). Then, we have s_1 < 2r(1)^-1 and p_1≤ (1-r(1))p_0≤ 1-r(1). Then we have either s_1 ≤2/2-r(1), or 2/2-r(1) < s_1 < 2r(1)^-1. In the previous case, t_0=1 is the desired time step. In the latter case, we can repeat the same argument and obtain s_2 < 2r(1)^-1 and p_2≤ (1-r(1))^2. By inductively repeating the same argument, we can obtain a time step t_0≤log(r̂(1-δ/2)) / log(1-r(1)) = 𝒪_δ, ϕ(1) such that either s_t_0≤2/2-r(1), or p_t_0≤r̂(1-δ/2). In the latter case, r(p_t_0) ≥ 1-δ/2 > q_t_0, and hence s_t_0 < 1 < 2/2-r(1). Therefore, t_0 = 𝒪_δ, ϕ(1) is the desired time step satisfying s_t_0≤2/2-r(1). According to Lemma <ref>, there exists a time step t_0=𝒪_δ,ϕ(1) such that s_t_0≤2/2-r(1) < 2(1+η^2)^-1 for small step size η. Now we prove the lemma below. Suppose that s_t≤ 1. Then, it holds that s_t+1≤ 1 + 𝒪(η). For any p∈ (0, r̂(q_t/2)), we have r(p)≥q_t/2 so that f_q_t(p) = (-1+2r(p_t)/q_t)p_t. Hence, ∂/∂ pf_q_t(p) = 2 r(p)/q_t(1 + p r'(p)/p) - 1, for any p∈ (0, r̂(q_t/2)). By Assumption <ref> <ref>, <ref> and q_t ≥ c_0 ≥ r(z_1)+1/2≥ 2r(z_1) where z_1 = sup_z {zr'(z)/r(z)≥ - 1}, both r(p) and (1+pr'(p)/r(p)) are positive, decreasing function on (0, r̂(q_t/2)). Consequently, ∂/∂ pf_q_t(p) is a decreasing function on (0, r̂(q_t/2)). Now note that q_t/2 < q_t < 1, which means r̂(1) = 0 < r̂(q_t) < r̂(q_t/2) by the definition of r̂. Note that ∂/∂ pf_q_t(p) at p = r̂(q_t) evaluates to ∂/∂ pf_q_t(r̂(q_t)) = 1 + 2 r̂(q_t) r'(r̂(q_t))/r(r̂(q_t))≥ 1 + 2 r̂(c_0)r'(r̂(c_0))/r(r̂(c_0))≥ 0, where the inequalities used Assumption <ref> <ref> and q_t > c_0 ≥ r(z_0) where z_0 := sup_z{zr'(z)/r(z)≥ -1/2}, from the statement of Theorem <ref>. Therefore, since ∂/∂ pf_q_t(p) is decreasing on (0, r̂(q_t/2)) and is nonnegative at r̂(q_t), for any p∈ (0, r̂(q_t)), it holds that ∂/∂ pf_q_t(p)≥ 0. In other words, f_q_t(p) is an increasing function on (0, r̂(q_t)). Since 0≤ s_t≤ 1, we have p_t≤r̂(q_t) and it holds that p_t+1 = (-1 + 2/s_t) p_t = f_q_t(p_t)≤f_q_t(r̂(q_t)) = r̂(q_t). Therefore, with this inequality and Lemma <ref>, we can conclude that s_t+1 = q_t+1/r(p_t+1) = (1 + 𝒪(η))q_t/r(p_t+1)≤q_t/r(r̂(q_t)) + 𝒪(η) = 1 + 𝒪(η). Using Lemma <ref>, we prove the following lemma. For any 0≤ t≤ T, if s_t < 2 and r(p_t)≤ 1 - δ/4, then s_t+1-1≤ (1-d)s_t-1 + 𝒪(η), where d∈ (0,1/2] is a constant which depends on δ and ϕ. From Eq. (<ref>) it holds that p_t+1/p_t = 1 - 2/s_t < 0, so that p_t and p_t+1 have opposite signs. By Mean Value Theorem, there exists θ_t between -1 and (1-2/s_t) satisfying 1/r(p_t+1) = 1/r(-p_t + (2(s_t-1)/s_t)p_t) = 1/r(-p_t) - r'(θ_t p_t)/r(θ_t p_t)^2(2(s_t-1)/s_t) p_t = 1/r(p_t) - r'(θ_t p_t)/r(θ_t p_t)^2(2(s_t-1)/s_t) p_t. 
where the last equality used the fact that p_t and θ_t p_t have opposite signs and r'(z) and z have opposite signs. Note that θ_t p_t is between p_t and p_t+1. Consequently, the value r'(θ_t p_t)/r(θ_t p_t)^2 is between r'(p_t)/r(p_t)^2 and r'(p_t+1)/r(p_t+1)^2 by Assumption <ref> <ref>. We will prove the current lemma based on Eq. (<ref>). We divide into following three cases: (1) s_t≥ 1 and s_t+1≥ 1, (2) s_t≥ 1 and s_t < 1, and (3) s_t < 1. Case 1. Suppose that s_t≥ 1 and s_t+1≥ 1. Here, we have p_t≥r̂(q_t)≥r̂(1-δ/2) and similarly p_t+1≥r̂(1-δ/2). By Assumption <ref> <ref>, r'(θ_t p_t)/r(θ_t p_t)^2≥r'(r̂(1-δ/2))/(1-δ/2)^2. Hence, Eq. (<ref>) gives 1/r(p_t+1)≤1/r(p_t) - r'(r̂(1-δ/2))/(1-δ/2)^2(2(s_t-1)/s_t) r̂(1-δ/2). Consequently, by Lemma <ref>, s_t+1 = q_t (1 + 𝒪(η))/r(p_t+1) = q_t/r(p_t+1) + 𝒪(η) ≤ s_t - r'(r̂(1-δ/2))/(1-δ/2)^2(2(s_t-1)/s_t) r̂(1-δ/2) q_t + 𝒪(η) ≤ s_t - r'(r̂(1-δ/2))/(1-δ/2)^2 (s_t-1) r̂(1-δ/2) 1/2 + 𝒪(η) ≤ s_t - r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2 (s_t-1) + 𝒪(η), where we used q_t > c_0 > 1/2 and s_t < 2. Therefore, we can obtain the following inequality: 0 ≤ s_t+1-1 ≤( 1 - r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2) (s_t-1) + 𝒪(η). Case 2. Suppose that s_t≥ 1 and s_t+1< 1. Here, we have r(p_t+1) > q_t+1≥ q_t ≥ r(p_t), so that p_t+1 < p_t. Consequently, r'(θ_t p_t)/r(θ_t p_t)^2≤r'(p_t)/r(p_t)^2 by Assumption <ref> <ref>. Hence, we can deduce from Eq. (<ref>) that 1/r(p_t+1) ≥1/r(p_t) - r'(p_t)/r(p_t)^2( 2(s_t-1)/s_t) p_t = 1/r(p_t) - 2p_tr'(p_t)/r(p_t)q_t (s_t-1) = 1/r(p_t) + 2p_tr'(p_t)/r(p_t)q_t (s_t-1). Consequently, by Assumption <ref> <ref>, s_t+1≥q_t/r(p_t+1)≥ s_t + 2p_tr'(p_t)/r(p_t) (s_t-1) ≥ s_t + 2r̂(c_0/2)r'(r̂(c_0/2))/r(r̂(c_0/2)) (s_t-1), where we used r(p_t)≥q_t/2 > c_0/2. Therefore, we can obtain the following inequality: 0 ≤ 1 - s_t+1≤ - (1 + 4r̂(c_0/2)r'(r̂(c_0/2))/c_0) (s_t-1), where -1<r̂(c_0/2)r'(r̂(c_0/2))/r(r̂(c_0/2)) = 2r̂(c_0/2)r'(r̂(c_0/2))/c_0 < 0, since c_0 ≥ r(z_1)+1/2≥ 2r(z_1) with z_1 = sup_z{zr'(z)/r(z)≥ -1}, and r(z_1) ≤1/2 holds by Assumption <ref> <ref>. Case 3. Suppose that s_t < 1. By Lemma <ref>, it holds that s_t+1≤ 1+𝒪(η). Moreover, we assumed r(p_t) ≤ 1 - δ/4, so that p_t≥r̂(1-δ/4). We also have p_t+1 = (-1+2/s_t)p_t > p_t≥r̂(1-δ/4). Consequently, by Assumption <ref> <ref>, it holds that r'(θ_t p_t)/r(θ_t p_t)^2≥r'(r̂(1 - δ/4))/(1 - δ/4)^2. Hence, by Eq. (<ref>), we have 1/r(p_t+1) ≥1/r(p_t) + r'(r̂(1 - δ/4))/(1 - δ/4)^2(2(1-s_t)/s_t) r̂(1-δ/4) ≥1/r(p_t) + r'(r̂(1 - δ/4))/(1 - δ/4)^2 2(1-s_t) r̂(1-δ/4) = 1/r(p_t) + 2 r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2 (1-s_t), and hence, s_t+1≥q_t/r(p_t+1)≥ s_t + r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2 (1-s_t), where we used q_t > 1/2. Therefore, we can obtain the following inequality: -𝒪(η) ≤ 1-s_t+1≤(1 - r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2) (1-s_t), where the first inequality is from Lemma <ref>. Combining the three cases, we can finally conclude that if we choose d min{1/2, r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2, 2 (1 + 2r̂(c_0/2)r'(r̂(c_0/2))/c_0), r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2}∈(0,1/2], then s_t+1-1≤ (1-d)s_t-1 + 𝒪(η), as desired. Lemma <ref> implies that if s_t<2 and p_t≥r̂(1-δ/4), then s_t-1 exponentially decreases. We prove Lemma <ref> to handle the regime p_t < r̂(1-δ/4), which is stated below. For any 0≤ t ≤ T, if r(p_t)≥ 1 - δ/4, it holds that |p_t+1/p_t|≥2/2-δ. If r(p_t)≥ 1 - δ/4, then s_t = q_t/r(p_t) < 1-δ/2/1-δ/4 = 4-2δ/4-δ, where we used q_t < 1-δ/2 for any 0≤ t≤ T. Consequently, |p_t+1/p_t| = 2/s_t - 1 ≥2(4-δ)/4-2δ - 1 = 2/2-δ. 
Now we prove Proposition <ref>, which proves that s_t reaches close to 1 with error bound of 𝒪(η). There exists a time step t_a^* = 𝒪_δ,ϕ(log(η^-1)) satisfying s_t_a^* = 1 + 𝒪_δ,ϕ(η). By Lemma <ref>, there exists a time step t_0 = 𝒪_δ,ϕ(1) such that s_t_0≤2/2-r(1). Here, we divide into two possible cases: (1) s_t_0<1, and (2) 1≤ s_t_0≤2/2-r(1). Case 1. Suppose that s_t_0<1. By Lemma <ref>, if r(p_t_0)≥ 1-δ/4 (or equivalently, p_t_0≤r̂(1-δ/4)), then there exists a time step t_1 ≤ t_0 + log(r̂(1-δ/4)/p_t_0) / log(2/2-δ) = 𝒪_δ, ϕ (1) such that p_t_1≥r̂(1-δ/4). We denote the first time step satisfying p_t_1≥r̂(1-δ/4) and t_1≥ t_0 by t_1 = 𝒪_δ, ϕ(1). By Lemma <ref>, it holds that s_t_1≤ 1+𝒪(η) since s_t_1-1<1. Consequently, if s_t_1≥ 1, then s_t_1-1≤𝒪(η) so that t_a^* = t_1 is the desired time step. Hence, it suffices to consider the case when s_t_1<1. Here, we can apply Lemma <ref> which implies that s_t_1+1-1≤ (1-d) s_t_1-1 + 𝒪(η), where d is a constant which depends on δ and ϕ. Then, there are two possible cases: either s_t_1-1≤𝒪(η d^-1), or s_t_1+1-1≤ (1-d/2)s_t_1-1. It suffices to consider the latter case, suppose that s_t_1+1-1≤ (1-d/2)s_t_1-1. Since we are considering the case s_t_1<1, again by Lemma <ref>, we have s_t_1+1≤ 1 + 𝒪(η). Since p_t_1+1/p_t_1 = 2/s_t_1-1 > 1, we have p_t_1+1≥p_t_1≥r̂(1-δ/4). This means that we can again apply Lemma <ref> and repeat the analogous argument. Hence, there exists a time step t_2 ≤ t_1 + log(η/1-s_t_1) / log (1-d/2) = 𝒪_δ, ϕ(log(η^-1)), such that s_t_2-1≤𝒪(η d^-1) = 𝒪_δ,ϕ(η). Case 2. Suppose that 1≤ s_t_0≤2/2-r(1). Then, r(p_t_0) ≤ q_t_0≤ 1-δ/2, so we can apply Lemma <ref>. There are two possible cases: either s_t_0+1-1≤𝒪(η d^-1) = 𝒪_δ,ϕ(η), or s_t_0+1-1≤ (1-d/2)s_t_0-1. It suffices to consider the latter case. If s_t_0+1≥ 1, we can again apply Lemma <ref> and repeat the analogous argument. Hence, we can obtain a time step t_0' ≤ t_0 + log(η/1-s_t_0) / log(1-d/2) = 𝒪_δ,ϕ(log(η^-1)) such that either s_t_0' < 1 or s_t_0'-1 = 𝒪_δ,ϕ (η) is satisfied. If s_t_0' < 1, we proved in Case 1 that there exists a time step t_2' = t_0'+𝒪_δ,ϕ(log(η^-1)) such that s_t_2'-1≤𝒪_δ,ϕ(η), and this is the desired bound. Now we carefully handle the error term 𝒪(η) obtained in Proposition <ref> and a provide tighter bound on s_t by proving Lemma <ref> stated below. If s_t-1≤𝒪_δ, ϕ(η), then it holds that s_t+1-1-h(p_t+1) η≤(1+2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η + 𝒪_δ,ϕ(η^2p_t^2), where h(p) -ϕ(p)^2 r(p)/p r'(p) if p 0, and -1/r”(0) if p=0. Suppose that s_t-1≤𝒪_δ,ϕ(η). Then, p_t+1 = | 1 - 2/s_t|p_t = (1 + 𝒪_δ, ϕ(η))p_t. By Eq. (<ref>) proved in Lemma <ref>, there exists ϵ_t = 𝒪_δ, ϕ(η) such that 1/r(p_t+1) = 1/r(p_t) + r'((1+ϵ_t)p_t)/r((1+ϵ_t)p_t)^2(2(s_t-1)/s_t) p_t = 1/r(p_t) + ( r'(p_t)/r(p_t)^2 + 𝒪_δ,ϕ(η p_t) ) (2(s_t-1)/s_t) p_t = 1/r(p_t) + r'(p_t)/r(p_t)^2(2(s_t-1)/s_t) p_t + 𝒪_δ,ϕ(η^2 p_t^2), where we used the Taylor expansion on r'(p)/r(p)^2 with the fact that d/dp(r'(p)/r(p)^2) is bounded on [-4, 4] and that p_t≤ 4 to obtain the second equality. Note that q_t+1 = (1-ηϕ(p_t)^2)^-2q_t = (1+2ηϕ(p_t)^2) q_t + 𝒪(η^2) by Eq (<ref>). Consequently, s_t+1 = (1+2ηϕ(p_t)^2) (s_t + r'(p_t)/r(p_t)^2(2(s_t-1)/s_t) p_t q_t) + 𝒪_δ,ϕ(η^2 p_t^2) = (1+2ηϕ(p_t)^2) s_t + 2p_t r'(p_t)/r(p_t)(s_t -1) + 𝒪_δ,ϕ(η^2 p_t^2) = 1 + (1 + 2p_t r'(p_t)/r(p_t)) (s_t-1) + 2ηϕ(p_t)^2 + 𝒪_δ,ϕ(η^2 p_t^2). Note that h is even, and twice continuously differentiable function by Lemma <ref>. Consequently, h'(0) = 0 and h'(p) = 𝒪_ϕ(p), since h” is bounded on closed interval. 
Consequently, h(p_t+1) = h((1+𝒪_δ, ϕ(η))p_t) = h(p_t) + 𝒪_δ, ϕ(η p_t^2). Hence, we can obtain the following: s_t+1 - 1 - h(p_t+1)η = s_t+1 - 1 - h(p_t) η + 𝒪_δ, ϕ(η^2p_t^2) = s_t+1 - 1 + ϕ(p_t)^2 r(p_t)/p_t r'(p_t)η + 𝒪_δ, ϕ(η^2p_t^2) = (1 + 2p_t r'(p_t)/r(p_t)) (s_t - 1 + ϕ(p_t)^2 r(p_t)/p_t r'(p_t)η) + 𝒪_δ, ϕ(η^2p_t^2) = (1 + 2p_t r'(p_t)/r(p_t))(s_t - 1 - h(p_t)η) + 𝒪_δ,ϕ(η^2p_t^2) Note that r(p_t) = (1 + 𝒪_δ, ϕ (η)) q_t ≥ (1 + 𝒪_δ, ϕ (η)) q_0 ≥ c_0 ≥ r(z_0) for small step size η, where z_0 = sup{z r'(z)/r(z)≥ -1/2}. Consequently, it holds that 1 + 2p_t r'(p_t)/r(p_t)≥ 0. Therefore, we can conclude that s_t+1-1-h(p_t+1) η≤(1+2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η + 𝒪_δ,ϕ(η^2p_t^2), as desired. Now we give the proof of Theorem <ref>, restated below for the sake of readability. * By Proposition <ref>, there exists a time step t_a^* = 𝒪_δ, ℓ(log(η^-1)) which satisfies: s_t_a^*-1 = |q_t_a^*/r(p_t_a^*) - 1 | = 𝒪_δ, ϕ(η). By Lemma <ref>, there exists a constant D>0 which depends on δ, ϕ such that if s_t-1 = 𝒪_δ,ϕ(η), then s_t+1-1-h(p_t+1)η≤( 1 + 2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η + D η^2 p_t^2. Hence, if s_t-1 = 𝒪_δ,ϕ(η) and s_t-1-h(p_t) η≥(-p_t r(p_t)/r'(p_t))Dη^2, then s_t+1 - 1 - h(p_t+1)η≤( 1 + p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η. For any t≤ T, we have q_t < 1-δ/2 so that if s_t-1 = 𝒪_δ,ϕ(η), then r(p_t)≤ (1+𝒪_δ,ϕ(η))q_t < 1-δ/4 for small step size η. From Eq. (<ref>) with t=t_a^*, we have either s_t_a^* - 1 - h(p_t_a^*)η < (-p_t_a^* r(p_t_a^*)/r'(p_t_a^*))Dη^2, or s_t_a^*+1 - 1 - h(p_t_a^*+1)η≤( 1 + r̂(1-δ/4) r'(r̂(1-δ/4))/(1-δ/4)) s_t_a^* - 1 - h(p_t_a^*)η, where we used Assumption <ref> <ref> and p_t > r̂(1-δ/4). In the latter case, s_t_a^*+1-1 = 𝒪_δ,ϕ(η) continues to hold and we can again use Eq. (<ref>) with t=t_a^*+1. By repeating the analogous arguments, we can obtain the time step t_a ≤ t_a^* + log( -Dη^2/r”(0) s_t_a^* - 1 - h(p_t_a^*)η)/log( 1 + r̂(1-δ/4) r'(r̂(1-δ/4))/(1-δ/4)) = 𝒪_δ, ℓ (log(η^-1)), which satisfies: either s_t_a - 1 - h(p_t_a)η < (-p_t_a r(p_t_a)/r'(p_t_a))Dη^2, or s_t_a - 1 - h(p_t_a) η≤(-1/r”(0)) Dη^2 ≤(- p_t_a r(p_t_a)/r'(p_t_a)) Dη^2≤(- 4 r(4)/r'(4)) Dη^2, where we used p_t≤ 4 from Lemma <ref> and -zr(z)/r'(z)≥ -1/r”(0) for any z by Assumption <ref> <ref>. By Eq. (<ref>), if s_t - 1 - h(p_t) η≤(- 4 r(4)/r'(4)) Dη^2 is satisfied for any time step t, then s_t+1 - 1 - h(p_t+1) η≤( 1 + 2p_t r'(p_t)/r(p_t)) (- 4 r(4)/r'(4)) Dη^2 + D η^2 p_t^2 ≤(- 4 r(4)/r'(4)) Dη^2, by p_t≤ 4 from Lemma <ref> and Assumption <ref> <ref>. Hence, by induction, we have the desired bound as following: for any t≥ t_a, s_t - 1 - h(p_t) η≤(- 4 r(4)/r'(4)) Dη^2 = 𝒪_δ, ℓ(η^2), by p_t≤ 4 and Assumption <ref> <ref>. §.§ Proof of Theorem <ref> In this subsection, we prove Theorem <ref>. We start by proving Lemma <ref> which provides a useful property of h defined in Theorem <ref>. Consider the function h defined in Theorem <ref>, given by h(p) -ϕ(p)^2 r(p)/p r'(p) if p 0, and -1/r”(0) if p=0. Then, h is a positive, even, and bounded twice continuously differentiable function. By Assumption <ref>, h is a positive, even function. Moreover, h is continuous since lim_p→ 0 h(p) = -1/r”(0) = h(0). Continuous function on a compact domain is bounded, so h is bounded on the closed interval [-1, 1]. Note that ϕ(p)^2 ≤ 1, and (-r(p)/pr'(p)) is positive, decreasing function on p>0 by Assumption <ref> <ref>. Hence, h is bounded on [1, ∞). Since h is even, h is bounded on (-∞, 1]. Therefore, h is a bounded on . We finally prove that h is twice continuously differentiable. 
Since r is even and C^4 on ℝ, we can check that for any p ≠ 0, h'(p) = -2r(p)^2/r'(p) - ϕ(p)^2/p + ϕ(p)^2 r(p) (r'(p) + pr”(p))/p^2 r'(p)^2, and h'(0)=0. Moreover, for any p ≠ 0, h”(p) = - 6 r(p) + 2ϕ(p)^2/p^2 + ϕ(p)^2 r”(p)/pr'(p) + ϕ(p)^2r(p)r^(3)(p)/p r'(p)^2 + 2r(p)^2 (r'(p)+2pr”(p))/pr'(p)^2 -ϕ(p)^2 r(p) (2r'(p) + pr”(p))/p^3 r'(p)^2 - ϕ(p)^2 r(p) r”(p) (r'(p) + 2pr”(p))/p^2 r'(p)^3, and h”(0) = - 1 - 2 ϕ^(3)(0)/3r”(0) + r^(4)(0)/3 r”(0)^2. Since lim_p→ 0 h”(p) = h”(0), we can conclude that h is a twice continuously differentiable function. Now we give the proof of Theorem <ref>, restated below for the sake of readability. * We first prove that there exists a time step t≥ 0 such that q_t > 1. Assume, to the contrary, that q_t ≤ 1 for all t≥ 0. Let t_a be the time step obtained in Theorem <ref>. Then for any t≥ t_a, we have r(p_t) = (1 - h(p_t)η + 𝒪_δ, ϕ(η^2)) q_t ≤ 1 - h(p_t) η/2, for small step size η. The function g(p) ≔ r(p) - 1 + h(p)η/2 is even, continuous, and has the function value g(0) = -η/(2r”(0)) > 0. Consequently, there exists a constant ϵ>0 such that g(p) > 0 for all p∈ (-ϵ, ϵ). Then, we have |p_t|≥ϵ for all t≥ t_a, since g(p_t) ≤ 0. This implies that for any t≥ t_a, it holds that q_t+1/q_t = (1 - ηϕ(p_t)^2)^-2≥ (1 - ηϕ(ϵ)^2)^-2 > 1, so q_t grows exponentially, which results in the existence of a time step t_b'≥ t_a such that q_t_b' > 1, a contradiction. Therefore, there exists a time step t_b such that q_t_b≤ 1 and q_t > 1 for any t > t_b, i.e., q_t jumps across the value 1. This holds since the sequence (q_t) is monotonically increasing. For any t≤ t_b, we have q_t+1≤ q_t + 𝒪(η) by Lemma <ref>, and this implies that t_b ≥Ω ((1-q_0)η^-1), as desired. Lastly, we prove the convergence of GD iterates (p_t, q_t). Let t > t_b be given. Then, q_t ≥ q_t_b+1 > 1 and it holds that |p_t+1/p_t| = 2r(p_t)/q_t - 1 ≤2/q_t_b+1 - 1 < 1. Hence, |p_t| is exponentially decreasing for t>t_b. Therefore, p_t converges to 0 as t→∞. Since the sequence (q_t)_t=0^∞ is monotonically increasing and bounded (due to Theorem <ref>), it converges. Suppose that (p_t, q_t) converges to the point (0, q^*). By Theorem <ref>, we can conclude that | q^* - 1 + η/r”(0)| = 𝒪_ϕ,ℓ(η^2), which is the desired bound. §.§ Proof of Theorem <ref> In this subsection, we prove Theorem <ref>. We first prove an important bound on the sharpness value, provided by Proposition <ref> stated below. For any (x,y)∈ℝ^2 with r(x)+xr'(x) ≥ 0, the following inequality holds: (r(x) + xr'(x)) y^2 ≤λ_max (∇^2 ℒ(x,y)) ≤ (r(x) + xr'(x)) y^2 + 4x^2 r(x)/r(x) + xr'(x), Let (x,y)∈ℝ^2 be given. The loss Hessian at (x,y) is ∇^2 ℒ (x,y) = [ (ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2 2ϕ(x)ϕ'(x) y; 2ϕ(x)ϕ'(x)y ϕ(x)^2 ]. Note that the largest absolute value among the eigenvalues of a symmetric matrix equals its spectral norm. Since the trace of the Hessian, tr(∇^2 ℒ (x,y)) = (ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2 + ϕ(x)^2 = (r(x) + xr'(x)) y^2 + ϕ(x)^2 ≥ 0, is non-negative, the spectral norm of the Hessian ∇^2 ℒ (x,y) equals its largest eigenvalue. Hence, we have λ_max (∇^2 ℒ(x,y)) = ‖∇^2 ℒ(x,y)‖_2 ≥‖∇^2 ℒ(x,y) ·[ 1; 0 ]‖_2 = [((ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2)^2 + (2ϕ(x)ϕ'(x) y)^2]^1/2 ≥ (ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2 = (r(x) + xr'(x)) y^2, which is the desired lower bound. We also note that, for any matrix, the spectral norm is bounded above by the Frobenius norm, i.e. ‖·‖_2 ≤‖·‖_F. 
Hence, we have λ_max (∇^2 ℒ(x,y)) = ∇^2 ℒ(x,y)_2 ≤∇^2 ℒ(x,y)_F = [ ((ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2)^2 + 2(2ϕ(x)ϕ'(x) y)^2 + ϕ(x)^4 ]^1/2 = [ (r(x)+xr'(x))^2 y^4 + 8x^2r(x)^2 y^2 + ϕ(x)^4 ]^1/2 ≤[ ( (r(x)+xr'(x))y^2 + 4x^2 r(x)/r(x) + xr'(x))^2 ]^1/2 = (r(x)+xr'(x))y^2 + 4x^2 r(x)/r(x) + xr'(x), which is the desired upper bound. Now we give the proof of Theorem <ref>, restated below for the sake of readability. * By Theorem <ref> and since h is a bounded function by Lemma <ref>, we have s_t = 1+𝒪_ϕ(η) for any t≥ t_a. Note that r(p_t) = (1 + 𝒪_δ, ϕ (η)) q_t ≥ (1 + 𝒪_δ, ϕ (η)) q_0 ≥ c_0 ≥ r(z_0) for small step size η, where z_0 = sup{z r'(z)/r(z)≥ -1/2}. Consequently, it holds that 1 + 2p_t r'(p_t)/r(p_t)≥ 0, and this implies 1 + p_t r'(p_t)/r(p_t)≥1/2. By Proposition <ref>, we can bound the sharpness λ_t λ_max (∇^2 ℒ(x_t,y_t)) by (1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t≤λ_t ≤(1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t + 4p_t^2 (1 + p_t r'(p_t)/r(p_t))^-1. By Lemma <ref>, p_t≤ 4 for any time step t. Moreover, since 1 + p_t r'(p_t)/r(p_t)≥1/2 holds for any t≥ t_a with small step size η, we have 4p_t^2 (1 + p_t r'(p_t)/r(p_t))^-1 = 𝒪_ϕ(1). Hence, for any t≥ t_a, it holds that λ_t = (1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t + 𝒪_ϕ(1). For any t≥ t_a, we have s_t = 1+𝒪_ϕ(η) and this implies s_t^-1-1 = 𝒪_ϕ(η) and r(p_t) - q_t =𝒪_ϕ(η). For any 0<q<1, we have d/dq(r̂(q)r'(r̂(q))/q) = r̂'(q) (r'(r̂(q)) + r̂(q) r”(r̂(q)))/q - r̂(q) r'(r̂(q))/q^2 = 1/q( 1 + r̂(q) r”(r̂(q))/r'(r̂(q))) - r̂(q) r'(r̂(q))/q^2, so that lim_q→ 1^-(d/dq(r̂(q)r'(r̂(q))/q)) = lim_p→ 0^+(1 + pr”(p)/r'(p)) = 2. Therefore, d/dq(r̂(q)r'(r̂(q))/q) is bounded on [1/4, 1) and Taylor's theorem gives |p_t r'(p_t)/r(p_t) - r̂(q_t) r'(r̂(q_t))/q_t|≤𝒪_ϕ( r(p_t) - q_t ) ≤𝒪_ϕ(η). for any time step t with q_t < 1. Hence, if q_t<1, we have the following bound: |λ̃(q_t) - (1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t|≤| 1 - s_t^-1 + r̂(q_t) r'(r̂(q_t))/q_t - p_t r'(p_t)/r(p_t)|2/η + 𝒪_ϕ(1) = 𝒪_ϕ(1), where we used p_r r'(p_t)/q_t = p_t r'(p_t)/r(p_t) ( 1 + 𝒪_ϕ(η)) = p_t r'(p_t)/r(p_t) + 𝒪_ϕ(η), since p_t r'(p_t)≤ r(p_t) for any t ≥ t_a. Now let t be any given time step with q_t ≥ 1. Then, r(p_t) = 1 - 𝒪_ϕ(η), and since r(z) = 1+r”(0)z^2 + 𝒪_ϕ(z^4) for small z, we have p_t = 𝒪_ϕ(√(η)). Hence, |λ̃(q_t) - (1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t|≤| 1 - s_t^-1 - p_t r'(p_t)/r(p_t)|2/η + 𝒪_ϕ(1)= 𝒪_ϕ(1), for any t with q_t≥ 1, where we again used p_r r'(p_t)/q_t = p_t r'(p_t)/r(p_t) ( 1 + 𝒪_ϕ(η)) = p_t r'(p_t)/r(p_t) + 𝒪_ϕ(η). By Eqs. (<ref>), (<ref>), and (<ref>), we can conclude that for any t≥ t_a, we have |λ_t - λ̃(q_t) |≤𝒪_ϕ(1), the desired bound. Finally, we can easily check that the sequence (λ̃(q_t))_t=0^∞ is monotonically increasing, since z↦zr'(z)/r(z) is a decreasing function by Assumption <ref> <ref> and the sequence (q_t) is monotonically increasing.
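As a numerical cross-check of the sharpness approximation just established, the sketch below runs plain gradient descent on the two-parameter loss ℒ(x,y) = ϕ(x)^2 y^2 / 2 with ϕ = tanh — an assumption on our part, chosen because its Hessian matches the matrix displayed in the proof of the proposition above — and compares the exact largest Hessian eigenvalue λ_t with the tracking quantity (1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/(η q_t). The identifications p_t = x_t and q_t = 2/(η y_t^2) are our reconstruction, consistent with the recursions for (p_t, q_t) used in the proofs, rather than definitions quoted from the text. The two printed columns should agree up to an 𝒪(1) discrepancy and rise towards 2/η.

# Hedged numerical sketch (illustration only): GD on L(x, y) = 0.5 * (phi(x) * y)**2
# with phi = tanh, comparing the top Hessian eigenvalue with the tracking formula.
import numpy as np

def phi(x):   return np.tanh(x)
def dphi(x):  return 1.0 / np.cosh(x) ** 2
def d2phi(x): return -2.0 * np.tanh(x) / np.cosh(x) ** 2

def r(p):
    # assumed form r(p) = phi(p) * phi'(p) / p, with r(0) = 1
    return 1.0 if abs(p) < 1e-12 else float(phi(p) * dphi(p) / p)

def dr(p, h=1e-6):
    return (r(p + h) - r(p - h)) / (2 * h)      # a numerical derivative suffices here

def top_hessian_eig(x, y):
    H = np.array([[(phi(x) * d2phi(x) + dphi(x) ** 2) * y ** 2, 2 * phi(x) * dphi(x) * y],
                  [2 * phi(x) * dphi(x) * y, phi(x) ** 2]])
    return np.linalg.eigvalsh(H)[-1]            # eigenvalues in ascending order

eta = 5e-3
x, y = 0.3, float(np.sqrt(2.0 / (eta * 0.85)))  # chosen so that q_0 = 2/(eta*y_0^2) = 0.85
for t in range(401):
    p, q = x, 2.0 / (eta * y ** 2)
    if t % 80 == 0:
        lam = top_hessian_eig(x, y)
        track = (1.0 + p * dr(p) / r(p)) * 2.0 * r(p) / (eta * q)
        print(f"t={t:3d}  lambda_t={lam:8.2f}  tracking={track:8.2f}  2/eta={2.0/eta:.1f}")
    gx = phi(x) * dphi(x) * y ** 2              # dL/dx
    gy = phi(x) ** 2 * y                        # dL/dy
    x, y = x - eta * gx, y - eta * gy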
http://arxiv.org/abs/2307.04295v1
20230710012341
Quasi-normal modes of naked singularities in presence of non-linear scalar fields
[ "O. S. Stashko", "O. V. Savchuk", "V. I. Zhdanov" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2307.06243v1
20230712153410
Reconstructing Spatiotemporal Data with C-VAEs
[ "Tiago F. R. Ribeiro", "Fernando Silva", "Rogério Luís de C. Costa" ]
cs.DB
[ "cs.DB", "cs.LG" ]
T. R. Ribeiro et al. CIIC, ESTG, Polytechnic of Leiria, Portugal CIIC, Polytechnic of Leiria, Portugal {tiago.f.ribeiro, fernando.silva, rogerio.l.costa}@ipleiria.pt Reconstructing Spatiotemporal Data with C-VAEs Tiago F. R. Ribeiro10000-0003-1603-1218 Fernando Silva10000-0001-9335-1851 Rogério Luís de C. Costa 20000-0003-2306-7585 August 12, 2023 =============================================================================================================================== The continuous representation of spatiotemporal data commonly relies on using abstract data types, such as moving regions, to represent entities whose shape and position continuously change over time. Creating this representation from discrete snapshots of real-world entities requires using interpolation methods to compute in-between data representations and estimate the position and shape of the object of interest at arbitrary temporal points. Existing region interpolation methods often fail to generate smooth and realistic representations of a region's evolution. However, recent advancements in deep learning techniques have revealed the potential of deep models trained on discrete observations to capture spatiotemporal dependencies through implicit feature learning. In this work, we explore the capabilities of Conditional Variational Autoencoder (C-VAE) models to generate smooth and realistic representations of the spatiotemporal evolution of moving regions. We evaluate our proposed approach on a sparsely annotated dataset on the burnt area of a forest fire. We apply compression operations to sample from the dataset and use the C-VAE model and other commonly used interpolation algorithms to generate in-between region representations. To evaluate the performance of the methods, we compare their interpolation results with manually annotated data and regions generated by a U-Net model. We also assess the quality of generated data considering temporal consistency metrics. The proposed C-VAE-based approach demonstrates competitive results in geometric similarity metrics. It also exhibits superior temporal consistency, suggesting that C-VAE models may be a viable alternative to modelling the spatiotemporal evolution of 2D moving regions. § INTRODUCTION sections/1_intro.tex § BACKGROUND AND RELATED WORK sections/2_background_related_work.tex § SPATIOTEMPORAL DATA COMPRESSION AND RECONSTRUCTION sections/3_methodology.tex § PERFORMANCE EVALUATION sections/4_evaluation.tex § CONCLUSION sections/5_conclusion.tex § DATA AVAILABILITY The source code is available at the following <https://github.com/CIIC-C-T-Polytechnic-of-Leiria/Reconstr_CVAE_paper>. Additionally, the dataset can be accessed directly from this link <https://zenodo.org/record/7944963/#.ZGYP6nbMIQ8>. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could influence the work reported in this paper. § ACKNOWLEDGEMENTS This work is partially funded by FCT - Fundação para a Ciência e a Tecnologia, I.P., through projects MIT-EXPL/ACC/0057/2021 and UIDB/04524/2020, and under the Scientific Employment Stimulus - Institutional Call - CEECINST/00051/2018. splncs04
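Since the record above contains only the abstract and front matter, the following is a generic sketch of the kind of Conditional VAE the abstract describes: an encoder/decoder pair conditioned on a normalised timestamp, so that latent samples can be decoded at unseen times to obtain in-between region masks. All layer sizes, variable names and the conditioning scheme are illustrative assumptions on our part and are not taken from the paper.

# Generic C-VAE sketch (illustrative only; not the authors' architecture).
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, mask_dim=64 * 64, cond_dim=1, latent_dim=32):
        super().__init__()
        self.latent_dim = latent_dim
        self.encoder = nn.Sequential(
            nn.Linear(mask_dim + cond_dim, 512), nn.ReLU(),
            nn.Linear(512, 2 * latent_dim),                    # -> (mu, log_var)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 512), nn.ReLU(),
            nn.Linear(512, mask_dim),                          # logits of the binary mask
        )

    def forward(self, mask, cond):
        mu, log_var = self.encoder(torch.cat([mask, cond], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)   # reparameterisation trick
        return self.decoder(torch.cat([z, cond], dim=-1)), mu, log_var

def elbo_loss(logits, target, mu, log_var):
    rec = nn.functional.binary_cross_entropy_with_logits(logits, target, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kl

# Toy usage with random tensors standing in for annotated burnt-area masks.
model = CVAE()
masks = torch.rand(8, 64 * 64).round()      # 8 fake binary masks
times = torch.rand(8, 1)                    # normalised acquisition times in [0, 1]
logits, mu, log_var = model(masks, times)
elbo_loss(logits, masks, mu, log_var).backward()
with torch.no_grad():                       # decode a latent sample at an unseen time t = 0.5
    z = torch.randn(1, model.latent_dim)
    interpolated = torch.sigmoid(model.decoder(torch.cat([z, torch.tensor([[0.5]])], dim=-1)))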
http://arxiv.org/abs/2307.03978v2
20230708135448
Separable MV-algebras and lattice-groups
[ "Vincenzo Marra", "Matías Menni" ]
math.RA
[ "math.RA", "math.AG", "math.CT", "math.LO", "Primary: 06D35, Secondary: 06F20, 18B50, 12F10" ]
General theory determines the notion of separable MV-algebra (equivalently, of separable unital lattice-ordered Abelian group). We establish the following structure theorem: An MV-algebra is separable if, and only if, it is a finite product of algebras of rational numbers—i.e., of subalgebras of the MV-algebra [0,1]∩. Beyond its intrinsic algebraic interest, this research is motivated by the long-term programme of developing the algebraic geometry of the opposite of the category of MV-algebras, in analogy with the classical case of commutative K-algebras over a field K. § INTRODUCTION For any field K, a (commutative) K-algebra is separable if, and only if, it is a finite product of finite separable field extensions of K. See, for example, <cit.>. The aim of the present paper is to establish the analogue of this fact for MV-algebras and lattice-groups. We show as our main result that an MV-algebra is separable exactly when it is a finite product of algebras of rational numbers—the subalgebras of [0,1]∩ (Theorem <ref>). By a well-known theorem of Mundici <cit.>, the category of MV-algebras is equivalent to the category of lattice-ordered Abelian groups with a unit. We frame our treatment in the language of MV-algebras, and postpone to the final Appendix <ref> a synopsis of its translation to lattice-groups. While the main result of this paper holds independent algebraic interest, it finds its deeper motivation in a broader mathematical landscape on which we offer some comments in this introduction. As explained in <cit.>, some of Grothendieck’s algebro-geometric constructions may be abstracted to the context of extensive categories <cit.>. A category with finite coproducts is extensive if the canonical functor /X ×/Y →/(X + Y) is an equivalence for every pair of objects X, Y in . Extensivity attempts to make explicit a most basic property of (finite) coproducts in categories `of spaces'. For instance, the category of topological spaces and continuous functions between them is extensive; the category of groups is not. Extensive experience indeed confirms that conceiving an extensive category as a category `of spaces' is a useful conceptual guide. Essential to the development of Algebraic Geometry is the fact that , the opposite of the category of (commutative unital) rings, is extensive. (It easily follows that, for any ring R, the opposite of the category R/ of R-algebras is extensive.) Extensivity naturally determines a notion of complemented subobject. So, in an extensive category with finite products, it is also natural to consider the objects with complemented diagonal. These are traditionally called decidable objects, and it is useful to think of them as the `discrete spaces' inside the category `of spaces' where they live. For instance, a topological space is decidable if, and only if, it is discrete. For any ring R, and any R-algebra A, let A be the corresponding object in the extensive category (R/). Then A is decidable if, and only if, A is separable as an R-algebra. In other words, the separable R-algebras are precisely those for which the associated affine scheme is decidable. Let us say that a category is coextensive if its opposite is extensive. 
In light of the above comments, an object in a coextensive category is called separable if the corresponding object in is decidable. The category of MV-algebras is coextensive. This provides the notion of separable MV-algebra that is the topic of the present paper. Explicitly, the MV-algebra A is separable if, and only if, there is a homomorphism f A + A → A such that the span A [l]_-∇ A + A [r]^-f A is a product diagram, where ∇ A+A→ A denotes the codiagonal map. The geometry of has long been the subject of intensive hands-on study because of its striking connections with several areas of classical mathematics, from piecewise-linear topology to the geometry of numbers. The characterisation of decidable objects in that we present here was motivated by our ongoing long-term project to study of the `gros Zariski' topos determined by the theory of MV-algebras as the domain of a pre-cohesive geometric morphism <cit.>. We postpone the topos-theoretic consequences of separability to further publications; no Topos Theory is required for the proof of the purely algebraic results in the present paper. The plan of the paper is as follows. In Sections <ref>, <ref>, and <ref> we introduce the necessary material to prove a sufficient condition for an extensive category with finite products to have the property that every decidable object is a finite coproduct of connected subterminals. In Section <ref> we verify that is coextensive. In Theorem <ref> we characterise the subterminal objects of as, in , the subalgebras of [0,1]∩. In order to extend Theorem <ref> to a characterisation of separable MV-algebras we need to introduce the Pierce functor for , an analogue of the standard ring-theoretic functor by the same name. The key fact is that the Pierce functor preserves coproducts. To prove it, in Section <ref> we develop the required material on the connected-component functor π_0 in . Using the theory of spectra of MV-algebras recalled in Section <ref> along with the topological π_0 functor, we are able to show in Theorem <ref> that the Pierce functor does preserve all coproducts. Theorems <ref> and <ref> are combined in Section <ref> to obtain our main result, the mentioned characterisation of separable MV-algebras. We conclude Section <ref> with a discussion that points to further research aimed at enriching the connected-component functor on to an `arithmetic connected-component functor'; this functor, we submit, arises out of locally finite MV-algebras. Finally, in Appendix <ref> we collect the translation of our main results to lattice-groups. § EXTENSIVE CATEGORIES AND CONNECTED OBJECTS In this section we recall the definition of extensive category and of connected object. For more details about extensive categories see, for example, <cit.> and references therein. A category with finite coproducts is called extensive if for every X and Y in the canonical functor /X ×/Y →/(X + Y) is an equivalence. Examples of extensive categories are (sets and functions), (finite sets and functions), any topos, , (compact Hausdorff spaces and continuous maps), (Stone[By a Stone space we mean a compact Hausdorff zero-dimensional space. Such spaces are often called Boolean in the literature.] spaces and continuous maps). The categories of rings, of Boolean algebras and of distributive lattices[Throughout the paper, with the exception of Appendix <ref>, we assume distributive lattices to have top and bottom elements preserved by homomorphisms.] are coextensive. See <cit.> and <cit.> for further examples. 
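Before proceeding, it may help to see one of the "algebras of rational numbers" from the main theorem arise concretely. A subalgebra of [0,1]∩ℚ generated by finitely many rationals can be computed by closing them under the MV-operations ¬x = 1 - x and x ⊕ y = min(1, x + y); the short computation below, an illustration on our part and not part of the paper, does this with exact arithmetic. Starting from 2/5 one obtains the chain of multiples of 1/5, and starting from 1/2 and 1/3 the multiples of 1/6; in general the closure lies inside the multiples of 1/N, where N is the least common multiple of the generators' denominators, so the loop terminates.

# Illustrative computation (not from the paper): the MV-subalgebra of [0,1] ∩ Q
# generated by finitely many rationals, under neg(x) = 1 - x and x ⊕ y = min(1, x + y).
from fractions import Fraction
from itertools import product

def generated_subalgebra(generators):
    elems = {Fraction(0), Fraction(1), *map(Fraction, generators)}
    while True:
        new = {1 - x for x in elems} | {min(Fraction(1), x + y) for x, y in product(elems, repeat=2)}
        if new <= elems:
            return sorted(elems)
        elems |= new

print(generated_subalgebra([Fraction(2, 5)]))                        # the six-element chain {k/5 : 0 <= k <= 5}
print(len(generated_subalgebra([Fraction(1, 2), Fraction(1, 3)])))   # 7: the chain of multiples of 1/6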
In extensive categories coproduct injections are regular monomorphisms, coproducts of monomorphisms are monomorphisms, and the initial object is strict in the sense that any map X → 0 is an isomorphism. Also, extensive categories are closed under slicing. A coproduct in_0 X → X + Y ← Y :in_1 is * disjoint if the coproduct injections are monic and the commutative square 0 [d] [r] Y [d]^-in_1 X [r]_-in_0 X + Y is a pullback; * universal if for every arrow Z → X + Y the two pullback squares below exist V [d] [r] Z [d] [l]W[d] X [r]_-in_0 X + Y [l]^-in_1 Y and the top cospan is a coproduct diagram. The following result is essentially <cit.>. A category with finite coproducts is extensive if, and only if, coproducts are universal and disjoint. Assume from now on that is an extensive category. A monomorphism u U → X in is called complemented if there is a v V → X such that the cospan u U → X ← V :v is a coproduct diagram. In this case, v is the complement of u. Notice that complemented monomorphisms are regular monomorphisms because they are coproduct injections. In the next definition, and throughout, we identify monomorphisms and subobjects whenever convenient. An object X in is connected if it has exactly two complemented subobjects. In or , an object is connected if and only if it has exactly two clopens. An object A in is connected as an object in if and only if A has exactly two idempotents. We remark that, in general, connected objects are not closed under finite products. For each X in we let X denote the poset of complemented subobjects of X. We stress that if u U → X and v V → X are two complemented monomorphisms in and f U → V is such that v f = u then f is complemented <cit.>. So for any two complemented subobjects u, v of X, there is no ambiguity in writing u ≤ v since it means the same for u, v considered as subobjects, or as complemented subobjects. Extensivity easily implies that the poset X has finite infima, a bottom element, and an involution. This structure may be used to prove that X is actually a Boolean algebra which interacts well with pullbacks in the sense that, for any map f X → Y in , pulling back along f determines a Boolean algebra homomorphism Y → X. So, assuming that is well-powered, the assignment X ↦ X extends to a functor → between extensive categories that preserves finite coproducts. We will use the following simple equivalences. For any object X in the following are equivalent. * X is connected. * X is not initial and, for every complemented subobject u U → X, U is initial or u is an isomorphism. * X is not initial and, for every coproduct diagram U → X ← V, U is initial or V is initial. § FINITE-COPRODUCT PRESERVING FUNCTORS Let and be extensive categories, and let L → preserve finite coproducts. Such a functor preserves complemented monomorphisms so, for any X in , L induces a function X →(L X) which is actually a map in , natural in X. (It is relevant to remark such a functor also preserves pullbacks along coproduct injections. See <cit.>.) We will say that L is injective surjective/bijective on complemented subobjects if and only if X →(L X) has the corresponding property for every X in . The functor L → is injective on complemented subobjects if and only if it reflects 0. In this case, L also reflects connected objects. Assume first that L is injective on complemented subobjects and let X in be such that L X = 0. Then (L X) is the terminal Boolean algebra and, as X →(L X) is injective by hypothesis, X is also trivial. 
For the converse notice that if L reflects 0 then the map X →(L X) in has trivial kernel for every X in . To prove the second part of the statement assume that X in is such that L X is connected in . If X were initial then so would L X because L preserves finite coproducts and, in particular, the initial object. So X is not initial. Now assume that U → X ← V is a coproduct diagram. Then so is L U → L X ← L V. Since L X is connected, either L U or L V is initial by Lemma <ref>. As L reflects 0, either U or V is initial, so X is connected by the same lemma. (Alternatively, if X →(L X) is injective and its codomain is the initial Boolean algebra then so is the domain.) We will be particularly interested in extensive categories wherein every object is a finite coproduct of connected objects. For example, satisfies this property, but neither nor does. If is the category of finitely presentable K-algebras for a field K, then also satisfies this property. If L → is bijective on complemented subobjects then the following hold. * The functor L preserves connected objects. * For any object X in , if L X is a finite coproduct of connected objects then so is X. * If every object in is a finite coproduct of connected objects then so is the case in . * Assume that and have finite products and that L preserves them. If is such that finite products of connected objects are connected then so is the case in . To prove the first item just notice that, by hypothesis, X →(L X) is an isomorphism for each X in . Hence if X has exactly two complemented subobjects then so does L X. Before proving the second item we establish an auxiliary fact. Let X be in and let u U → L X be a complemented subobject in with connected U. Then, as L is surjective on complemented objects by hypothesis, there exists a complemented subobject v V → X in such that L v = u as subobjects of L X. Then L V ≅ U is connected, so V is connected by Lemma <ref>. Thus, we have lifted the `connected component' u of L X to one of X. To prove the second item let (u_i | i ∈ I) be a finite family of pairwise-disjoint complemented subobjects of L X with connected domain whose join is the whole of L X. For each i∈ I, let v_i be the complemented subobject of X induced by u_i as in the previous paragraph. As L reflects 0, the family (v_i | i∈ I) is pairwise disjoint. Also, L ⋁_i∈ I v_i = ⋁_i ∈ I L v_i = ⋁_i∈ I u_i is the whole of LX. As L is injective on complemented subobjects, ⋁_i∈ I v_i must be the whole of X. In summary, we have lifted the finite coproduct decomposition of L X to one of X. The third item follows at once from the second. For the fourth item, let X be the product of a finite family (X_i | i ∈ I) of connected objects in . Then L X is the product of (L X_i | i ∈ I) because L preserves finite products. Each L X_i is connected because L preserves connected objects by the first item, so L X is connected by our hypothesis on . Hence X is connected by Lemma <ref>. We next prove a sufficient condition for a functor L as above to be bijective on complemented subobjects. If L → has a finite-coproduct preserving right adjoint, then L is bijective on complemented subobjects. Let R be the right adjoint to L and let σ and τ be the unit and counit of L ⊣ R. We show that L is both injective and surjective on complemented subobjects. To prove injectivity it is enough to show that L reflects 0 (Lemma <ref>). So let X be an object in such that L X is initial. 
Then we may transpose the isomorphism L X → 0 in to a map X → R 0, but R 0 = 0 because R is assumed to preserve finite coproducts. Since the initial object is strict, X is initial. We next show that L is surjective on complemented subobjects. Let u U → L X be a complemented monomorphism. Then R u is complemented so the left pullback square below exists V [d]_-v [r] R U [d]^-R u L V [d]_-L v[r] L(R U) [d]^-L(R u)[r]^-τ U [d]^-u X [r]_-σ R (L X) L X [r]_-Lσ L(R (L X)) [r]_-τ L X by extensivity of . Then the two squares on the right above obviously commute, and the bottom composite is the identity. Moreover, <cit.> implies that both squares are pullbacks, so u and L v coincide as subobjects of LX. Combining Lemma <ref> and Proposition <ref> we obtain the following. Assume that L → has a finite-coproduct preserving right adjoint. If every object in is a finite coproduct of connected objects then so is the case in . § DECIDABLE OBJECTS Let be an extensive category with finite products. In particular, has a terminal object 1. An object X is called subterminal if the unique map X → 1 is monic. For any object X in , the following are equivalent. * The object X is subterminal. * The diagonal Δ X → X× X is an isomorphism. * The projections _0, _1 X× X → X are equal. The first item implies the second because for any monomorphism X → 1 the following diagram X [d]_-id[r]^-id X [d]^-! X [r]_-! 1 is a pullback. The second item implies the third because any map has at most one inverse. To prove that the third item implies the first, let f, g Y → X. Then there exists a unique map fg Y → X × X such that _0 fg = f and _1 fg = g. So f = _0 fg = _1 fg = g. That is, for any object Y there is a unique map Y → X. This means that the unique map X → 1 is monic. We stress that extensivity plays no rôle in Lemma <ref>, which is a general fact about categories with finite products. An object X in is decidable if the diagonal Δ X → X × X is complemented. Lemma <ref> shows that subterminal objects in are decidable, and that they may be characterised as those decidable objects X such that the diagonal Δ X → X × X not only is complemented, but is actually an isomorphism. The full subcategory of decidable objects will be denoted by →. If is lextensive (i.e. extensive and with finite limits) it follows from <cit.> that is lextensive and that the inclusion → preserves finite limits, finite coproducts and that it is closed under subobjects. Moreover, for any X, Y in , X + Y is decidable if, and only if, both X and Y are decidable. On the other hand, arbitrary coproducts of decidable objects need not be decidable—consider, for instance, an infinite copower of the terminal object in or . For any object X in the following are equivalent: * X is subterminal and connected. * X is decidable and X × X is connected. If X is subterminal and connected then Δ X → X× X is an isomorphism by Lemma <ref>. So X is decidable and X× X is as connected as X. For the converse assume that X is decidable and that X × X is connected. Decidability means that the subobject Δ X → X × X is complemented; as X × X is connected, X is initial or Δ X → X × X is an isomorphism by Lemma <ref>. But X is not initial (because X× X is connected) so Δ X → X × X is an isomorphism. Then X is as connected as X× X, and X is subterminal by Lemma <ref>. Let be another extensive category with finite products and let L → preserve finite products and finite coproducts. Assume that L reflects 0 and that 1 is connected in . Then the following hold for every X in . 
* If L X = 1 then X is connected. * If X in is decidable and L X = 1 then X is subterminal. The functor L reflects 0 so it reflects connected objects by Lemma <ref>. As 1 is connected in by hypothesis, L X = 1 implies X connected. If L X = 1 then L (X × X) = L X × L X = 1. So X × X is connected by the first item. Therefore X is subterminal by Proposition <ref>. It easily follows from the definition of decidable object that L preserves decidable objects. In more detail, the preservation properties of L imply that the left-bottom composite below [d] @.>[r] [d] [r]_-L factors uniquely through the right inclusion and, moreover, → preserves finite products and finite coproducts. In fact, → preserves all the finite limits that L preserves (because the subcategories of decidable objects are closed under finite limits). Additionally assume from now on that L → has a finite-coproduct preserving right adjoint R →. Notice that under the present hypotheses both L and R preserve finite products and finite coproducts. It follows that the adjunction L⊣ R restricts to one between and . If every decidable object in is a finite coproduct of connected objects then so is the case in . The adjunction L ⊣ R → restricts to one L' ⊣ R' →, and every object in is a finite coproduct of connected objects by hypothesis. So we may apply Corollary <ref> to L'→ Because is lextensive, there exists an essentially unique coproduct preserving functor → that also preserves the terminal object. The functor sends a finite set I to the copower I· 1 in . The categories , , and other examples have the property that this functor → coincides with →. Notice that if this condition holds then 1 is connected in , because = → is closed under subobjects and preserves 1. If the canonical functor → coincides with → then every decidable object in is a finite coproduct of connected subterminals. By Corollary <ref> every decidable object in is a finite coproduct of connected objects. So it is enough to prove that every connected decidable object in is subterminal. For this, let X be connected and decidable. Then L X is decidable, because L preserves finite products and finite coproducts, and it is connected by Lemma <ref> and Proposition <ref>. By hypothesis, the canonical → coincides with → so L X = 1. Hence X is decidable and L X = 1. Therefore X is subterminal by Lemma <ref>. For a lextensive category we have considered several conditions. * Every decidable object is a finite coproduct of connected objects. * Every decidable object is a finite coproduct of connected subterminals. * The canonical functor → coincides with the inclusion →. For a field K, (K/) satisfies the first condition but not the second. The categories and satisfy the third condition. The third condition implies the second which, in turn, implies the first. Proposition <ref> shows that for certain adjunctions L ⊣ R →, if satisfies the third condition then satisfies the second. This will be used to prove that satisfies the second condition (Theorem <ref>). § THE COEXTENSIVE CATEGORY OF MV-ALGEBRAS For background on MV-algebras we refer to the standard textbooks <cit.>, of which we also follow the notation. In this section we show that is coextensive by proving that products are codisjoint and couniversal (Proposition <ref>). Let be a regular category with finite colimits. If 0 → 1 is a regular epimorphism then products are codisjoint. Let A be an object in . As the composite 0 → A → 1 is a regular epimorphism by hypothesis, so is A → 1 by regularity of . 
That is, not only 0 → 1 but actually any A → 1 is a regular epimorphism. As every regular epimorphism is the coequalizer of its kernel pair, A → 1 is the coequalizer of the two projections A × A → A. Also, as products of regular epimorphisms are epimorphisms, the product of id A → A and B → 1 is a regular epimorphism A × B → A × 1. That is, the projection A × B → A is a regular epimorphism. To complete the proof we recall a basic fact about colimits: for a commutative diagram as on the left below E [d]_-e [r]<+1ex>^-e_0[r]<-1ex>_-e_1 D [d]_-d [r] B [d] (A× A) × B [d]_-_0[rr]<+1ex>^-_0 × B[rr]<-1ex>_-_1 × B A× B [d]_-_0[r]^-_1 B [d] F [r]<+1ex>^-f_0[r]<-1ex>_-f_1 A [r] Q A× A [rr]<+1ex>^-_0[rr]<-1ex>_-_1 A [r] 1 such that d e_i = f_i e for i ∈{0, 1}, the top and bottom forks are coequalizers and e is epic, the inner right square is a pushout. Applying this observation to the diagram on the right above we obtain that the inner right square in that diagram is a pushout. In particular, if is the category of models for an algebraic theory with at least one constant then the initial object 0 is non-empty and so 0 → 1 is a regular epimorphism. This is the case, of course, for =. In , couniversality of products is entailed by the intimate relationship between idempotents and product decompositions. The situation for is analogous. An element b of an MV-algebra A is called Boolean if it satisfies one of the following equivalent conditions (see <cit.>): b⊕ b=b b⊙ b=b b∨ b=1 b∧ b=0. For x∈ A we let A → A[x^-1] be the quotient map induced by the congruence on A generated by the pair (x,1). For any f A → B in the following diagram is a pushout A [d]_-f[r] A[x^-1] [d] B [r] B[(f x)^-1] where the right vertical map is the unique one making the square commute. Standard, using the universal property of the (horizontal) quotient homomorphisms. For any MV-algebra A and every Boolean element x∈ A, let ⟨ x ⟩ be the ideal of A generated by { x}. Then the quotient q A→ A/⟨ x⟩ has the universal property of A → A[x^-1]. If k A → B is such that k x = 1 then x ∈k, so ⟨ x⟩k. By the universal property of quotients there is exactly one homomorphism c A/⟨ x⟩→ C such that cq=k. In , the diagram D [l]^-q_0 C [r]_-q_1 E is a product precisely when there exists a Boolean element x∈ C such that q_0 has the universal property of C → C[( x)^-1] and q_1 has the universal property of C → C[x^-1]. When this is the case, the element x∈ C with the foregoing property is unique. Assume the diagram is a product. Then there is a unique x∈ C such that q_ix=i, i=0,1. This x is Boolean because 0 and 1 are. Hence x is Boolean too, and thus ⊕-idempotent; therefore, ⟨ x ⟩={c∈ C| c ≤ x}. If c≤ x then q_1c≤ q_1( x)=0, so q_1c=0 and c∈q_1. If c∈q_1 then q_1c=0≤ q_1( x) and q_0c≤ 1=q_0( x), so c≤ x by the definition of product order. We conclude q_1=⟨ x ⟩. The projection q_1 is surjective so Lemma <ref> entails that q_1 has the universal property of C → C[x^-1]. An entirely similar argument applies to q_0. Conversely, assume q_0 and q_1 have the universal properties in the statement. By Lemma <ref> we may identify q_0 with C → C/⟨ x⟩ and q_1 with C → C/⟨ x⟩. So it is enough to show that the canonical C → C/⟨ x⟩× C/⟨ x⟩ is bijective. Injectivity follows because if c≤ x, x then c≤ x∧ x=0, so ⟨ x⟩∩⟨ x⟩ = 0. To prove surjectivity, let (q_0 c_0 , q_1 c_1) ∈ C/⟨ x⟩× C/⟨ x⟩ with c_0, c_1 ∈ C and consider c = (c_0 ∧ x) ∨ (c_1 ∧ x) ∈ C. It is easy to check that C → C/⟨ x⟩× C/⟨ x⟩ sends c in the domain to (q_0 c_0 , q_1 c_1) in the codomain. 
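The role Boolean elements play in the preceding lemmas can be checked by brute force on small finite examples. The sketch below, illustrative code rather than part of the paper, works with the finite Łukasiewicz chains Ł_n = {0, 1/n, ..., 1} and their binary products under pointwise operations. It verifies that the four equivalent characterisations of a Boolean element agree on every element, that the only Boolean elements of a chain are 0 and 1, and that for the Boolean element x = (0,1) of a product the assignment c ↦ (c mod ⟨¬x⟩, c mod ⟨x⟩) is a bijection, as the lemma above predicts.

# Brute-force illustration (not from the paper): Boolean elements and the induced
# decomposition C ≅ C/⟨¬x⟩ × C/⟨x⟩ in small finite MV-algebras.
from fractions import Fraction

ONE, ZERO = Fraction(1), Fraction(0)

def chain(n):                                   # Łukasiewicz chain Ł_n = {0, 1/n, ..., 1}
    return [Fraction(k, n) for k in range(n + 1)]

def neg(a):      return tuple(1 - x for x in a)
def oplus(a, b): return tuple(min(ONE, x + y) for x, y in zip(a, b))
def odot(a, b):  return tuple(max(ZERO, x + y - 1) for x, y in zip(a, b))
def join(a, b):  return tuple(max(x, y) for x, y in zip(a, b))
def meet(a, b):  return tuple(min(x, y) for x, y in zip(a, b))
def leq(a, b):   return all(x <= y for x, y in zip(a, b))

def booleans(A):
    out = []
    for a in A:
        top, bot = tuple(ONE for _ in a), tuple(ZERO for _ in a)
        conds = (oplus(a, a) == a, odot(a, a) == a, join(a, neg(a)) == top, meet(a, neg(a)) == bot)
        assert all(conds) or not any(conds)     # the four characterisations agree
        if all(conds):
            out.append(a)
    return out

def dist(c, e):                                 # Chang distance d(c, e) = (c ⊖ e) ⊕ (e ⊖ c)
    return oplus(odot(c, neg(e)), odot(e, neg(c)))

def classes(A, ideal):                          # representatives of congruence classes modulo an ideal
    reps = []
    for c in A:
        if not any(dist(c, rep) in ideal for rep in reps):
            reps.append(c)
    return reps

def class_of(c, A, ideal):
    return next(rep for rep in classes(A, ideal) if dist(c, rep) in ideal)

print(booleans([(x,) for x in chain(5)]))       # only (0,) and (1,): a chain has no non-trivial Boolean elements

C = [(x, y) for x in chain(2) for y in chain(3)]
x = (ZERO, ONE)                                 # a Boolean element of the product
assert x in booleans(C) and len(booleans(C)) == 4
I_x    = [c for c in C if leq(c, x)]            # ⟨x⟩ = {c : c ≤ x}, since x is Boolean
I_negx = [c for c in C if leq(c, neg(x))]
pairs = {(class_of(c, C, I_negx), class_of(c, C, I_x)) for c in C}
assert len(pairs) == len(C) == len(classes(C, I_negx)) * len(classes(C, I_x))
print(len(classes(C, I_negx)), len(classes(C, I_x)))   # 4 and 3: the factors Ł_3 and Ł_2 are recovered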
The content of Lemma <ref> is far from new, cf. e.g. <cit.> and <cit.>. However, having expressed that content in the form that is most suitable for the sequel, we have included a proof for the reader's convenience. is coextensive. Any algebraic category is complete and cocomplete, so in particular it has finite products and pushouts. We appeal to the characterization of extensive categories in Proposition <ref>. Codisjointness of products follows from Lemma <ref> or from a direct calculation observing that the projections of a product A × B send (0, 1) to 0 and 1 respectively, so 0 = 1 must hold in the pushout. It remains to show that products are couniversal. So we consider the pushout of a product diagram as below A [d]_-h [l]_- pr_0 A× B [d]^-f[r]^- pr_1 B [d]^-k D [l]^-q_0 C [r]_-q_1 E and prove that the bottom span is product diagram. Indeed, observe that the Boolean element (0, 1) ∈ A× B is sent to the Boolean element xf(1, 0) ∈ C so, by Lemma <ref>, it is enough to check that q_0 inverts x and q_1 inverts x; but this follows from Lemma <ref>. Although it was not necessary to prove the main result of this section, it seems worthwhile to observe that, in the context of algebraic categories, Lemma <ref> may be strengthened to a characterisation. In any algebraic category, binary products are codisjoint if, and only if, the initial algebra has non-empty underlying set. If the initial algebra 0 is not empty then the unique map 0 → 1 is a regular epimorphism so we can apply Lemma <ref>. For the converse implication notice that the following square 0 × 0 [d] [r] 0 [d] 0[r] 1 is a pushout by hypothesis. As any of the projections 0× 0 → 0 is split epic, its pushout 0 → 1 is a regular epimorphism, so 0 must be non-empty. § SUBTERMINALS IN , AND RATIONAL ALGEBRAS The aim of this section is to characterize subterminal objects in . Perhaps unexpectedly, the following fact will play an important rôle. Monomorphisms in are stable under pushout. It is well known <cit.> that, in algebraic categories, stability of monomorphisms under pushout is equivalent to the conjunction of the Amalgamation Property (AP) and of the Congruence Extension Property (CEP). Pierce proved the AP for Abelian lattice-groups in <cit.>, and Mundici <cit.> observed that Pierce's result transfers through the functor Γ to MV-algebras. For a different proof of the AP for Abelian lattice-groups and MV-algebras, see <cit.>. The CEP for MV-algebras was proved in <cit.>; for an alternative proof, see <cit.>. For yet another proof in the more general context of residuated lattices, see <cit.>. Most of the work will be done on the algebraic side, so it is convenient to start with an arbitrary category with finite coproducts whose initial object is denoted 0. As suggested above, we concentrate on the objects A such that the unique map 0 → A is epic. Notice that such an object is exactly a subterminal object in , but we prefer to avoid introducing new terminology such as `cosubterminal' or `supra-initial'. For convenience we state here the dual of Lemma <ref>. For any object A in , the following are equivalent: * The map 0 → A is epic. * The codiagonal ∇ A + A → A is an isomorphism. * The coproduct injections in_0 , in_1 A → A + A are equal. We shall also need a simple auxiliary fact. Let 0→ A be epic and m B → A be a map. If the coproduct map m + m B + B → A + A is monic then 0 → B is epic. The following square commutes B + B [d]_-m + m[r]^-∇ B [d]^-m A + A [r]_-∇ A by naturality of the codiagonal. 
The bottom map is an isomorphism by Lemma <ref>, and the left vertical map is monic by hypothesis. So the top map is also monic, as well as split epic. Assume from now on that has finite colimits and that monomorphisms are stable under pushout. We stress that this stability property is quite restrictive. For instance, it does not hold in . On the other hand, we already know that it holds in by Lemma <ref>. The map 0 → A is epic if, and only if, for every monomorphism B → A, 0 → B is epic. One direction is trivial and does not need stability of monomorphisms. For the converse observe that, as monomorphisms are stable under pushout, finite coproducts of monomorphisms are monic. So we can apply Lemma <ref>. The following is a further auxiliary fact. For any d A → D and e B → A in , if e is epic and the composite d e B → D is monic then d is an monic. The right square below is trivially a pushout and, since e B → A is epic, the left square is also a pushout B [d]_-e[r]^-e A [d]^-id[r]^-d D [d]^-id A [r]_-id A [r]_-d D so the rectangle is a pushout too. As the top composite is monic, and these are are stable under pushout by hypothesis, the bottom map is monic. We emphasise the next particular case of Lemma <ref>. Let d A → D be a regular epimorphism in . If 0 → A is epic and 0 → D is monic then d is an isomorphism. Assume now that our category with finite colimits and stable monomorphisms has a terminal object 1 such that for any object A in the unique A → 1 is a regular epimorphism. This is common in algebraic categories. A quotient of A in is an equivalence class of regular epimorphisms with domain A, where two such are equivalent if they are isomorphic as objects of A/. An object A is simple if it has exactly two quotients, namely, those represented by A → 1 and id A → A. So, if is an algebraic category, then an object is simple if and only if it has exactly two congruences. To motivate the hypotheses of the following lemma observe that for every object A in , A is terminal or 0 → A is monic. Similarly for and for K/ with K a field. In contrast, that is not the case in . If for every object D of , D is terminal or 0 → D is monic, then for every epic 0 → A the following hold. * A is simple or terminal. * If m B → A is monic then B + B is simple or terminal. To prove the first item let d A → D be a regular epimorphism. Then D is terminal or 0 → D is monic by hypothesis. If 0 → D is monic then d is an isomorphism by Lemma <ref>. So the only possible quotients of A are A → 1 or id A → A. So A is terminal or simple. To prove the second item first recall that epimorphisms are closed under coproduct. Then recall that, as monomorphisms are stable by hypotheses, they are closed under finite coproducts. Therefore, m + m B + B → A + A is a monomorphism and 0 = 0 + 0 → A + A is epic. So, by Lemma <ref>, 0→ B + B is also epic. The first item implies that B + B is simple or terminal. The material in this section applies to the case =, so we may now prove our first MV-algebraic result. For the proof we require a standard fact from the theory of MV-algebras and lattice-groups, which will also find further application later in this paper. An ideal of the MV-algebra A is maximal if it is proper, and inclusion-maximal amongst proper ideals of A; equivalently, the quotient A/ is a simple algebra. For every MV-algebra A, and for every maximal ideal of A, there is exactly one homomorphism of MV-algebras _A⟶ [0,1], and this homomorphism is injective. 
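Simplicity — having exactly two quotients, equivalently exactly two ideals — can likewise be tested exhaustively for small finite MV-algebras, and doing so makes the contrast between chains and products used in the surrounding arguments concrete. The brute-force sketch below is an illustration on our part, not part of the paper; it enumerates all ideals (subsets containing 0, downward closed, and closed under ⊕) of a finite Łukasiewicz chain and of a product of two chains.

# Brute-force illustration (not from the paper): ideals of small finite MV-algebras.
from fractions import Fraction
from itertools import chain as iterchain, combinations, product

ONE, ZERO = Fraction(1), Fraction(0)

def luka(n):                                    # Ł_n, with elements kept as 1-tuples
    return [(Fraction(k, n),) for k in range(n + 1)]

def prod_alg(A, B):                             # direct product; elements are concatenated tuples
    return [a + b for a, b in product(A, B)]

def oplus(a, b): return tuple(min(ONE, x + y) for x, y in zip(a, b))
def leq(a, b):   return all(x <= y for x, y in zip(a, b))

def is_ideal(A, subset):
    S = set(subset)
    if tuple(ZERO for _ in A[0]) not in S:
        return False
    downward = all(b in S for a in S for b in A if leq(b, a))
    additive = all(oplus(a, b) in S for a in S for b in S)
    return downward and additive

def ideals(A):
    powerset = iterchain.from_iterable(combinations(A, k) for k in range(len(A) + 1))
    return [set(S) for S in powerset if is_ideal(A, S)]

print(len(ideals(luka(5))))                     # 2: the chain Ł_5 is simple
print(len(ideals(prod_alg(luka(2), luka(3)))))  # 4: the product Ł_2 × Ł_3 is not simple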
In connection with the result that follows, let us explicitly recall that the initial object 0 in is the two-element Boolean algebra {0,1}. For any MV-algebra A the following are equivalent. * A is a subalgebra of [0,1]∩ℚ. * A is non-trivial and the unique map 0 → A is epic. * The unique map 0 → A is monic and epic. * A is simple and 0 → A is epic. If A ⊆ [0,1]∩ℚ then A is certainly non-trivial, and <cit.> shows that the coproduct inclusions in_0, in_1 A → A + A are equal. So 0 → A is epic by Lemma <ref>. The second and third items are clearly equivalent, and they imply the fourth by Lemma <ref>. Finally, assume that A is simple and that 0 → A is epic. By Hölder's Theorem (Lemma <ref>) together with simplicity, there is exactly one monomorphism A→ [0,1]. Now let r ∈ A and write ι A_r → A for the subalgebra of A generated by r. As A_r is not trivial (and 0 → A is epic), Lemma <ref> implies that A_r + A_r is simple. Hence, by the computation in <cit.>, r must be rational. § THE Π_0 FUNCTOR FOR TOPOLOGICAL SPACES In this section we show that the full inclusion → of the category of Stone spaces into that of compact Hausdorff spaces has a left adjoint π_0→ that preserves set-indexed products. The result just stated may be concisely referenced as follows. As pointed out to us by Luca Reggio, reflectivity of the inclusion → is well known: it flows readily from the universal property of the quotient topology, and is discussed in <cit.> together with the fact that the reflection has “stable units”, so that the left adjoint preserves finite products. (Reggio also indicated that reflectivity is discussed in <cit.> as a consequence of the general theory of regular and exact completions.) Moreover, Dirk Hofmann observed that, since Gabriel and Ulmer in <cit.> show that the left adjoint π_0→ preserves cofiltered limits, π_0 preserves all products. We give here a different proof that emphasises the key rôle of totally disconnected spaces in the general case. We first obtain a product-preserving left adjoint to the full inclusion of the category of totally disconnected topological spaces into . We then show how to restrict this left adjoint to the categories of interest to us in the present paper. We begin by recalling some relevant definitions and facts. A topological space X is connected if it is so in the sense of Definition <ref>. A subset of a space is clopen if it is both closed and open. Then, a space X is connected if and only if it contains exactly two clopen sets, which are then necessarily ∅ and X. Equivalently <cit.>, X is connected if whenever X=A∪ B with A∩ B=∅ and A and B closed subsets of X, then exactly one of A and B is empty. 
If X is a space and x∈ X, the component of x in X, written C_x (with X understood), is defined as C_x⋃{C X| x ∈ X and C is connected}⊆ X. It can be shown that C_x is a connected subspace of X <cit.>, and it therefore is the inclusion-largest such to which x belongs. Also, C_x is closed in X <cit.>. A topological space X is totally disconnected if for each x∈ X we have C_x={x}. Consider the equivalence relation on X given by x∼ y if, and only if, C_x=C_y, and define π_0XX/∼. We equip π_0X with the quotient topology, and call it the space of components of X. We write q X ⟶π_0X for the quotient map. For every continuous map f X→ Y between topological spaces there is exactly one map such that the square below commutes. X [d]_-f[r] [d]^-π_0fπ_0X Y[r] π_0Y We first show that f X→ Y preserves the equivalence relation ∼ in (<ref>). Given x,x' ∈ X, suppose x∼ x', so that C_x=C_y C. Since continuous maps preserve connectedness <cit.>, f[C] is a connected subset of Y that contains both fx and fx'. Hence f[C] C_fx∩ C_fx', which entails C_fx=C_fy. This completes the proof that f preserves ∼. Existence and uniqueness of π_0 f follow from the universal property of the quotient X →π_0 X. Lemma <ref> implies that the assignment that sends f to π_0f extends to an endofunctor π_0⟶. This endofunctor determines the full subcategory , as we now show. If C ⊆π_0 X is a connected subspace then so is q^-1 [C] ⊆ X. Let q^-1[C]=F_1∪ F_2 with F_1 and F_2 disjoint closed subsets of X. For any y ∈ C we can write the fibre q^-1[{y}] as C_x for any x∈ q^-1[{y}]. Further, we can express C_x as the disjoint union C_x=(F_1∩ C_x)∪ (F_2∩ C_x). And C_x is closed and connected, because it is a component. Hence exactly one of q^-1[{y}]=C_x F_1 or q^-1[{y}]=C_x F_2 holds, for each y ∈ C. We can then define S_i{ y ∈ C| q^-1[{y}] F_i}, i=1,2, to the effect that C=S_1∪ S_2 and S_1∩ S_2 =∅. By construction we have F_i=q^-1[S_i], i=1,2. The definition of quotient topology then entails that S_i is closed because F_i is. Since C is connected, exactly one of S_1 and S_2 is empty, and hence so is exactly one of F_1 and F_2. For any space X, the quotient map q X →π_0X in (<ref>) is universal from X to the full inclusion →. We first show that π_0 X is totally disconnected. Let C_y be the component of y ∈π_0 X, with the intent of showing it is a singleton. By Lemma <ref>, since C_y is connected in π_0 X, so is q^-1[C_y] connected in X. Therefore q^-1[C_y] is contained in the component C_x of any x∈ X with x∈ q^-1[C_y]; and thus, the direct image q[q^-1[C_y]] is contained in q[C_x]={y}. Since q[q^-1[C_y]]=C_y, because q is surjective, we conclude C_y{y}, as was to be shown. Let f X→ Y be a continuous map, with Y totally disconnected. We already know from the proof of Lemma <ref> that f preserves ∼ so, as Y is totally disconnected, x ∼ x' in X implies f x = f x' in Y. The universal property of the quotient q X →π_0 X implies the existence of a unique g π_0 X → Y such that g q = f. We conclude that the full inclusion → has a left adjoint that, with no risk of confusion, will again be denoted by π_0 →. The functor π_0 → preserves all set-indexed products. Consider a family (X_s | s ∈ S) of spaces in indexed by a set S and let γπ_0 ∏_s∈ S X_s ⟶∏_s∈ Sπ_0 X_s be the unique map such that the triangle below commutes π_0 ( ∏_s∈ S X_s) [rd]_-π_0 _s[r]^-γ ∏_s∈ Sπ_0 X_s [d]^-_s π_0 X_s for every s ∈ S. In other words, γ ( C( x_s | s∈ S )) = (C x_s | s∈ S) ∈∏_s∈ Sπ_0 X_s for any ( x_s | s∈ S ) in ∏_s∈ S X_s. 
To prove that γ is injective assume that γ ( q ( x_s | s∈ S )) =γ ( q ( y_s | s∈ S )) in ∏_s∈ Sπ_0 X_s. That is, q x_s = q y_s in π_0 X_s for every s ∈ S. By <cit.> we have q ( x_s | s∈ S ) = q ( y_s | s∈ S ) in π_0 ( ∏_s∈ S X_s), so γ is injective. To prove that γ is surjective observe that the following diagram commutes [ld]_-q∏_s∈ S X_s [d]^-∏_s∈ S q[r]^-_s X_s [d]^-q π_0 ( ∏_s∈ S X_s) [r]_-γ ∏_s∈ Sπ_0 X_s [r]_-_s π_0 X_s for every s∈ S, so the inner triangle commutes. As products of surjections are surjective, the inner vertical map is surjective and hence so is γ, the bottom map of the triangle. We next identify a related construction which will provide a useful alternative description of π_0 when restricted to . Let us write (X,) for the set of continuous maps from the space X to the discrete two-point space {0,1}. There is a canonical continuous function E = ⟨ f| f ∈(X,)⟩ X⟶^(X,), x⟼ ( f x | f∈(X,) ). For any subset S X, write χ_S X→ for the characteristic function defined by χ_S x=1 if, and only if, x∈ S. Then S is clopen precisely when χ_S∈(X,). Thus, E in (<ref>) can equivalently be described as the function that sends each point x ∈ X to the set of clopen subsets of X that contain x. In order to prove the next lemma recall <cit.> that the quasi-component of x ∈ X is defined as C_x⋂{S X| S is clopen, and x ∈ S}. It is clear that the quasi-components of a space X partition X into closed non-empty sets. The relation between E and quasi-components may be stated as follows. For any x, x' ∈ X, E x = E x' if and only if C_x = C_x'. If E x = E x' then clearly C_x = C_x'. For the converse assume that C_x = C_x' and let S ⊆ X be a clopen containing x. Then x' ∈C_x' = C_x ⊆ S. That is, x' ∈ S. The reader should beware that the quasi-component C_x of x∈ X in general fails to be connected. Indeed, the inclusion C_xC_x always holds for each x∈ X <cit.>, and may be proper <cit.>. However: For any X there exists a unique E' π_0 X →^(X,) such that the following diagram X @(d,l)[rd]_-E[r]^-q π_0 X [d]^-E' ^(X,) commutes. Let x, x' ∈ X be such that x ∼ x'; that is, C_x = C_x'. Then x ∈ C_x ∩ C_x'⊆C_x ∩C_x' so, as quasi-components are equal or disjoint, C_x = C_x'. That is, E x = E x' by Lemma <ref>. Let X [r]^-D π'_0 X [r]^-m ^(X,) be the epi/regular-mono factorization of the canonical map E in (<ref>). Then the following square commutes X [d]_-D[r]^-q [d] π_0X @.>[ld]|-c[d]^-E' π'_0X[r]_-m ^(X,) by Lemma <ref> and, as q is regular-epi and m is monic, there is exactly one continuous map cπ_0(X)→π'_0(X) making the inner-triangles commute. Since D is epic, so is c. Also, since m is a regular mono, π_0'X carries the subspace topology inherited from the product ^(X,) and, as the latter is a Stone space, π_0'X is Hausdorff. If X is compact Hausdorff then c π_0 X →π_0' X is an isomorphism and these isomorphic spaces are Stone spaces. First recall <cit.> that, in any compact Hausdorff space X, the equality C_x=C_x holds for each x∈ X. In other words, in this case, the function π_0 X →π_0' X is bijective. Also, since X is compact, so is π_0 X because q is surjective. Hence, as we already know that π_0' X is Hausdorff, the Closed Map Lemma implies that c is an isomorphism. Similarly, compactness of X implies compactness of π_0' X and hence, the Closed Map Lemma implies that m is closed. Therefore, π_0'X is a closed subspace of the Stone space ^(X,). It is classical that each Stone space is totally disconnected, so there is a full inclusion → such that the following diagram [d] [r] [d] [r] commutes. 
Lemma <ref> implies that the composite [r] [r]^-π_0 factors through the full inclusion →. The factorization will be conveniently denoted by π_0 →. The functor π_0→ is left adjoint to the full inclusion →, and preserves all set-indexed products. Since, as observed above, π_0→ restricts to π_0→, the fact that the former is a left adjoint to → (Lemma <ref>) restricts to the fact that π_0→ is left adjoint to →. It is standard that products in and in agree with products in (using, in particular, Tychonoff's Theorem that any product of compact spaces is compact), so Proposition <ref> entails that π_0→ preserves all set-indexed products. § SPECTRA OF MV-ALGEBRAS In this section we recall the material about spectra of MV-algebras that is needed in the sequel. Recall that an ideal of an MV-algebra A is prime if it is proper, and the quotient A/ is totally ordered. The (prime) spectrum of an MV-algebra A is A{⊆ A| is a prime ideal of A} topologised into the spectral space of A, as follows. For a subset S A, define (S) {∈A| S⊆}, (S) A∖(S)={∈A| S⊈}. The set (S) is called the vanishing locus, or zero set, of S, while (S) is called its support. If a ∈ A, write (a) as a shorthand for ({a}), and similarly for (a). Then the collection {(I)| I is an ideal of A} is the set of closed sets for a topology on A that makes the latter a spectral space in the sense of Hochster <cit.>. The collection {(a)| a∈ A} is a basis of compact open sets for this topology; see <cit.> and <cit.>. The topology is variously known as the Stone, Zariski, or hull-kernel topology of A. The assignment A ↦A extends to a functor →, because inverse images of primes ideals along homomorphisms are prime. Althouh it is common to take the codomain of as the category of spectral spaces and spectral maps, for our purposes in this paper it is expedient to regard as taking values in . The maximal spectrum of an MV-algebra A is A{⊆ A| is a maximal ideal of A}. We have AA, or equivalently, any simple MV-algebra is totally ordered (see e.g. <cit.>). The maximal spectral space of A is the set A equipped with the subspace topology it inherits from A. Then A is a compact Hausdorff space <cit.>, and every compact Hausdorff space arises in this manner from some MV-algebra A <cit.>. The standard example of MV-algebra, the interval [0,1] equipped with the constant 0 and the operations ⊕, , generalises as follows. If X is any set, the collection [0,1]^X of all functions from X to [0,1] inherits the structure of an MV-algebra upon defining operations pointwise. If, additionally, X is a topological space, since ⊕ [0,1]^2→ [0,1], [0,1]→[0,1], and 0 are continuous with respect to the Euclidean topology of [0,1], the subset (X){f X→ [0,1]| f is continuous} is a subalgebra of the MV-algebra [0,1]^X. We shall describe a natural MV-ho­mo­mor­phism η_A A ⟶(A), for each MV-algebra A. Its existence descends from Hölder's Theorem (Lemma <ref>), which allows us to define a close analogue to the Gelfand transform in functional analysis. Indeed, in light of that result, to a∈ A and ∈A we associate the real number _(a / )∈ [0,1], obtaining the function aA ⟶ [0,1] ⟼ h_(a). It can be shown <cit.> that the function (<ref>) is continuous with respect to the Stone topology of A and the Euclidean topology of [0,1]. We thereby arrive at the announced homomorphism η_A A ⟶(A) a ⟼a for each MV-algebra A. For any MV-homomorphism h A→ B and any ∈B we have h^-1()∈A. Moreover, the inverse-image map h^-1B→A is continuous with respect to the Stone topology. 
The first assertion is proved in <cit.>. The second assertion is a straightforward verification using the definition of Stone topology. In light of Lemma <ref> we henceforth regard as a functor: ⟶^ op, where denotes the category of compact Hausdorff spaces and their continuous maps. Given a continuous map f X → Y in , it is elementary that the induced function (f)(Y) ⟶(X), g∈(Y) ⟼ g∘ f ∈(X) is a morphism in . We therefore regard as a functor: ^ op⟶. There is an adjunction ⊣^ op→ known as the Cignoli-Dubuc-Mundici adjunction <cit.>; see <cit.> for further references and details not mentioned below. Dually to (<ref>), for any space X in there is a continuous map ϵ_X X ⟶(X) x ⟼{f∈(X)| f(x)=0}, and it is a standard fact that ϵ_X is a homeomorphism. (Compare <cit.>.) Writing 𝕀 C for the identity functor on a category C, we can summarise the adjunction as follows. The functor is left adjoint to the fully faithful , i.e. ⊣^ op→. The unit and the counit of the adjunction are the natural transformations η𝕀→ and ϵ→𝕀^ op whose components are given by (<ref>) and (<ref>), respectively. § THE PIERCE FUNCTOR PRESERVES COPRODUCTS The category of Boolean algebras may be identified with the domain of the full subcategory → determined by the MV-algebras whose operation ⊕ is idempotent. It is then clear that → is a variety so, in particular, it has a left adjoint. It also has a right adjoint that we now describe. We write A for the collection of all Boolean elements of the MV-algebra A. By <cit.>, A is the largest subalgebra of A that is a Boolean algebra. A homomorphism h A→ B preserves Boolean elements, because the latter are defined by equational conditions. Therefore, h induces by restriction a function hA→B that is evidently a homomorphism of Boolean algebras. We thus obtain a functor ⟶ from the category of MV-algebras to that of Boolean algebras; we call it the Pierce functor because of the close analogy with the theory developed in <cit.> for rings. The functor is right adjoint to the functor . This is a direct consequence of the fact that A is the largest Boolean subalgebra of A, for any MV-algebra A. The proof of Proposition <ref>—in particular, Lemma <ref>—makes it clear that → is essentially the `complemented subobjects' functor determined by the extensive category . We now embark on the proof of the central fact that → preserves coproducts. Our aim is to reduce the problem to a situation where we can apply the topological results in Section <ref>. For any MV-algebra A and any element a∈ A, a is Boolean if, and only if, for each prime ideal of A, we have a/∈{0,1} A/. Let C be any totally ordered MV-algebra. For x∈ C, either x≤ x or x ≤ x. If the former holds then x∧ x=x, so that if x is Boolean then x=0. If the latter holds then x∨ x=x, and thus x=1 if x is Boolean. In summary, if x∈ C is Boolean then x∈{0,1}. The converse implication is clear. Summing up, the Boolean elements of C are precisely 0 and 1. Boolean elements, being definable by equational conditions, are preserved by homomorphisms. Hence if a is Boolean then a/∈ A/ is Boolean, and therefore, since A/ is totally ordered, a/∈{0,1} by the argument in the preceding paragraph. This proves the left-to-right implication in the statement of the lemma. For the converse implication, we recall that in any MV-algebra A we have ⋂A={0} <cit.>. Hence, the function ι A ⟶∏_∈A A/ defined by a ∈ A ⟼ (a/)_∈∈∏_∈A A/ is an injective homomorphism. Assume that for each ∈A we have a/∈{0,1}. 
Since operations in ∏_∈A A/ are computed pointwise, we infer ι(a)∨ι(a)= (a/)_∈∨ (a/)_∈=1, and therefore, since ι is an isomorphism onto its range, a∨ a=1. This completes the proof. Let A be an MV-algebra, and suppose there exist possibly empty closed subsets X_0,X_1A with A=X_0∪ X_1 and X_0∩ X_1=∅. Then there exists exactly one Boolean element b∈ A such that b/=0 for each ∈ X_0 and b/=1 for each ∈ X_1. By <cit.>, there is exactly one ideal I_i of A such that (I_i)=X_i, i=0,1. Consider the elements 0,1∈ A. The fact that A is partitioned into X_0 and X_i entails I_0∨ I_1=A and I_0∩ I_1={0}, so that the Chinese Remainder Theorem <cit.> applied to 0 and X_0, and to 1 and X_1, yields one element b∈ A such that b/I_0=0 and b/I_1=1. Using the Third Isomorphism Theorem, the latter conditions imply b/∈{0,1} for each ∈A so that by Lemma <ref> we conclude that b is Boolean. If b'∈ A also satisfies b'/=0 for each ∈ X_0 and b'/=1 for each ∈ X_1, then b/=b'/ for ∈A, so that b=b' because ⋂A={0} <cit.>. We record a corollary that will have further use in the paper. It is the exact analogue for MV-algebras of a standard result for the category , see e.g. <cit.>. In order to state it, let us write X for the Boolean algebra of clopen sets of any topological space X. Let us then observe that the uniqueness assertion about the Boolean element b in Lemma <ref> allows us to define, for any MV-algebra A, a function χ_A(A)⟶A that assigns to each X_0∈(A) the unique element b∈A with the properties stated in that lemma with respect to X_0 and X_1A∖ X_0. It is then elementary to verify that χ_A is a homomorphism of Boolean algebras. For any MV-algebra A, the function ϕ_AA ⟶(A) that sends b∈A to (b)(A) is an isomorphism of Boolean algebras whose inverse is the homomorphism χ_A in (<ref>). In particular, A is indecomposable if, and only if, A is connected. By Lemma <ref> it is clear that (b) for each b∈A is clopen and that ϕ_A is a homomorphism. Let us consider b'χ_Aϕ_Ab. For each ∈(b) we have b/=0 by definition of , and b'/=0 by the defining property of b'. Similarly, for each ∈A∖(A) we have b/=b'/=0. Thus, b and b' agree at each prime and thus b=b' because ⋂A={0} <cit.>. Conversely, for X_0∈(A), consider the clopen ϕ_Aχ_AX_0. For ∈A, by definition of χ_A we have ∈ X_0 if, and only if, (χ_AX_0)/=0. Hence ϕ_A (χ_A X_0)=X_0, and the proof is complete. The radical of A is the ideal A⋂A. In accordance with standard terminology in general algebra, one says A is semisimple precisely when A={0}. We note in passing that, unless A is semisimple, the statement in Lemma <ref> cannot be strenghtened to “a is Boolean if, and only if, for each ∈A we have a/∈{0,1} A/”. Let A be an MV-algebra, and suppose there exist possibly empty closed subsets X_0,X_1A with A=X_0∪ X_1 and X_0∩ X_1=∅. Then there exists exactly one Boolean element b∈ A such that b/=0 for each ∈ X_0 and b/=1 for each ∈ X_1. By <cit.>, each ∈A is contained in exactly one λ∈A, so that we can define a function λA ⟶A, ⟼λ. By <cit.>, this function is continuous, and it is a retraction for the inclusion AA. Therefore, X'_0λ^-1[X_0] and X'_1λ^-1[X_1] are closed subsets of A satisfying A=X'_0∪ X'_1 and X'_0∩ X'_1=∅. Now Lemma <ref> provides a unique Boolean element b such that b/=0 for each ∈ X_0', and b/=1 for each ∈ X_1'. As X_i X_i', i=0,1, b satisfies the condition in the statement. Concerning uniqueness, suppose a is a Boolean element of A such that a/=0 for each ∈ X_0, and a/=1 for each ∈ X_1. We claim a=b. Indeed, let ∈ X_i', i=0,1. Then a/λ=i because λ∈ X_i. 
The inclusion λ induces a quotient map q A/→ A/λ. By Lemma <ref> we have a/∈{0,1}. Also, A/λ is nontrivial. Therefore since q(a/)=a/λ=i it follows that a/=i. By the uniqueness assertion in Lemma <ref> we conclude a=b. We observe that the analogue of Lemma <ref> about coproduct decompositions of A being indexed by idempotent elements does not hold in general for rings. Indeed, spectra of MV-algebras always are completely normal—which affords the existence of the map λ used in the proof above—whereas spectra of rings are not, in general. For more on the important rôle that the continuous retraction λ in (<ref>) plays in the theory of lattice-groups and MV-algebras, see <cit.> and the references therein. Our next objective is to show that sends the unit η of ⊣ in (<ref>) to an isomorphism. For any MV-algebra A, the morphism η_A A→ ()A is an isomorphism. Let b'∈(A) be Boolean, with the aim of exhibiting b∈A such that η_A(b)=b'. Evaluating the defining equality b'⊕ b'=b' at each ∈A we see that b'()∈{0,1} holds. Therefore, the two closed subsets X_0 b'^-1[{0}] and X_1 b'^-1[{1}] of A satisfy the hypotheses of Lemma <ref>. We conclude that there exists one Boolean element b∈ A with b/=0 for ∈ X_0 and b/=1 for ∈ X_1. By the definition of η_A this entails at once η_A(b)=b', so η_A is surjective. By the uniqueness statement in Lemma <ref>, η_A is also injective. Our next step will be to factor into a manner that is useful to our purposes. Lemma <ref> implies that the functors → in the diagram below [d]_-[r]^- ^ op[r]_- [u]_- are naturally isomorphic. The functor ^ op→ preserves all set-indexed coproducts. Using Stone duality, it is an exercise to verify that the composite functor ^ op→ induces, by taking opposite categories on each side, a functor naturally isomorphic to the functor π_0→ of Section <ref>. The lemma then follows from Theorem <ref>. We finally obtain the main result of this section. The Pierce functor → preserves all set-indexed coproducts. As we saw above, the triangle (<ref>) commutes up to a natural isomorphism. Further, preserves arbitrary set-indexed colimits because it is left adjoint by Theorem <ref>; and preserves set-indexed coproducts by Lemma <ref>. Hence preserves set-indexed coproducts. § MAIN RESULT, AND FINAL REMARKS Let be a coextensive category. Recall from the introduction that an object A in is separable if A is decidable as an object in the extensive . Thus, A is separable if, and only if, there is a morphism f A + A → A such that the span A [l]_-∇ A + A [r]^-f A is a product diagram. Separable MV-algebras coincide with finite products of subalgebras of [0,1]∩. By Theorem <ref> we have an reflection π_0 ⊣→ such that both adjoints preserve finite products and finite coproducts, so Proposition <ref> implies that every decidable object in is a finite coproduct of subterminal objects. Theorem <ref> completes the proof. We conclude the paper with some final remarks that point to further research aimed at developing an ‘arithmetic connected-component functor’. The guiding result from Algebraic Geometry is this: the category of étale schemes over K is reflective as a subcategory of that of locally algebraic schemes over K <cit.>. The left adjoint there is denoted by π_0, and π_0 X is called the k-schéma des composantes connexes de X in Definition I, 4, 6.6 op. cit. Moreover, it is then proved that π_0 preserves finite coproducts. In terms of extensive categories, this says that for =, the subcategory → has a finite-product preserving left adjoint. 
We announce that the same holds for = _ fp, where _ fp is category of finitely presetable MV-algebras. The proof will be published elsewhere, but it is appropriate to indicate here the rôle of locally finite MV-algebras in connection with that result. An MV-algebra A is locally finite if each finitely generated subalgebra of A is finite. Finite MV-algebras are evidently locally finite; [0,1]∩ is an example of a locally finite MV-algebra that is not finite. Locally finite MV-algebras were studied in <cit.>; see also <cit.> for a generalisation of the results in <cit.>, and <cit.> for further material and <cit.> for recent progress on the topic. The connection with Theorem <ref> is the following characterisation of rational algebras. For any MV-algebra A the following are equivalent. * A is simple and locally finite. * A is a subalgebra of [0,1]∩. (<ref>)⇒(<ref>). By Hölder's Theorem (Lemma <ref>), since A is simple there is exactly one monomorphism A→ [0,1]; let us therefore identify A with a subalgebra of [0,1]. If A contains an irrational number ρ∈ [0,1] then the subalgebra generated by ρ is infinite. Indeed, the Euclidean algorithm of successive subtractions applied to ρ,1∈ does not terminate (because ρ and 1 are incommensurable) and produces an infinite descending sequence of distinct, non-zero elements of A. Thus, A ⊆ [0,1]∩ by local finiteness. (<ref>)⇒(<ref>). Any subalgebra of [0,1] evidently has no proper non-trivial ideal, by the Archimedean property of the real numbers, and is therefore simple. If, moreover, A⊆ [0,1]∩, the subgroup of generated by finitely many a_1,…,a_n∈ A together with 1 is discrete, and therefore by <cit.> the subalgebra generated by a_1,…,a_n is a finite chain. Thus A is locally finite. An MV-algebra A is separable if, and only if, A is locally finite and A is finite. If A is separable then, by Theorem <ref>, A = ∏_i∈ I A_i with I finite and A_i ⊆ [0,1]∩ for each i ∈ I. In particular, A is finite. Also, each A_i is locally finite by Lemma <ref>. As finite products of locally finite algebras are locally finite, A is locally finite. Conversely, assume that A is locally finite and A is finite. Then, A = ∏_i∈ I A_i with I finite and A_i directly indecomposable for each i ∈ I. As locally finite algebras are closed under quotients, each A_i is locally finite. Hence, each A_i is locally finite and indecomposable. But then A must be simple. Indeed, Corollary <ref> entails that A is connected, and A=A by <cit.>. Then the spectral space A is Hausdorff, and thus has a base of clopen sets—hence, being compact, it is a Stone space. Since Stone spaces are totally disconnected, connectedness of A entails that A is a singleton, so A has exactly two ideals, and so is simple. By Lemma <ref>, A is then a subalgebra of [0,1]∩. Therefore, A is separable by Theorem <ref>. Now, let → be the full subcategory determined by locally finite MV-algebras. Let us prove that this subcategory is coreflective. An element a of an MV-algebra A is of finite order-rank[The terminology we introduce here is best motivated using lattice-groups—please see Appendix <ref>.] if the subalgebra B it generates in A is finite. If B is terminal, we say the order-rank of a is zero. Otherwise, there exists exactly one n∈{1,2,…} such that B=C_1×⋯× C_n with each C_i directly indecomposable and non-terminal, and we then say the order-rank of a is n. We set A{a ∈ A | a is of finite order-rank}. Note that AA, because any Boolean algebra is locally finite. 
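As an informal computational illustration of the lemma above and of the notion of finite order-rank, the following Python sketch closes a finite subset of [0,1] under 0, negation and truncated addition; for rational generators the closure is finite, in accordance with the cited result. This is only a brute-force aid under the stated operations, not part of the formal development, and all function names are illustrative.

from fractions import Fraction

def mv_subalgebra(generators):
    # Close a finite subset of [0,1] (as Fractions) under 0, negation
    # x -> 1 - x, and truncated addition x (+) y = min(1, x + y).
    elems = set(generators) | {Fraction(0)}
    while True:
        new = {1 - x for x in elems}
        new |= {min(Fraction(1), x + y) for x in elems for y in elems}
        if new <= elems:
            return sorted(elems)
        elems |= new

# A single rational generator produces a finite chain, as in the lemma above:
print(mv_subalgebra({Fraction(2, 3)}))         # the chain 0 < 1/3 < 2/3 < 1
print(len(mv_subalgebra({Fraction(3, 7)})))    # 8: the full chain with denominator 7
# Finitely many rational generators still give a finite algebra (local finiteness):
print(len(mv_subalgebra({Fraction(1, 4), Fraction(1, 6)})))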
For any MV-algebra A and subset G A, let us write G for the subalgebra of A generated by G. When G={g} we write g for {g}. Any homomorphism of MV-algebras sends elements of finite order-rank to elements of finite order-rank. Let h A→ B be a homomorphism and let a ∈A. Since h commutes with operations, a routine argument in general algebra shows that h[Sa]=(ha); since a is finite, so is (ha). For any MV-algebra A, A is a locally finite subalgebra of A. Further, A is the inclusion-largest locally finite subalgebra of A. Let F{a_1,…,a_n} A be a finite subset of elements of finite order-rank, n≥ 0 an integer. We need to show that the subalgebra F of A generated by F is finite. Induction on n. If n=0 then ∅ is either the terminal one-element algebra or the initial two-element algebra. Now suppose G{a_1,…, a_n-1} is such that G is finite. The subalgebra a_n is also finite, because a_n is of finite order-rank by hypothesis. The subalgebra F is the least upper bound of G and of a_n in the lattice of subalgebras of A, and therefore can be written as a quotient of the coproduct G+a_n. In more detail, by the universal property of the coproduct, the inclusion maps GF and a_nF induce a unique homomorphism hG+a_n→ A whose regular-epi/mono factorisation h=m q is such that m S→ A exhibits the subobject of A that is the join of the subobjects G and a_n—in particular, S is isomorphic to F. So F is a quotient of the algebra G+a_n. Since finite coproducts of finite MV-algebras are finite by <cit.>, G+a_n is finite and therefore so is F. To show that A is a subalgebra of A, first note that clearly 0∈A. If a∈A then a lies in the subalgebra generated by a, which is finite; hence a is of finite order-rank. If a,b ∈A, then a⊕ b lies in the subalgebra generated by {a,b}, which is finite by the argument in the preceding paragraph; hence a⊕ b is of finite order-rank. For the last assertion in the statement, let B be a locally finite subalgebra of A. Given any b ∈ B, the subalgebra generated by b in A is finite, by our assumption about B; hence b is of finite order-rank, and b∈ A. This completes the proof. Lemmas <ref> and <ref> allow us to regard as a functor ⟶. The functor ⟶ is right adjoint to the full inclusion ⟶. This is an immediate consequence of the fact that A is the largest locally finite subalgebra of the MV-algebra A, as proved in Lemma <ref>. It is proved in <cit.> that has all set-indexed products. This follows at once from Corollary <ref>: indeed, for any set-indexed family {A_i}_i ∈ I of locally finite MV-algebras the product of {A_i}_i ∈ I in is the coreflection (∏_i ∈ IA_i) of the product ∏_i ∈ IA_i in . We have been unable to prove that → preserves finite products. However, writing for _ fp, we can show that the functor restricts to a left adjoint π_0 → to the inclusion → and, moreover, it preserves finite products. As mentioned, the proof will appear elsewhere. § SEPARABLE UNITAL LATTICE-ORDERED ABELIAN GROUPS For background on lattice-groups we refer to <cit.>. We recall that a lattice-ordered group, or ℓ-group for short, is a group that is also a lattice[In this appendix, lattices are only required to have binary meets and joins, but not top or bottom elements.] such that the group operation distributes over binary meets and joins. We only consider Abelian ℓ-groups, and thus adopt additive notation. The underlying group of an Abelian ℓ-group is torsion-free, and its underlying lattice is distributive. Write for the category of Abelian ℓ-groups and of lattice-group homomorphisms. 
An element 1∈ G in an Abelian ℓ-group is a (strong order) unit if for each g∈ G there is a natural number n such that n1≥ g. An Abelian ℓ-group G equipped with a distinguished unit 1 is called unital, and denoted (G,1). Write _1 for the category of unital Abelian ℓ-groups and of unit-preserving lattice-group homomorphisms. There is a functor Γ_1→ that acts on objects by sending (G,1) to its unit interval [0,1]{x∈ G| 0≤ x ≤ 1}, and on morphisms by restriction; here, [0,1] is regarded as an MV-algebra under the operations x⊕ y (x+y)∧ 1, x 1-x, and 0. This functor has an adjoint Ξ→_1, and Mundici proved in <cit.> that Γ and Ξ constitute an equivalence of categories. The initial object in _1 is (,1), and the terminal object is the trivial unital ℓ-group ({0=1}, 0). In analogy with the relationship between non-unital and unital rings, the category has a zero object and is not coextensive, while the category _1 is. Separable unital Abelian ℓ-groups are defined as for any coextensive category, cf. the beginning of Section <ref>. An object G of is Archimedean if whenever nx≤ y holds in G for each positive integer n, then x≤ 0; and an object (G,1) of _1 is called Archimedean if G is. The following characterisations hold: (G,1) is Archimedean precisely when Γ(G,1) is semisimple; and (G,1) is totally ordered and Archimedean precisely when Γ(G,1) is simple. Hölder's Theorem for the category _1 may be stated as follows: Any (G,1) that is Archimedean and totally ordered has exactly one morphism to (,1), and that morphism is monic (equivalently, its underlying function is injective). Let us say that an object (G,1) of _1 is rational if it is isomorphic to an ordered subgroup of the additive group containing 1, where the order of G is inherited from the natural order of the rationals. Theorem <ref> may be then formulated for the category _1 as follows. For any unital Abelian ℓ-group (G,1) the following are equivalent. * (G,1) is rational. * (G,1) is non-trivial, and the unique map (,1) → (G,1) is epic. * The unique map (,1) → (G,1) is monic and epic. * (G,1) is totally ordered and Archimedean, and the unique map (,1) → (G,1) is epic. An object (G,1) of _1 is Specker if its unit-interval MV-algebra Γ(G,1) is a Boolean algebra. Write _1 for the full subcategory of _1 on the the Specker objects. The inclusion functor _1→_1 has a right adjoint _1→_1, the Pierce functor for _1, and preserves arbitrary coproducts (Theorem <ref>). Our main result, Theorem <ref>, would be proved for the category _1 using this Pierce functor; it can be phrased as follows. Separable unital Abelian ℓ-groups coincide with finite products of rational unital Abelian ℓ-groups. Products in the category are Cartesian products, because is a variety of algebras. On the other hand, while _1 is equivalent to a variety by Mundici's cited theorem, its underlying-set functor is not right adjoint. Indeed, products in _1 are not, in general, Cartesian products. However, finite products in _1 are Cartesian—the product of (G,1) and (H,1) is (G× H, (1,1)) with the Cartesian projections. An Abelian ℓ-group is called a simplicial group if it is isomorphic in to a free Abelian group of finite rank ^r equipped with the coordinatewise order. A unit in such a simplicial group is then any element 1∈^r whose each coordinate is strictly positive; the pair (^r,1) is called a unital simplicial group. These lattice-groups play a key rôle in the representation theory of dimension groups, see e.g. <cit.>. 
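To illustrate the unit-interval functor Γ on unital simplicial groups, the following Python sketch enumerates the interval [0,u] of the simplicial group of rank r with unit u, equipped with the induced truncated addition and negation; the finiteness of the resulting MV-algebra is visible directly. This is an informal aid only, and the chosen units are illustrative assumptions.

from itertools import product

def gamma_simplicial(u):
    # Unit interval of the simplicial group (Z^r, u), u a tuple of strictly
    # positive integers, with the induced MV-operations computed
    # coordinatewise: x (+) y = (x + y) meet u, and neg x = u - x.
    r = len(u)
    interval = list(product(*(range(c + 1) for c in u)))
    oplus = lambda x, y: tuple(min(x[i] + y[i], u[i]) for i in range(r))
    neg = lambda x: tuple(u[i] - x[i] for i in range(r))
    return interval, oplus, neg

print(len(gamma_simplicial((1,))[0]))     # 2: the two-element Boolean algebra
print(len(gamma_simplicial((1, 1))[0]))   # 4: its square
interval, oplus, neg = gamma_simplicial((3,))
print(interval)                           # the four-element chain [(0,), (1,), (2,), (3,)]
print(oplus((2,), (2,)), neg((1,)))       # (3,) (2,)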
An object (G,1) in _1 is a unital simplicial group exactly when its unit-interval MV-algebra Γ(G,1) is finite. An object (G,1) is locally simplicial if each sublattice subgroup generated by finitely many elements along with 1 is a unital simplicial group. An object (G,1) in _1 is locally simplicial exactly when its unit-interval MV-algebra Γ(G,1) is locally finite. Then: An object (G,1) of _1 is separable just when it is locally simplicial, and (G,1) has finite -module rank[In the literature on lattice-groups, the condition that (G,1) has finite rank is expressed in the following traditional manner: the unit of G has finitely many components.] (Corollary <ref>). Write _1 for the full subcategory of _1 on the locally simplicial objects. The inclusion functor _1→_1 has a right adjoint _1→_1 (Corollary <ref>); that is, every (G,1) has an inclusion-largest locally simplicial unital sublattice subgroup. To prove this in the category _1 one would introduce the notion of element of `finite-order rank' of a unital Abelian ℓ-group. It is this notion that motivates the ter­mi­no­logy we adopted in the context of MV-algebras in Section <ref>; by way of conclusion of this appendix, we offer a short discussion. Let (G,1) be a unital Abelian ℓ-group, let g∈ G, and let H be the sublattice subgroup of G generated by g and by 1. If (H,1) is a unital simplicial group (^r,1)—equivalently, if the MV-algebra Γ(H,1) is finite—then we call g an element of finite order-rank r. This notion of rank crucially depends on the interplay between the lattice and the group structure, and is not reducible to the linear notion of rank. To explain why, let us preliminarly observe that a simplicial group ^r enjoys the finiteness property that its positive cone (^r)^+—that is, the monoid of non-negative elements of ^r—is finitely generated as a monoid. Next, let us point out that the underlying group of the Abelian ℓ-group H generated by g and 1 in G is necessarily free: indeed, any finitely generated object of has free underlying group, as was proved in <cit.>. The -module rank of H is at most countably infinite, because H is countable. But even if we assume the rank of H is finite, the unit-interval Γ(H,1) may be infinite, and in that case the lattice order of ^r≅ H cannot be simplicial—and indeed, one can prove that the monoid H^+ cannot be finitely generated. Hence, the condition that the sublattice subgroup H of G generated by g and 1 is simplicial is strictly stronger than the condition that H has finite -module rank. To illustrate, consider the subgroup H of generated by an irrational number ρ∈ together with 1; then H≅^2 as groups, the total order inherited by ^2 from is palpably not simplicial, the positive cone H^+ can be shown not to be finitely generated by an easy direct argument, and Γ(H,1) is an infinite simple MV-algebra. amsplain
http://arxiv.org/abs/2307.07523v1
20230710110551
PapagAI: Automated Feedback for Reflective Essays
[ "Veronika Solopova", "Adrian Gruszczynski", "Eiad Rostom", "Fritz Cremer", "Sascha Witte", "Chengming Zhang", "Fernando Ramos López Lea Plößl", "Florian Hofmann", "Ralf Romeike", "Michaela Gläser-Zikuda", "Christoph Benzmüller", "Tim Landgraf" ]
cs.AI
[ "cs.AI", "cs.CL" ]
PapagAI: Automated Feedback for Reflective Essays V. Solopova et al. Freie Universität Berlin, Germany Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany Otto-Friedrich-Universität Bamberg, Germany PapagAI: Automated Feedback for Reflective Essays Veronika Solopova10000-0003-0183-9433 Eiad Rostom1 Fritz Cremer1Adrian Gruszczynski1Sascha Witte1Chengming Zhang20009-0007-8695-5455Fernando Ramos López1 Lea Plößl20009-0004-7290-5068Florian Hofmann2Ralf Romeike10000-0002-2941-4288Michaela Gläser-Zikuda20000-0002-3071-2995 Christoph Benzmüller2,30000-0002-3392-3093 Tim Landgraf10000-0003-4951-5235 August 12, 2023 ================================================================================================================================================================================================================================================================================================================================================================= Written reflective practice is a regular exercise pre-service teachers perform during their higher education. Usually, their lecturers are expected to provide individual feedback, which can be a challenging task to perform on a regular basis. In this paper, we present the first open-source automated feedback tool based on didactic theory and implemented as a hybrid AI system. We describe the components and discuss the advantages and disadvantages of our system compared to the state-of-art generative large language models. The main objective of our work is to enable better learning outcomes for students and to complement the teaching activities of lecturers. § INTRODUCTION Dropout rates as high as 83% among pre-service teachers and associated teacher shortages are challenging the German education system <cit.>. This may be due to learning environments not adequately supporting prospective teachers in their learning process <cit.>. Written reflective practice may alleviate the problem: By reflecting on what has been learned and what could be done differently in the future, individuals can identify areas for improvement. However, instructors may be overburdened by giving feedback to 200+ students on a weekly basis. With the rise of large language models (LLMs, <cit.>), automated feedback may provide welcome relief. Students could iteratively improve their reflection based on the assessment of a specialized model and through that, their study performance. Instructors could supervise this process and invest the time saved in improving the curriculum. While current research is seeking solutions to align the responses of LLMs with a given set of rules, it is currently impossible to guarantee an output of a purely learnt model to be correct. Here, we propose “PapagAI", a platform to write reflections and receive feedback from peers, instructors and a specialized chatbot. PapagAI uses a combination of ML and symbolic components, an approach known as hybrid AI <cit.>. Our architecture is based on various natural language understanding modules[All ML models are available in our OSF depository (https://osf.io/ytesn/), while linguistic processing code can be shared upon request.], which serve to create a text and user profile, according to which a rule-based reasoner chooses the appropriate instructions. § RELATED WORK PapagAI employs a number of models for detecting topics contained in -, and assessing the quality and depth of the reflection, as well as for detecting the sentiment and emotions of the author. 
While extensive previous work was published on each of these tasks, implementations in German are rare. To our knowledge, there is no previous work that combined all in one application. Automated detection of reflective sentences and components in a didactic context has been described previously <cit.>. In <cit.>, e.g., the authors analyse the depth of a reflection on the text level according to a three-level scheme (none, shallow, deep). Document-level prediction, however, can only provide coarse-grained feedback. Liu et al. <cit.>, in contrast, also use three levels for predicting reflective depth for each sentence. In emotion detection, all previous works focus on a small set of 4 to 6 basic emotions. In Jena <cit.>, e.g., the author describes detecting students' emotions in a collaborative learning environment. Batbaatar et al. <cit.> describes an emotion model achieving an F1 score of 0.95 for the six basic emotions scheme proposed by Ekman <cit.>. Chiorrini et al. <cit.> use a pre-trained BERT to detect four basic emotions and their intensity from tweets, achieving an F1 score of 0.91. We did not find published work on the German language, except for Cevher et al. <cit.>, who focused on newspaper headlines. With regard to sentiment polarity, several annotated corpora were developed for German <cit.>, mainly containing tweets. Guhr et al. <cit.> use these corpora to fine-tune a BERT model. Shashkov et el. <cit.> employ sentiment analysis and topic modelling to relate student sentiment to particular topics in English. Identifying topics in reflective student writing is studied by Chen et al. <cit.> using the MALLET toolkit <cit.> and by De Lin et al. <cit.> with Word2Vec + K-Means clustering. The techniques in these studies are less robust than the current state-of-art, such as ParlBERT-Topic-German <cit.> and Bertopic <cit.>. Overall, published work on automated feedback to student reflections is scarce, the closest and most accomplished work being AcaWriter <cit.> and works by Liu and Shum <cit.>. They use linguistic techniques to identify sentences that communicate a specific rhetorical function. They also implement a 5-level reflection depth scheme and extract parts of text describing the context, challenge and change. The feedback guides the students to the next level of reflective depth with a limited number of questions. In their user study, 85.7% of students perceived the tool positively. However, the impact on the reflection quality over time was not measured and remains unclear. § METHODS, COMPONENTS AND PERFORMANCES Data collection. Our data comes from the German Reflective Corpus <cit.>. The dataset contains reflective essays collected via google-forms from computer science and ethics of AI students in German, as well as e-portfolio diaries describing school placements of teacher trainees from Dundee University. For such tasks as reflective level identification and topic modelling, we enlarged it by computer science education students' essays and pedagogy students' reflections[This still non-published data can be obtained upon request.]. It consists of reflections written by computer science, computer science education, didactics and ethics of AI students in German and English. Data is highly varied, as didactics students write longer and deeper reflections than e.g. their computer science peers. Emotions detection. 
Setting out from the Plutchik wheel of basic emotions <cit.>, during the annotation process we realised that many of the basic emotions are never used, while other states are relevant to our data and the educational context (e.g. confidence, motivation). We framed it as a multi-label classification problem at the sentence level. We annotated 6543 sentences with 4 annotators. The final number of labels is 17 emotions, with the 18th label being 'no-emotion'. We calculated the loss using binary cross entropy, where each label is treated as a binary classification problem, the loss is calculated for each label independently, which we sum for the total loss. We achieved the best results with a pre-trained RoBERTa <cit.> , with a micro F1 of 0.70 and a hamming score of 0.67 across all emotion labels. The model achieved the highest scores for “surprise”, “approval” and “interest”. With a lenient hamming score, accounting for the model choosing similar emotions (e.g. disappointment instead of disapproval) our model achieves up to 0.73. Gibbs cycle. <cit.> illustrates cognitive stages needed for optimal reflective results. It includes 6 phases: description, feelings, evaluation, analysis, conclusion and future plans. We annotated the highest phase present in a sentence and all the phases present. We treated this as a multi-class classification problem and used a pre-trained ELECTRA model. While evaluating, we compared one-hot prediction to the highest phase present and 3 top probability classes with all the phases present. While one-hot matching only managed to score 65% F1 macro, the top 3 predictions achieve up to 98% F1 macro and micro. Reflective level detection. Under the supervision of Didactics specialists two annotators labelled 600 texts according to Fleck & Fitzpatrick's scheme <cit.>, achieving moderate inter-annotators agreement of 0.68. The coding scheme includes 5 levels: description, reflective description, dialogical reflection, transformative reflection and critical reflection; With 70% of the data used for the training and 30% for evaluation, we used pre-trained BERT large and complete document embeddings for the English and German, resulting in QWK score of 0.71 in cross-validation. Topic modelling. We used BERTopic <cit.> on the sentence level. First, we tokenized and normalize the input sequence to lowercase and filter out numbers, punctuation, and stop-words using nltk library <cit.>. Then, we extract embeddings with BERT, reduce dimensionalities with UMAP, cluster reduced embeddings with HDBSCAN, create topic representation with tfidf and fine-tune topic representations with the BERT model. Because we have a lot of data of different origins, we created two clusterings, one more specific to the pedagogy topic and one including various educational topics. You can see our clusters in App. Linguistic scoring. Using spacy[https://spacy.io] we tokenized, and lemmatize the sentences, extracted dependencies parcing and part of speech. Additionally, we used RFTagger<cit.> for parts of speech and types of verbs. We extract sentence length, adverb for verb ratio, adjective for noun ratio, number of simple and complex sentences, types of subordinated clauses and number of discourse connectors[We use Connective-Lex list for German: https://doi.org/10.4000/discours.10098.] used. This information enables us to determine the reflection length, expressivity and variability of the language, as well as surface coherence and structure. § SYSTEM ARCHITECTURE In PapagAI (see Fig. 
<ref>) the input text of the reflection is received from the AWS server through a WebSocket listener script. To minimize the response time, the models are loaded in the listener script once and then the user request spawn threads with the models already loaded. If the input text is smaller than three sentences and contains forbidden sequences, the processing does not start and the user receives a request to revise their input. Otherwise, the text is segmented into sentences and tokens. The language is identified using langid <cit.> and if the text is not in German, it is translated using Google translator API implementation.[https://pypi.org/project/deep-translator/] The reflective level model receives the whole text, while other models are fed with the segmented sentences. Topic modelling and Gibbs cycle results are mapped, to identify if topics were well reflected upon. If more than three sentences are allocated to the topic and these sentences were identified by the Gibbs cycle model as analysis, we consider the topics well thought through. The extracted features are then passed to the feedback module. Here, the lacking and under-represented elements are identified in linguistic features and the three least present Gibbs cycle stages. If sentiment and emotions are all positive we conclude that no potential challenges and problems are thought through. If the sentiment and emotions are all negative, we want to induce optimism. These features together with the reflective level are mapped to the database of potential prompts and questions, where one of the suitable feedback options is chosen randomly for the sake of variability. Using manually corrected Gpt-3 outputs, for each prompt we created variations so that the feedback does not repeat often even if the same prompts are required.The extracted textual prompts are built together in a rule-based way into the template, prepared for German, Spanish and English. Otherwise, the overall feedback is made in German and then translated into the input language. The textual and a vector of extracted features for visual representation are sent back to the AWS server. The whole processing takes from 15 to 30 seconds based on the length of the text. Sample feedback can be seen in Figure <ref>. § COMPARISON WITH GPT-3 We compared our emotions detection (fine-tuned RoBERTa) and Gibbs cycle model (fine-tuned Electra) with the prompt-engineered state-of-the-art generative model Davinci <cit.> on the same task. For the evaluation and comparison, we used a small subset of 262 samples which were not part of the training. We first tried the zero-shot approach, where we described our labels to GPT-3 and gave it our sentence to predict. Then, we tried a one-shot approach, providing GPT-3 with one example sentence for each label. Finally, in the few-shot approach, we provided GPT-3 with three examples per label, which is the maximum number of examples possible due to the input sequence length restriction. Although the task requested GPT-3 to pick multiple labels out of the possible options, the model predicted multiple labels only in 5% of the cases for emotions. For this reason, we used the custom defined “one correct label”: the score considers the prediction correct if it contains at least one correct label from the sentence's true labels. The zero-shot approach achieved only 0.28 accuracy in predicting one correct label for emotions. The model predicted the labels “information”, “uncertainty”, “interest”, and “motivated” for the majority of the sentences. 
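For clarity, the lenient matching scheme just described, together with one common definition of the hamming score reported for the emotion model, can be sketched in Python as follows. This is an illustrative sketch rather than the exact evaluation code used in our experiments, and the example label sets are made up for demonstration.

def one_correct_label(pred_labels, true_labels):
    # Fraction of samples whose predicted label set contains at least one
    # of the sample's true labels (the lenient score used above).
    hits = sum(bool(set(p) & set(t)) for p, t in zip(pred_labels, true_labels))
    return hits / len(true_labels)

def hamming_score(pred_labels, true_labels):
    # One common multi-label accuracy: mean |P & T| / |P | T| over samples.
    scores = []
    for p, t in zip(pred_labels, true_labels):
        p, t = set(p), set(t)
        scores.append(1.0 if not (p | t) else len(p & t) / len(p | t))
    return sum(scores) / len(scores)

preds = [["interest"], ["uncertainty", "surprise"], ["motivated"]]
golds = [["interest", "approval"], ["no-emotion"], ["motivated"]]
print(round(one_correct_label(preds, golds), 2))  # 0.67: two of three predictions hit
print(round(hamming_score(preds, golds), 2))      # 0.44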
With the Gibbs cycle task, it achieved 80% correct predictions. Providing one example per label improved the performance noticeably by 18% (0.46) for emotions, and the model was able to detect emotions like “confidence”, “challenged”, and “approval” more accurately. It did not influence Gibb's cycle performance. Increasing the number of examples to three resulted in a slight improvement of 3% (0.49) for emotions, and 7% (0.87) for the Gibbs cycle. However, the best-scoring approaches did not offer a comparable performance to our fine-tuned models on these specific tasks with 0.81 on the same custom metric for emotion detection and 0.98 for the Gibbs cycle. § DISCUSSION AND CONCLUSION The current PapagAI system has several advantages in comparison to generative LLMs. It ensures transparency of the evaluation and control over the output, which is based exclusively on didactic theory. Although LLMs show huge promise, they are still prone to hallucination <cit.>, and, as we have shown in <ref>, they may under-perform on difficult cognitive tasks in comparison to smaller language models fine-tuned for the task. The fine-tuning of LLMs to didactic books and instructions, which we plan for our future work, still does not guarantee 100% theoretical soundness of the output, which is problematic e.g. in the case of pre-service students with statistically low AI acceptance. At the same time, the newest models, such as GPT-4, are only available through APIs, which raises concerns about data privacy, especially as the data in focus is an intimate reflective diary. Moreover, current open-source models, such as GPT-J and GPT-2, especially for languages other than English do not draw comparable results. Our architecture has, however, obvious drawbacks. On the one hand, our models do not reach 100% accuracy and this can naturally lead to suboptimal feedback. The processing time for many models, especially for longer texts, can be significantly higher than for a single generative LLM. For now, as we provide one feedback message for one rather long reflection, this is not a big issue, however, if we implement a dialogue form, the time of response would not feel natural. Finally, the variability of output using our approach is much more limited in comparison to generative models. We try to address it by creating many similar versions of instructions rephrased by GPT-3, and corrected manually. On average 7 out of 10 prompts needed some correction. Most of the errors were related to GPT-3 trying to rephrase the given sentence using synonyms that were not didactically appropriate in the given context. Future work, among others, will focus on user studies to understand how we can optimize the feedback, so that the users find it credible and useful, while their reflective skills advance. We also plan a more detailed evaluation based on more user data. We hope that our work will contribute to the optimization of the pre-service teachers' reflective practice and self-guided learning experience. splncs04 § APPENDIXES
http://arxiv.org/abs/2307.03998v1
20230708154349
Lightweight Improved Residual Network for Efficient Inverse Tone Mapping
[ "Liqi Xue", "Tianyi Xu", "Yongbao Song", "Yan Liu", "Lei Zhang", "Xiantong Zhen", "Jun Xu" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Lightweight Improved Residual Network for Efficient Inverse Tone Mapping Liqi Xue, Tianyi Xu, Yongbao Song, Yan Liu, Lei Zhang, Xiantong Zhen, and Jun Xu This work was sponsored by the National Natural Science Foundation of China (No. 62002176, 62176068, and 12101334), CAAI-Huawei MindSpore Open Fund, the Natural Science Foundation of Tianjin (No. 21JCQNJC00030), and the Fundamental Research Funds for the Central Universities. Corresponding author: Xiantong Zhen ([email protected]) and Jun Xu ([email protected]). Liqi Xue, Tianyi Xu, Yan Liu, and Jun Xu are with the School of Statistics and Data Science, Nankai University, Tianjin 300071, China. Yongbao Song is with the School of Mathematical Science, Nankai University, Tianijn 300071, China. Lei Zhang and Xiantong Zhen are with the Computer Science College, Guangdong University of Petrochemical Technology, Maoming 525000, China. August 12, 2023 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The display devices like HDR10 televisions are increasingly prevalent in our daily life for visualizing high dynamic range (HDR) images. But the majority of media images on the internet remain in 8-bit standard dynamic range (SDR) format. Therefore, converting SDR images to HDR ones by inverse tone mapping (ITM) is crucial to unlock the full potential of abundant media images. However, existing ITM methods are usually developed with complex network architectures requiring huge computational costs. In this paper, we propose a lightweight Improved Residual Network (IRNet) by enhancing the power of popular residual block for efficient ITM. Specifically, we propose a new Improved Residual Block (IRB) to extract and fuse multi-layer features for fine-grained HDR image reconstruction. Experiments on three benchmark datasets demonstrate that our IRNet achieves state-of-the-art performance on both the ITM and joint SR-ITM tasks. The code, models and data will be publicly available at <https://github.com/ThisisVikki/ITM-baseline>. Inverse tone mapping, improved residual block, lightweight network, inference efficiency. § INTRODUCTION High dynamic range (HDR) images defined in Rec.2020 <cit.> exhibit clearer details in highlights and shadows, as well as smoother transitions on brightness and color, than the standard dynamic range (SDR) images with 8-bit color depth defined in Rec.709 <cit.>. Owing to these benefits, the manufacturers of television and mobile devices make a push to bring HDR contents to demanding consumers. Though HDR display devices allow more visually-pleasing contents in HDR images by Dolby Vision, HDR10, and HDR10+ technologies <cit.>, the SDR images in 8 bit-depth would be featureless when being directly broadcast on HDR display devices <cit.>. 
To present the SDR images closer to human perception on HDR display devices, it is essential to convert SDR images into comfortable HDR ones without color or information loss. This challenging problem is known as inverse tone mapping (ITM) <cit.>, which has been studied in a more general sense rather than expanding the luminance range of camera raw image files in linear color space <cit.>. Early image ITM methods mainly resort to global or local image processing operators for promising performance. Global ITM operators <cit.> usually utilize reverse tone mapping functions to extend the dynamic range of image pixels. But this would bring distorted details and uneven transitions between neighborhood pixels in different levels of brightness. Local ITM operators <cit.> expand the image bit depth in a spatially-varying manner. Unfortunately, these methods would fail to preserve the global consistency of luminance ranges across an image. Recently, deep neural networks have been employed to tackle the ITM task from a data-driven perspective <cit.>. These networks usually contain strong backbones with complex architectures, which may require huge computational costs for promising ITM performance. Besides, the methods of <cit.> simultaneously tackle the joint image super-resolution (SR) and ITM (joint SR-ITM) tasks by separating the base and detail components from the input image with extra image decomposition <cit.>. However, this would further increase the model complexity and computational costs of current joint SR-ITM methods over previous ITM ones. Despite their promising performance, the above-mentioned ITM methods suffer from two main limitations. Firstly, the complex model architectures obscure the core of the ITM problem, that is, “expanding the luminance range of a low dynamic range image to produce a higher dynamic range image” <cit.>. This problem of extending luminance range or color bit depth is similar to the tasks of image super-resolution <cit.> and video frame prediction <cit.>, all of which aim to increase the highly-correlated information of the input image at different aspects. Therefore, it is possible to tackle the ITM problem by simple and lightweight neural networks, as inspired by the concurrent works in image super-resolution <cit.> and video frame prediction <cit.>. Secondly, the huge computational costs also limits the prevalence of ITM methods from being deployed into edge devices. For example, to perform ITM on a 4K-resolution (3840×2160) image, Deep SR-ITM <cit.> needs 2.50M parameters, ∼1.05×10^4G FLOPs, and a speed of 777.95ms, while HDRTVNet <cit.> needs 37.20M parameters, ∼1.41×10^4G FLOPs, and a speed of 1513.43ms. In this paper, we leverage the popular residual learning recipe <cit.> to develop a simple and lightweight Improved Residual Network (IRNet) for efficient ITM performance. Specifically, we propose an Improved Residual Block (IRB) with simple modifications of the residual block <cit.> for fine-grained feature extraction and fusion. On network design, we also adopt the plain residual learning framework to avoid complex multi-branch architecture <cit.>. Experiments on three benchmark datasets, including our newly collected one, show that our IRNet is very efficient and outperforms ITM methods. As shown in Figure <ref>, our IRNet only needs ∼0.13M parameters and ∼0.22×10^4G FLOPs at a speed of 398.33ms to process a 4K-resolution image, which outperforms state-of-the-art methods on the ITM task. 
On the HDRTV1K dataset <cit.>, our IRNet exceeds AGCM+LE <cit.> on visual quality and PSNR by 0.59dB, but only has one tenth of parameter amounts (∼0.13M v.s. ∼1.41M). Besides, our IRNet also achieves superior performance to Deep SR-ITM <cit.> and JSI-GAN <cit.> on joint SR-ITM. In summary, our main contributions are three-fold: * We develop a lightweight Improved Residual Network (IRNet) for efficient image inverse tone mapping (ITM). Our IRNet is built upon a new Improved Residual Block (IRB) customized from the popular residual block for fine-grained feature extraction and fusion. * We collect a new test set for ITM, , ITM-4K, that has 160 4K-resolution images of versatile scenes with ground truth HDR images. It serves as a good supplementary to HDRTV1K <cit.> that has 117 test images. * Experiments on the HDRTV1K dataset <cit.>, our new ITM-4K test set, and the test set in <cit.> show that our lightweight IRNet is efficient and achieves impressive quantitative and qualitative results on the ITM and joint SR-ITM tasks. Comprehensive ablation studies also validate the effectiveness of our model design. The rest of this paper is organized as follows. In <ref>, we summarize the related work. In <ref>, we present the proposed Improved Residual Network (IRNet). In <ref>, we perform experiments to validate the efficiency of our IRNet on ITM and joint SR-ITM. In <ref>, we conclude this paper. § RELATED WORK §.§ Inverse Tone Mapping The inverse tone mapping (ITM) task aims to transform a standard dynamic range (SDR, usually in 8-bit) image into a high dynamic range (HDR, usually in 16-bit) image. This problem is ill-posed due to the information loss in the luminance ranges of SDR images. Early explorations on the ITM task can be divided into global and local ITM operators. While the global ITM operators equally apply linear expansion <cit.>, cross-bilateral filtering <cit.>, or a gamma-based expansion <cit.> to all the pixels or patches of an input SDR image, the local ITM operators <cit.> reconstruct highlight regions or expand the luminance ranges of each pixel or patch according to the local information around it. Previous works show that global ITM operators <cit.> could avoid undesired artifacts, but result in rough details and unnatural transitions due to the ignorance of local detail reconstruction. On the contrary, local ITM operators <cit.> implemented adaptively on small areas would fail to capture the global consistency of luminance ranges. To deal with the issues of locally undesired artifacts and global luminance consistency raised by the early methods mentioned above, many recent ITM methods <cit.> shift to utilize the advancements of deep convolutional neural networks (CNNs). Early CNN-based methods <cit.> merge low dynamic range (LDR) images captured under multiple exposure settings to produce an SDR image. Meanwhile, the work of <cit.> presents a multi-branch CNN to implement ITM both on global and local perspectives. Then, the method of <cit.> introduces a feature masking strategy to address the problem of undesired artifacts emerged during the image reconstruction. Recently, the physical principle of HDR image formation is also incorporated into the designing of ITM CNNs <cit.>. For example, HDRTVNet <cit.> contains of an adaptive global color mapping network, a local enhancement network, and a highlight generation network. 
Despite their promising performance, most of these methods require huge parameter amounts and computational costs, which hinders them from being deployed into resource-constrained edge devices. In this paper, we aim to develop a lightweight yet efficient ITM network. §.§ Joint Super-Resolution and Inverse Tone Mapping Joint Super-Resolution and Inverse Tone Mapping (joint SR-ITM) aims to simultaneously increase the spatial resolution and dynamic range of an input low-resolution and standard dynamic range (LR-SDR) image. Deep convolutional neural networks have also been applied to tackle the joint SR-ITM task <cit.>. Considering that the luminance ranges of different image areas should be expanded adaptively, the method of <cit.> firstly decomposes an SDR image into a low-frequency structure component and a high-frequency detail component, and then processes the two components by two different but correlated network branches. The separation is implemented by guided-filtering <cit.>, which is widely used in image smoothing <cit.>. This framework is also employed in the subsequent work of JSI-GAN <cit.>. To tackle multi-frame SDR inputs, Lecouat <cit.> reformulated the joint SR-ITM task as an optimization problem to fuse multiple LR-SDR raw image bursts in different exposures into an HR-HDR image. Tan <cit.> developed a two-branch network to fuse a series of LR-LDR dynamic images into an HR-HDR one by estimating the motion cues by a deformable module <cit.>. Though with appealing performance, the image decomposition based methods usually require multi-branch network architectures for the joint SR-ITM task, which, however, usually implies a considerable growth of parameter amounts and computational burden to tackle parallel feature extraction and elaborate interaction. In this paper, we propose a lightweight ITM network for inference efficiency, inspired by the merits of lightweight image super-resolution networks <cit.>. §.§ Efficient Image Restoration For the goal of inference efficiency, network compression and acceleration techniques are exploited to reduce the computational burden and memory consumption of image restoration methods <cit.>. One popular solution is employing Laplacian pyramid <cit.> to decompose the input image into a low-resolution base layer consuming the majority of computations and several high-resolution detail layers requiring a few computations <cit.>. Bilateral grid learning <cit.> is also utilized to learn approximate operators on the downsampled images and execute the learned operators to the original images. Other inference strategies like recursive learning <cit.> and look-up table <cit.> are also exploited to accelerate image restoration networks. Instead of developing new methods, some recent works accelerate existing restoration networks by model slimming <cit.> or input-adaptive inference <cit.>. In this paper, we develop an efficient ITM network that can well process a 4K-resolution SDR image with ∼134K parameters and 0.4 seconds. § PROPOSED METHOD §.§ Motivation In the scene-referred workflow <cit.>, an HDR raw image in camera color space (usually in 16-bit color depth) will be tone mapped to an SDR RGB image in display-referred color space (usually in 8-bit color depth). This process is usually implemented in a camera imaging pipeline containing multiple image processing operations, during which different pixels usually undergo different compression strengths on dynamic ranges to produce visually pleasing image contrasts <cit.>. 
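As a toy numerical illustration of the information loss introduced by such a pipeline, consider a simple global gamma-style tone curve followed by 8-bit quantization and a naive inversion; the reconstruction error concentrates in the highlights. The curve, bit depths and thresholds below are illustrative assumptions only, not the pipeline of any particular camera.

import numpy as np

# Toy scene-referred luminance values in [0, 1].
hdr = np.linspace(0.0, 1.0, 2**16)

# A simple global tone curve (gamma compression) followed by 8-bit quantization.
sdr_code = np.round(255.0 * hdr ** (1.0 / 2.2)).astype(np.uint8)

# A naive inverse tone mapping that merely undoes the curve.
recovered = (sdr_code / 255.0) ** 2.2

# The reconstruction error is much larger in the highlights than in the
# shadows, reflecting the uneven compression of the dynamic range.
err = np.abs(recovered - hdr)
print("mean abs error, darkest 10%:  ", err[hdr < 0.1].mean())
print("mean abs error, brightest 10%:", err[hdr > 0.9].mean())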
The task of inverse tone mapping (ITM) aims to increase the dynamic range of light intensity (or luminance) in an SDR image. An SDR image in 8-bit depth can display a maximum of around 16.7 million shades of color, while an HDR image in 10-bit depth can display a maximum of around 1.07 billion shades of color, allowing it to exhibit more colors with better visual quality <cit.>. To better understand the luminance difference between SDR and HDR images, in Figure <ref> (a), we visualize the maximum and minimum luminance values of 117 HDR images from the test set of <cit.>, as well as the luminance values at the corresponding positions of the paired SDR images. We observe that there are obvious gaps between the maximum values of HDR images and the values at the corresponding positions of the paired SDR images, whilst slight differences between the minimum values of the HDR images and the values at the corresponding positions of the paired SDR images. This indicates that high-luminance values change more greatly than low-luminance ones. Besides, the luminance values of different HDR images also show distinct gaps when compared to those values at the corresponding positions in the paired SDR images. For promising ITM performance, the ITM methods <cit.> suffer from complex network backbones with huge parameter amounts and computational costs. To implement efficient ITM, in this paper, we propose to develop a simple, lightweight, and efficient Improved Residual Network (IRNet) by slightly modifying the residual block <cit.>. As shown in Figure <ref> (b), we concatenate the intermediate feature map F_1 after the LeakyReLU in the residual block with the fused feature of F_in and F_2 (more details will be presented in <ref>). Our IRNet shows clear improvements, especially in the bright area near the sun, over that without using the feature map F_1 on the image “028” from the HDRTV1K test set <cit.>, as shown in Figure <ref> (b). Along the highlighted lines, the green line of our IRNet enjoys closer approximation to the blue line of the “Ground Truth” HDR image than the red line of our IRNet without using the intermediate feature F_1 (denoted as “IRNet w/o F_1”). In Figure <ref> (c), we plot the ratios of luminance values of the highlighted lines by our IRNet and “IRNet w/o F_1”, which also validates that our IRNet achieves better approximation to the “Ground Truth” than the IRNet without F_1. This validates the effectiveness of our IRB over the residual block for ITM. Adaptive luminance extension is also important for the ITM task. For this goal, many joint SR-ITM methods <cit.> performed image or feature decomposition to extract and fuse multi-scale feature maps. However, these ITM networks with decomposition techniques often suffer from complex network structures with heavy computational costs (Table <ref>). For efficiency consideration, we design our IRNet as a simple and lightweight network by employing the popular residual block <cit.> as a proper backbone for our IRNet. The promising results in Figure <ref> (b) by our IRNet without using F_1 motivates us to further improve our IRNet for better ITM performance. §.§ Proposed Improved Residual Network Our IRNet first extracts the initial feature map using a 1×1 convolution layer (instead of 3×3 one to reduce the parameter amounts). Then we cascade n Improved Residual Blocks (IRBs) proposed for fine-grained feature extraction and fusion. The details of our IRB block will be introduced later. 
To boost the ITM performance, each IRB is followed by a Contrast-aware Channel Attention (CCA) layer <cit.>. We also use a skip connection to sum the feature maps before the IRB block and after the CCA layer. Improved Residual Block (IRB). The proposed IRB block is built upon the residual block <cit.>, which achieves great success in many computer vision tasks <cit.>. As shown in Figure <ref> (a), the residual block <cit.> contains two 3×3 convolution layers with an activation function (here we replace ReLU by LeakyReLU) between them; the output feature is then added to the input feature F_in and activated by another LeakyReLU function. Built upon the residual block, our IRB block is designed to keep our IRNet as simple as possible while achieving better ITM performance. This is made feasible by fully exploiting the multi-layer feature maps within the IRB block. To this end, given the input feature F_in∈ℝ^H× W× C, our IRB first refines it by a 3×3 convolution layer and a LeakyReLU activation function. The extracted feature F_1∈ℝ^H× W× C/2 is further refined in our IRB by a second 3×3 convolution layer to output the feature F_2∈ℝ^H× W× C: F_1 = LeakyReLU(Conv_3×3(F_in)), F_2 = Conv_3×3(F_1). Then our IRB uses a skip connection and a Conv_1×1 to fuse F_in and F_2 and obtain the fusion feature F_fuse: F_fuse = Conv_1×1(F_in+F_2). Finally, different from the residual block, our IRB explicitly concatenates the intermediate feature F_1 with the fusion feature F_fuse to produce the output feature F_out as follows: F_out=Conv_1×1(Concat(F_fuse,F_1)). We visualize the structure of our IRB block in Figure <ref> (b). Compared with the original residual block, our IRB well extracts and utilizes the multi-layer features, which correspond to spatially adaptive luminance areas for ITM. As shown in Figure <ref> (a), compared with the IRNet w/o F_1, our IRNet restores the luminance of the HDR image closer to the ground truth, especially in the highlight regions. Even though popular encoder-decoder frameworks like U-net <cit.> or Uformer <cit.> can be utilized here to extract strong multi-scale features, this would bring a significant growth in parameter amounts and computational costs <cit.>. Through a simple modification to the residual block, the proposed IRB serves as a lightweight building block in our IRNet for efficient ITM performance. The mean feature map along the channel dimension can reflect the luminance information of that feature <cit.>. In Figure <ref> (b), we visualize the mean feature maps of F_in, F_1, F_2, F_fuse, and F_out extracted by our IRNet and “IRNet w/o F_1”. One can see that the mean feature map of F_1 extracted by our IRNet exhibits higher luminance in the sky area around the sun than that of “IRNet w/o F_1”. Without the luminance information carried by the intermediate feature F_1, “IRNet w/o F_1” produces stronger contrasts at the input feature F_in of IRB blocks and darker luminance around the sun in the output feature F_out, than our IRNet using F_1 in our IRB block. Contrast-aware Channel Attention (CCA). To preserve image details, we utilize a CCA layer <cit.> after each IRB block. As shown in Figure <ref> (c), the CCA layer consists of contrast computation, two 1×1 convolution layers interleaved with a ReLU function, a sigmoid function, and a skip connection between the input and output features to help gradient propagation.
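Before the contrast computation of the CCA layer is given next, the IRB block defined by the equations above can be made concrete in code. The following is a minimal PyTorch-style sketch based only on those equations; the class name, the LeakyReLU slope, and the padding choices are our own assumptions and may differ from the authors' released implementation.

import torch
import torch.nn as nn

class IRB(nn.Module):
    """Improved Residual Block (sketch).
    Implements:  F_1    = LeakyReLU(Conv3x3(F_in))          (C -> C/2 channels)
                 F_2    = Conv3x3(F_1)                       (C/2 -> C channels)
                 F_fuse = Conv1x1(F_in + F_2)
                 F_out  = Conv1x1(concat(F_fuse, F_1))       (C + C/2 -> C channels)
    """
    def __init__(self, channels: int = 64):
        super().__init__()
        half = channels // 2
        self.conv1 = nn.Conv2d(channels, half, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(negative_slope=0.1, inplace=True)  # slope is an assumption
        self.conv2 = nn.Conv2d(half, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels + half, channels, kernel_size=1)

    def forward(self, f_in: torch.Tensor) -> torch.Tensor:
        f1 = self.act(self.conv1(f_in))                        # intermediate feature F_1
        f2 = self.conv2(f1)                                    # refined feature F_2
        f_fuse = self.fuse(f_in + f2)                          # skip connection + 1x1 fusion
        return self.out(torch.cat([f_fuse, f1], dim=1))        # explicit reuse of F_1

if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)
    print(IRB(64)(x).shape)  # torch.Size([1, 64, 128, 128])

The only structural difference from a plain residual block is the final concatenation, which keeps the block lightweight while reusing the intermediate feature F_1.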
Given the input X=[x_1,...,x_C]∈ℝ^H× W× C, the contrast is computed as follows: z_c = H_GC(x_c) = √(1/HW∑_(i,j)∈ x_c (x_c^i,j - 1/HW∑_(i,j)∈ x_c x_c^i,j)^2) + 1/HW∑_(i,j)∈ x_c x_c^i,j, c=1,…,C. After the i-th (i=1,…,n) IRB block and CCA layer, the output feature is added to the input feature F_in^i by a skip connection, and F_in^n+1 is the final feature that will be fed to the subsequent convolution layers as follows: F_in^i+1=F_in^i+CCA(IRB(F_in^i)). After extracting n scales of fine-grained feature maps, we concatenate them for multi-scale feature fusion, which is implemented by a sequence of a 1×1 convolution layer, a LeakyReLU activation function, and a 3×3 convolution layer. Finally, we reconstruct the output HDR image using a 3×3 convolution layer. The overall architecture of the proposed IRNet is shown in Figure <ref> (d). To apply the proposed IRNet to the joint SR-ITM task, we further add a Pixel Shuffle operation <cit.> after the final 3×3 convolution layer of our IRNet to make it feasible for super-resolution. The Pixel-Shuffle module contains two 3×3 convolution layers interleaved with a ReLU function. The first convolution layer reduces the channel dimension of the feature map from C to 3s^2, where s is the upsampling factor, while the second convolution layer reconstructs the 3-channel HR-HDR image via upsampling the feature map by a factor of s. §.§ Implementation Details Here, we set the channel dimension of the feature map F_in as C=64. The number of IRB blocks n is set as n=2 for the ITM task and n=5 for the joint SR-ITM task. We use Kaiming initialization <cit.> to initialize the parameters of our IRNet. To optimize these parameters, we adopt the Adam optimizer <cit.> with β_1 = 0.9 and β_2 = 0.999 to minimize an ℓ_1 loss function. The learning rate η is initialized as 5×10^-4 and decays to 1×10^-11 by a cosine annealing schedule with warm restarts <cit.> every 60 epochs. The batch size is set as 16. We train the models of our IRNet for 200 epochs on an NVIDIA V100 GPU with 32GB memory. § EXPERIMENTS In this section, we evaluate the performance of the comparison methods and our IRNet on the ITM and joint SR-ITM tasks. We first introduce the datasets and metrics used. Then we present the comparison results on ITM and joint SR-ITM, respectively. Finally, we conduct a series of ablation experiments to study the components of our IRNet. §.§ Dataset and Metrics Training set. In our experiments, we use the recently published HDRTV1K dataset <cit.> to evaluate the comparison methods. This dataset contains 1,235 pairs of 8-bit SDR and 10-bit HDR images for training and 117 pairs of images for testing. We crop each image in the training set into 30 256×256 image patches. For data augmentation, we randomly flip the cropped patches horizontally or vertically and rotate them by 90°, 180°, or 270°. To perform joint SR-ITM on the HDRTV1K dataset, which is originally developed only for ITM, we downsample the SDR images by a factor of s=4 to obtain the low-resolution (LR) SDR images, similar to <cit.>. The high-resolution (HR) and HDR images from the HDRTV1K dataset can still be used as the training targets. Test sets. On the ITM task, we evaluate the comparison methods on three datasets: the test set of HDRTV1K <cit.>, our newly collected ITM-4K dataset (for high-resolution images), and the test set in <cit.>. On the joint SR-ITM task, we evaluate the comparison methods on the test set of HDRTV1K <cit.>.
The details of these test sets are summarized as follows: * HDRTV1K <cit.> contains 117 test SDR images of size 3840×2160×3, with paired HDR images. For joint SR-ITM, we downsample the SDR images by a factor of 4 to generate the LR-SDR test images. * ITM-4K contains 160 pairs of SDR and HDR images of size 3840×2160×3. These images are extracted from 9 HDR10 videos collected from https://4kmedia.org4kmedia.org. The corresponding SDR videos are generated through YouTube similar to <cit.>. We display 12 typical scenes from the 160 test images in Figure <ref>. In Figure <ref>, we also visualize the distribution of the 160 SDR images in our ITM-4K dataset and the 117 SDR test images in HDRTV1K <cit.> using t-SNE <cit.>. One can see that our ITM-4K dataset contains diverse scenes similar yet supplementary to the test set of HDRTV1K <cit.>. * The test set in <cit.>. This dataset contains 28 test images, 12 of which are overlapped with the training set of HDRTV1K <cit.> and the test set of our ITM-4K. Thus, we use the remaining 16 images to evaluate the ITM methods. Note that although this dataset is used for joint SR-ITM task, the test set provides the SDR images of the same sizes with the corresponding HDR images, which can be used to evaluate ITM methods. We do not use this test set for the joint SR-ITM task due to its overlap with the training set of HDRTV1K <cit.>. Metrics. We evaluate the performance of different methods on ITM and joint SR-ITM in terms of PSNR, SSIM <cit.>, LPIPS <cit.>, and HDR-VDP3 <cit.>. PSNR is used to evaluate the closeness of the output image to the corresponding ground truth image. SSIM <cit.> and LPIPS <cit.> evaluate the structural and perceptual similarity, respectively, of the output image to the corresponding ground truth image. HDR-VDP3 <cit.> is a widely used metric to evaluate the quality of HDR images <cit.>, and we use its prediction of “quality” (Q) here. §.§ Results on Inverse Tone Mapping Comparison methods. For our IRNet, we set n=2 and C=64, and denote it as “IRNet-2 (64c)”. We compare it with four ITM methods of HDRNet <cit.>, CSRNet <cit.>, Ada-3DLUT <cit.>, and HDRTVNet <cit.>. The methods of Pixel2Pixel <cit.> and CycleGAN <cit.> are also evaluated as two generative baselines for ITM. As suggested in <cit.>, we also modify the joint SR-ITM methods of Deep SR-ITM <cit.> and JSI-GAN <cit.> for the ITM task, by setting the stride of the first convolution layer as 2 to make them feasible for the ITM task. This manner reduces their computational costs while not degrading the ITM performance. Objective results. The comparison results on the test set of HDRTV1K <cit.> are summarized in Table <ref>. One can see that our IRNet-2 (64c) outperforms the second best method, , AGCM+LE, by 0.59dB, 0.0011, and 0.3 in terms of PSNR, SSIM, and LPIPS, respectively. Note that our IRNet-2 (64c) has 134.73K parameters, fewer than all the other comparison methods except CSRNet (36.49K) and AGCM (35.25K). But these two methods suffer from clear performance gap to our IRNet-2 (64c) in terms of all evaluation metrics. On HDR-VDP3, our method is slightly (0.03) lower than the best method AGCM+LE. But AGCM+LE requires 1410K parameters, 6228.31G FLOPs, and 3114.09G MACs to process a 4K-resolution SDR image at a speed of 691.30ms, much larger than those of our IRNet-2 (64c). Besides, our IRNet-1 (48c), , the IRNet with a single IRB block and C=48, only needs 49.3K parameters to achieve competitive results with the second best method of AGCM+LE. 
We further evaluate our IRNet-2 and other methods on our ITM-4K dataset and the 16 SDR images in the test set of <cit.>. As shown in Table <ref>, our IRNet-2 (64c) still achieves better results than the other comparison methods on PSNR and HDR-VDP3. In summary, our IRNet achieves efficient ITM performance with a lightweight backbone. Visual quality is an important criterion to evaluate the performance of ITM methods, since humans are the final reviewers of the image quality. For the purpose of visualization, the HDR images are generated from HDR10 videos and stored in the 16-bit PNG format. The comparison results of visual quality by different methods on three test sets are shown in Figure <ref>. We observe that most comparison methods suffer from a certain degree of color bias, especially near the light source. Our IRNet achieves closer results to the ground truth images than other methods, with more correct colors and color contrasts. In addition, our IRNet achieves better PSNR and SSIM results than the other comparison methods. All these results demonstrate that our IRNet is very effective on ITM. Running speed, i.e., the actual wall-clock time required to process an SDR image, is a direct measure of model efficiency. We calculate the running time of the comparison methods on 4K-resolution (3840×2160×3) images. As shown in Table <ref>, our IRNet-2 (64c) is faster than the second and third best methods, AGCM+LE and HDRTVNet, by a gap of 292.97ms and 1115.10ms, respectively. Meanwhile, IRNet-1 (48c) reduces the running time of IRNet-2 (64c) from 398.33ms to 166.91ms while maintaining competitive performance. Although faster than our IRNet-2, the methods of HDRNet, CSRNet, Ada-3DLUT, and AGCM suffer from obvious performance degradation on quantitative metrics. §.§ Results on Joint SR-ITM Comparison methods. Here, we set n=5 and C=64 in our IRNet, and denote it as “IRNet-5 (64c)”. We compare it with two SR methods, EDSR <cit.> and RFDN <cit.>, two cascaded two-stage SR-ITM methods, “HDRTVNet+RFDN” (sequentially performing ITM by HDRTVNet and SR by RFDN) and “RFDN+HDRTVNet” (vice versa), and two joint SR-ITM methods, Deep SR-ITM <cit.> and JSI-GAN <cit.>. For the cascaded SR-ITM methods, we choose RFDN <cit.> and HDRTVNet <cit.> since they are methods for SR and ITM, respectively. Objective results. The numerical comparison results are summarized in Table <ref>. It can be seen that the two SR methods still achieve reasonable performance in terms of objective metrics. By first performing SR and then ITM, the cascaded method achieves better results on image quality metrics, but requires heavy computational costs, 14783.55G FLOPs and 7391.58G MACs, to process an LR-SDR image of size 960×540. Of course, first performing ITM and then SR significantly reduces the computational costs, but the performance on the evaluation metrics suffers a huge degradation. Besides, compared with Deep SR-ITM and JSI-GAN, our IRNet-5 (64c) achieves the best PSNR results (0.38dB higher than the second best method “RFDN+HDRTVNet”) and comparable results on the other metrics, but with the least requirements on parameter amounts, computational costs, and inference time. These observations demonstrate that our IRNet is a lightweight and efficient backbone that can achieve competitive performance on the joint SR-ITM task. Visual quality. In Figure <ref>, we qualitatively compare the visual results of different methods on the HDRTV1K test set <cit.> modified for joint SR-ITM (please refer to <ref> A).
One can see that all these methods obtain promising visual results on the presented scenes. The method of “HDRTVNet+RFDN” produces blurry edges around the lighting area. Besides, the images output by “HDRTVNet+RFDN”, “RFDN+HDRTVNet”, Deep SR-ITM <cit.> and JSI-GAN <cit.> suffer from the color shift problem to some extent. By fully exploiting multi-layer features for fine-grained image reconstruction, our IRNet-5 (64c) not only accurately restores the image colors, but also well increases the image details during the SR process. These results validate that, though being lightweight with the fewest parameter amounts and computational costs, the proposed IRNet is very efficient on the joint SR-ITM task. Running speed. The comparison results of running speed on the downsampled images (960×540×3) are summarized in Table <ref>. It can be seen that our IRNet is faster than other comparison methods. Note that when comparing with “RFDN+HDRTVNet”, our IRNet-5 achieves comparable performance with only 4.08% of its running time. These results validate the efficiency of our IRNet on joint SR-ITM. §.§ Ablation Study To study in detail the working mechanism of our IRNet, we present comprehensive ablation experiments of our IRNet on ITM. Specifically, we assess: 1) how to extract the intermediate feature F_1 in our IRB? 2) how does the number of IRB blocks affect our IRNet? 3) how does the channel dimension C in IRB influence our IRNet? 4) how does the CCA layer boost our IRNet? All variants of our IRNet are trained and evaluated on the training set and test set of HDRTV1K <cit.>, respectively. 1) How to extract the intermediate feature F_1 in our IRB? The IRB in our IRNet is modified from the residual block (RB). To validate the effectiveness of our IRB, we first evaluate our IRNet by replacing the IRB blocks by the RB blocks (using LeakyReLU instead of ReLU for fair comparison). The results listed in the first two rows of Table <ref> show that our IRNet with the IRB block achieves much better performance than our IRNet with the original RB block. Besides, we design several variants of our IRB block (“IRB”) and study how they influence our IRNet on ITM. We first remove the intermediate feature F_1 to verify its importance in our IRB, which is denoted as “IRB w/o F_1”. Then we study where to extract the intermediate feature F_1, which can be put before the first convolution layer (take F_1 as F_in), after the activation layer (our IRB), before the addition operation (take F_1 as F_2). The results are summarized in Table <ref>. One can see that our IRNet with the original IRB achieves the best PSNR and SSIM results. By removing the feature F_1, the variant of our IRNet achieves clear drop on PSNR and SSIM, but similar LPIPS and HDR-VDP3 results. If we use the input feature F_in of IRB or the feature after the second convolution layer F_2 as the intermediate feature F_in, the variants of our IRNet suffer from clear drop on PSNR, but with a little difference on SSIM and LPIPS. All these results validate the effectiveness of utilizing the feature after the activation function as the intermediate feature for our IRB to achieve promising ITM performance. 2) How does the number of IRB blocks affect our IRNet? In our IRNet, we use two IRB blocks for ITM and five IRB blocks for joint SR-ITM. Here, we vary the number of IRB blocks to study how it influences our IRNet. The results are listed in Tables <ref> and <ref>, respectively. 
It can be seen that our IRNet achieves promising performance with 1∼4 IRB blocks on SSIM, LPIPS, and HDR-VDP3. Our IRNet with two IRB blocks achieves the best PSNR results among all choices. Similarly, our IRNet with five IRB blocks achieves the best PSNR and SSIM results on joint SR-ITM, while that with six IRB blocks achieves the best LPIPS and HDR-VDP3 results. To reduce the parameter amounts, we use two and five IRB blocks in our IRNet for ITM and joint SR-ITM, respectively. 3) How does the channel dimension C in IRB influence our IRNet? To answer this question, we perform experiments on our IRNet with different number of channels in the IRB block. The results of our IRNet-1 and IRNet-2 on ITM and those of our IRNet-5 on joint SR-ITM are shown in the Table <ref>, Table <ref> and Table <ref>, respectively. For ITM, our IRNet-1 using one IRB achieves the best PSNR and SSIM results when C=48 and with 49.30K parameters, while our IRNet-2 using two IRBs achieves the best PSNR and SSIM results when C=64 and with 134.73K parameters. For joint SR-ITM, our IRNet-5 using five IRBs achieves the best PSNR results when C=64 and with 468.19K parameters. Our IRNet-5 with C=96 achieves better SSIM, LPIPS, and HDR-VDP3 results, but suffers from a huge growth of parameter amounts. Thus, we set C=48 and C=64 in our IRNet-1 and IRNet-2, respectively for ITM, and C=64 in our IRNet-5 for joint SR-ITM. 4) How does the CCA layer boost our IRNet? Our IRNet uses one CCA layer after each IRB block to refine the feature maps. We remove the first CCA layer between two IRB blocks in our IRNet-2. The results on ITM are shown in Table <ref>. One can see that our IRNet-2 without the first CCA layer suffers from a clear performance drop on PSNR. This demonstrates that the CCA layer is important to our IRNet-2 on ITM. § CONCLUSION In this paper, we developed a lightweight and efficient inverse tone mapping (ITM) network. The proposed Improved Residual Network (IRNet) is mainly consisted of Improved Residual Blocks (IRB) modified from the popular residual block and Contrast-aware Channel Attention (CCA) layers. The proposed IRB block is able to fuse multi-layer features extracted by different convolution layers for fine-grained ITM. We also collected a new ITM-4K test set containing 160 versatile 4K-resolution SDR images. Experiments on three benchmark datasets demonstrated that, our IRNet outperforms the state-of-the-art methods on the ITM task with only ∼0.13M parameters and ∼0.22×10^4G FLOPs per 4K image. Further experiments on the joint SR-ITM task also showed the advantages of our IRNet over the comparison methods on the objective metrics, the computational efficiency, and most importantly, the image quality such as color depth restoration. plain
http://arxiv.org/abs/2307.04652v1
20230710154947
Winding number and circular 4-coloring of signed graphs
[ "Anna Gujgiczer", "Reza Naserasr", "Rohini S", "S Taruni" ]
math.CO
[ "math.CO", "cs.DM" ]
Winding number and circular 4-coloring of signed graphs Sven Burger August 12, 2023 ======================================================= Concerning the recent notion of circular chromatic number of signed graphs, for each given integer k we introduce two signed bipartite graphs, each on 2k^2-k+1 vertices, having shortest negative cycle of length 2k, and the circular chromatic number 4. Each of the construction can be viewed as a bipartite analogue of the generalized Mycielski graphs on odd cycles, M_ℓ(C_2k+1). In the course of proving our result, we also obtain a simple proof of the fact that M_ℓ(C_2k+1) and some similar quadrangulations of the projective plane have circular chromatic number 4. These proofs have the advantage that they illuminate, in an elementary manner, the strong relation between algebraic topology and graph coloring problems. § INTRODUCTION The problem of building graphs of high girth and high chromatic number is one of the basic questions of graph coloring and its study has led to many further developments. In particular, the original proof of Erdős for the existence of such graphs has led to the development of probabilistic methods in graph theory. Since then several constructive methods were presented, but none are easy to grasp. With a weaker condition of high odd girth instead of high girth, there are several natural classes of graphs. In particular, in the family of the Kneser graphs one can find examples of high odd girth and high chromatic number. The proof of the lower bound for the chromatic number of the Kneser graphs, by L. Lovász <cit.>, was the birthplace of the connection between algebraic topology and graph coloring. Further developing this method, Stiebitz introduced a generalization of the Mycielski construction in <cit.> to build small graphs of high odd girth and high chromatic number. Generalized Mycielski on odd cycles have been studied independently by many authors and several results on their chromatic number <cit.>, circular chromatic number <cit.> and on various other related parameters <cit.> are proved. In this work, building on the ideas from several works in the literature, we first present a relatively short proof that the generalized Mycielski graphs on odd cycles have circular chromatic number 4. The proof has the advantage of capturing the connection between algebraic topology and graph coloring with elementary techniques. We then present three similar classes of signed graphs of high negative girth and circular chromatic number 4. The graphs are built similarly to the generalized Mycielski on odd cycles when viewed as a quadrangulation of the projective plane, the main difference being that the subgraph induced by the outer layer induces a Möbius ladder. In Section <ref>, we give the necessary notation and the terminology. In Section <ref>, we provide a historical account of what is known. In Section <ref>, we discuss three families of signed graphs and in the Section <ref>, we prove that their circular chromatic number is 4. Finally, we conclude our paper with the Section <ref>. § NOTATION We consider simple graphs unless clearly stated otherwise. A signed (simple) graph (G,σ) is a graph G together with the assignment σ of signs to the edges. We denote by (G, -) the signed graph G with all edges negative (and (G, +) accordingly). If G is bipartite, then (G, σ) is called a signed bipartite graph (in some literature, this term is used to refer to a balanced signed graph, that is a signed graph with no negative cycle). 
The sign of a structure in (G, σ) (such as a cycle, a closed walk, a path) is the product of the signs of the edges in the said structure, counted with multiplicity. Given an integer n, n≥ 3, we denote by C_n the cycle (graph) on n vertices. That is a 2-regular connected graph on n vertices. Furthermore, we view C_n as a plane graph, that is, the graph together with a planar embedding. For topological use of C_n, one may identify it with the regular polygon on n vertices. Vertices of C_n are normally labeled as v_1, v_2, …, v_n. The exact square of C_n, denoted C^#2_n, is the graph on the same set of vertices where two vertices are adjacent if they are at a distance (exactly) 2 in C_n. Observe that for odd values of n, C^#2_n is also a cycle of length n. For even values of n, C^#2_n consists of two connected components, each isomorphic to a cycle of length n/2. They are induced on sets of vertices with odd and even indices and will be denoted, respectively, by C^#2o_n and C^#2e_n. Given a positive real number r, we denote by O_r the (geometric) circle of circumference r. That would be a circle of radius r/2π. The antipodal of a point x on O_r, denoted x̄, is the unique point on O_r which is collinear with x and the center of the circle. Given a real number r, r≥ 2, a circular r-coloring of a signed graph (G, σ) is a mapping ψ of the vertices of G to the points of O_r in such a way that if xy is a negative edge, then the distance of ψ(x) from ψ(y) on O_r is at least 1, and if xy is a positive edge, then the distance of ψ(x) from the antipodal of ψ(y) is at least 1; equivalently, the distance between ψ(x) and ψ(y) is at most r/2-1. The circular chromatic number of (G, σ), denoted χ_c(G, σ), is the infimum of r such that (G, σ) admits a circular r-coloring. When restricted to signed graphs where all edges are negative, we have the classic notion of circular coloring of graphs. This extension to signed graphs was first presented in <cit.>; we note that a different but similar parameter, under a similar name, was introduced in <cit.>. However, compared to <cit.>, the roles of positive and negative edges are exchanged for better consistency with the literature on the structural theory of signed graphs, especially in regard to the minor theory of signed graphs. Among basic results, the following should be noted for the purpose of this work. The infimum in the definition is always attained for finite graphs, even allowing multi-edges and positive loops, but a negative loop cannot be colored with a finite r. For the class of signed bipartite (multi)graphs, we have the trivial upper bound of χ_c(G, σ)≤ 4: to see this, map the vertices of one part of G to the north pole of O_4 and the vertices of the other part to the east point. Even with such a strong upper bound, determining the exact value of the circular chromatic number of a given signed bipartite graph is of high importance and, in general, quite difficult. In particular, as it is pointed out in <cit.>, using some basic graph operations, namely indicators, one can transform a graph G into a signed bipartite graph F(G) such that the circular chromatic number of F(G) determines the circular chromatic number of G. A basic example of this sort is the construction S(G), which is obtained from a given graph G by replacing each edge uv of G with a negative 4-cycle ux_uvvy_uv where x_uv and y_uv are new and distinct vertices. It is then shown in <cit.> that χ_c(S(G))=4-4/(χ_c(G)+1).
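To make the definition of a circular r-coloring concrete, the following is a small, self-contained Python sketch that checks whether a given assignment of points on O_r satisfies the distance conditions stated above. The data format (an edge list with ±1 signs) and the example are our own illustration, not taken from the paper; the example is a negative 4-cycle, i.e., S(K_2), for which the formula above (and the value χ_c(C_-4)=8/3 quoted later in the paper) gives circular chromatic number 8/3.

def circ_dist(a: float, b: float, r: float) -> float:
    """Distance between two points of O_r, measured along the circle."""
    d = abs(a - b) % r
    return min(d, r - d)

def is_circular_r_coloring(edges, psi, r: float, eps: float = 1e-9) -> bool:
    """edges: iterable of (u, v, sign) with sign = -1 (negative) or +1 (positive).
    psi:   dict mapping each vertex to a point in [0, r) on O_r.
    A negative edge uv needs dist(psi[u], psi[v]) >= 1; a positive edge uv needs
    dist(psi[u], psi[v]) <= r/2 - 1 (i.e., psi[u] is far from the antipode of psi[v])."""
    for u, v, sign in edges:
        d = circ_dist(psi[u], psi[v], r)
        if sign < 0 and d < 1 - eps:
            return False
        if sign > 0 and d > r / 2 - 1 + eps:
            return False
    return True

# A negative 4-cycle (one positive edge, three negative edges, so the product of
# signs is negative) admits a tight circular 8/3-coloring; a small tolerance is
# used because the distances sit exactly at the bounds.
edges = [(0, 1, -1), (1, 2, -1), (2, 3, -1), (3, 0, +1)]
r = 8.0 / 3.0
psi = {0: 0.0, 1: 1.0, 2: 2.0, 3: 1.0 / 3.0}
print(is_circular_r_coloring(edges, psi, r))  # True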
Further connections with some well-known study and theorems, such as the four-color theorem, is discussed in <cit.> and <cit.>. Motivated by these observations and in connection with some other studies, some of which are mentioned in the last section, the question of constructing signed bipartite graphs of high negative girth but circular chromatic number 4 is of high interest. In this work, we present two bipartite analogues of the generalized Mycielski graph on odd cycles as examples of signed bipartite graphs. The proofs also lead to an elementary understanding of the relation between coloring problems of graphs and basic notions of algebraic topology, namely the winding number. Recall that given a closed curve γ on the plane, the winding number of γ, defined rather intuitively, is the number of times γ is winded around the origin in the clockwise direction, noting that: if the origin is not in the part bounded by γ, then the winding number is 0 and that winding in anticlockwise direction is presented by a negative number. Here the closed curves we work with are mappings to O_r with the center of O_r being the center of the plane. They can be thought of as continuous mappings of [0,1] to O_r with the condition that the two endpoints, i.e., 0 and 1 are mapped to the same point. § A HISTORICAL NOTE In 1955 Mycielski introduced the construction <cit.> that is now known as the Mycielski construction. His goal of the construction was to build triangle-free graphs of high chromatic number. In this construction, given a graph G one adds a vertex v' for each vertex v of G, which is joined to all neighbors of v in G and then adds a vertex u which is joined to all vertices v'. It is not difficult to prove that the resulting graph has chromatic number χ(G)+1. Generalization of the construction, where one adds several layers of copy vertices before adding a universal vertex to the last layer, was first considered independently in Habilitation thesis of M. Stiebitz <cit.> and Ph.D. thesis of N. Van Ngoc <cit.>. (The former is written in German, but its result can also be found in <cit.> and the latter is in Hungarian.) Stiebitz applied methods of algebraic topology to prove that if one starts with K_2 and iteratively builds a generalized Mycielski, at each step the chromatic number would increase by 1. This does not hold for every graph, though. For example, the chromatic number of the complement of C_7 is 4, and any generalized Mycielski of it, except the original one, is also of chromatic number 4. It has been shown recently in <cit.> that the result of Stiebitz is equivalent to the Borsuk-Ulam theorem. First English publications of the fact that the generalized Mycielski based on an odd cycle has chromatic number 4 appeared independently in <cit.>. The proof of Payan <cit.> is about the special case of M_k(C_2k+1) as they appear as subgraphs of nonbipartite Cayley graphs on binary groups, but it works the same for any M_ℓ(C_2k+1). This proof has strongly motivated the work presented here. The proof of <cit.> is presented quite differently, but the hidden idea behind the proof is the same. The result of <cit.> is more general. It is shown that if G is not bipartite but admits an embedding on the projective plane where all facial cycles are 4-cycles, then χ (G)=4. That such structures are necessary for 4-chromatic triangle-free projective planar graphs was conjectured in <cit.> and proved in <cit.>. 
The well-known fact that M_ℓ(C_2k+1) quadrangulate the projective plane is evident from our presentation of these graphs in the next section. The circular chromatic number of Mycielski constructions was first studied in <cit.>. That of the generalized Mycielski is studied in <cit.> among others. In particular, that χ_c(M_ℓ(C_2k+1))=4 follows, independently, from the general results of <cit.> and of <cit.>. In the latter, it is shown that if the lower bound of 2k for the chromatic number is proved using topological connectivity, then the same lower bound works for the circular chromatic number as well. § THE CONSTRUCTION The main body of the construction we will work with is an almost quadrangulation of the cylinder which we define here. Given positive integers ℓ and k, C__ℓ× (2k+1) is the graph whose vertex set is V={v__i,j| 1 ≤ i ≤ℓ, 1≤ j ≤ 2k+1} with the edge set E={ v__i,jv__i+1, j-1, v__i,jv__i+1, j+1| 1 ≤ i ≤ℓ-1, 1≤ j ≤ 2k+1}. Here, and in the rest of this work, the addition on the indices is taken modular the maximum value of the said index, which is 2k+1 in this case. We note that, as a graph C__ℓ× (2k+1) is isomorphic to the categorical product P_ℓ× C_2k+1, but the standard labeling of this product does not fit well with our purpose. A general picture of this graph is depicted in Figure <ref> where the dashed circles are only presenting the layers, but they will play a key role. §.§ M_ℓ(C_2k+1) Given positive integers ℓ and k, the generalized Mycielski graph of the odd cycle C_2k+1, M_ℓ(C_2k+1) is built from C__ℓ× (2k+1) by the following two steps: * Connect v__1,j to v__1,j+k (Figure <ref>, right). * Add a new vertex u and connect it to all vertices v__ℓ,j, j=1,…, 2k+1 (Figure <ref>, left). Observe that the added edges in the first item form an isomorphic copy of C_2k+1. One can easily observe that starting with this cycle, the classic definition of a generalized Mycielski graph results in the same graph. The graph M_1(C_3) is K_4. The graph M_2(C_5) is the well-known Grözsch graph. To show that M_2(C_5) is the smallest 4-chromatic triangle-free graph is proposed as an exercise in <cit.>. Furthermore, Chvátal showed in <cit.> that M_2(C_5) is the only 4-chromatic triangle-free graph on 11 vertices. The following is a key property of M_ℓ(C_2k+1). The shortest odd cycle of M_ℓ(C_2k+1) is the minimum of 2k+1 and 2l+1. Since this is a folklore fact, we do not provide a proof but we note that the main idea to verify it is also presented in the next proposition. §.§ BQ(ℓ,2k-1) Next, given integers ℓ and k satisfying ℓ, k ≥ 2, we define the signed bipartite graph BQ(ℓ,2k-1) also from C_ℓ× (2k-1) as follows. * Edges of C_ℓ× (2k-1) are all negative. * Connect v__1,j to v__2,j+k by a positive edge (Figure <ref>, right). * Add a new vertex u and connect it to each of the vertices v__ℓ,j, j=1,…, 2k-1, with a negative edge (Figure <ref>, left). We view this construction as one of the bipartite analogues of the generalized Mycielski. The second item of the construction, which is presented in Figure <ref> (right) is the main difference with the previously known constructions: While in construction of M_ℓ(C_2k+1) we add some edges between vertices of the first layer, in this new construction we add some connection between vertices of the first layer and the second layer. Therefore this operation preserves the bipartition. The underlying graph of the induced subgraph on the first two layers is isomorphic to what is known as the Möbuis ladder with 2k-1 steps. We will refer to it as such. 
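Since all the families introduced here are analyzed through the length of a shortest negative cycle, it may help to note that this quantity can be computed mechanically via the signed double cover: a shortest path from (v, 0) to (v, 1) in the double cover is a shortest negative closed walk through v, and a shortest negative closed walk is necessarily a cycle. The following self-contained Python sketch applies this to BQ(2,3) ≅ (K_{3,4}, M), the example described in the next paragraph; the encoding of the graph and the routine itself are ours, not taken from the paper.

from collections import deque

def negative_girth(edges):
    """Length of a shortest negative cycle of a signed graph, or None if balanced.
    edges: iterable of (u, v, sign) with sign = +1 or -1.
    Signed double cover: vertices (x, s) with s in {0, 1}; a negative edge flips s,
    a positive edge keeps it.  A shortest (v,0)-(v,1) path is a shortest negative
    closed walk through v, which is always a cycle."""
    adj = {}
    for u, v, sign in edges:
        flip = 0 if sign > 0 else 1
        for a, b in ((u, v), (v, u)):
            for s in (0, 1):
                adj.setdefault((a, s), []).append((b, s ^ flip))
    best = None
    for v in {u for u, _, _ in edges} | {w for _, w, _ in edges}:
        dist = {(v, 0): 0}
        queue = deque([(v, 0)])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        d = dist.get((v, 1))
        if d is not None and (best is None or d < best):
            best = d
    return best

# BQ(2,3) is (K_{3,4}, M): the edges of a maximum matching of K_{3,4} are positive,
# all other edges are negative.  Its negative girth should be min(2*2, 2*2) = 4.
A, B = ["a1", "a2", "a3"], ["b1", "b2", "b3", "b4"]
matching = {("a1", "b1"), ("a2", "b2"), ("a3", "b3")}
edges = [(a, b, +1 if (a, b) in matching else -1) for a in A for b in B]
print(negative_girth(edges))  # 4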
The case of BQ(2,3) is (K_3,4,M), depicted in Figure <ref>. It is the signed bipartite graph where each one of the edges of a maximum matching of K_3,4 is assigned a positive sign and all the other edges are assigned a negative sign. The fact that the underlying graph of BQ(ℓ,2k-1) is bipartite is easily observed. The parity of the levels gives a natural bipartition of the graph. We show that, based on the choice of k and l, this signed bipartite graph has no short negative cycle. Given integers l and k where l,k≥ 2, the shortest negative cycle of BQ(ℓ,2k-1) is of length min{2l, 2k}. We first present two natural choices for a negative cycle, one of length 2k and another of length 2l. The first is a negative cycle on the first two layers. Take a positive edge and connect its two ends with one of the two paths using only the negative edges that connect the two layers. This would result in a negative cycle of length 2k. The second negative cycle we consider is obtained by taking a positive edge and connecting each of its ends to the vertex u by a shortest path (all of whose edges are negative). One of these paths will be of length l and the other would be of length l-1. Together with the first chosen edge itself then, they form a negative cycle of length 2l. It remains to show that the shortest of these two types of cycles gives us the negative girth. To that end, we will first show that a shortest negative cycle can only use one positive edge of BQ(ℓ,2k-1). Towards a contradiction, let C be a negative cycle with more than two positive edges. We aim to present a negative cycle C' whose length is at most |C|-2. We take two positive edges of C that come consecutively in the cyclic order. Assume xy and x'y' are these two edges and that x' is followed by y in the cyclic order of C (that is to say, there is no positive edge in the x'-y path in C). We remove the two positive edges xy and x'y' and the x'y path connecting them in C, but then we add an xy' copy of this path (which also has no positive edge). The result is a closed walk whose sign is the same as that of C, and whose length is |C|-2. But then this closed walk must contain a negative cycle, whose length then is also at most |C|-2, a contradiction. Finally, if C is a cycle that uses exactly one positive edge, say xy, then the x-y path P_xy=C-xy either passes through u, in which case C has at least 2l edges, or the natural image of P_xy to the cycle in between the first and second layers also connects x to y. But the shortest such path is of length 2k-1, thus P_xy is of length at least 2k-1, and the negative cycle is of length at least 2k. §.§ BQ(ℓ,2k) The third family of (signed) graphs we consider in this work is built quite similarly to the previous construction. More precisely, given integers ℓ and k satisfying ℓ, k ≥ 2, we define the (signed) graph BQ(ℓ,2k) from C_ℓ× (2k) as follows. * Edges of C_ℓ× 2k are all negative. * Connect v__1,j to v__1,j+k and v__2,j to v__2,j+k by negative edges (Figure <ref>, right). * Add a new vertex u and connect it to each of the vertices v__ℓ,j, j=1,…, 2k, with a negative edge (Figure <ref>, left). As an example, the (signed) graphs BQ(3,4) and BQ(4,6) are depicted in Figures <ref> and <ref> respectively. Given integers l and k, where l,k ≥2, the shortest negative cycle of BQ(l, 2k) is of length min{2l-1, 2k+1}. A cycle of BQ(l, 2k) which does not contain any step of the Möbius ladder induced by the first two layers is even. That is to say, any odd cycle has at least one step of this Möbius ladder.
A step of the Möbius ladder together with one of the two paths that are connecting the end vertices of this step through the first two layers form an odd cycle of length 2k+1. Also, there is another natural choice for an odd cycle constituted by this step and the shortest path which contains the universal vertex u connecting the end vertices of this step. This cycle is of length 2l-1. In a similar way as in the proof of Proposition <ref> one may conclude that one of these two odd cycles of BQ(l,2k) is the shortest. The two constructions BQ(ℓ, 2k-1) and BQ(ℓ, 2k) can be defined uniformly as follows. Starting with an i-star (i=2k-1 or i=2k) on the projective plane, we complete it to quadrangulation of the planar part except for the vertices on the outer layer which are at distance ℓ or ℓ-1 from the center of the star, and assign a negative sign to everything so that all facial 4-cycles are positive. We then complete the outer layer to a Möbuis ladder, choosing signs for the crossing edges so that all faces are positive 4-cycles but the non-contractible cycles are negative. We view this class of signed graphs as Basic Qudrangulations of the projective plane and thus use the notation BQ(ℓ, i). §.§ BM(ℓ,2k) The last construction we present here, BM(ℓ,2k), is built from C_ℓ, 2k as follows. Taking all the edges of this graph as negative edges, on the last layer of the cylinder, as in the other cases, we add a (universal) vertex which is joined to all vertices of this layer with negative edges. On the first layer we add a set {u_1, …, u_k} of vertices, then join each u_i, i=1, … k, to v_1, i and v_1, i+1 with negative edges and to v_1, i+k and v_1, i+k+1 with positive edges. See Figure <ref> for a depiction. We leave it to the reader to check the following. Given integers l,k ≥ 2, the shortest negative cycle of the signed bipartite graph BM(l,2k) is of length min{2l+2,2k}. In particular, BM(k-1,2k) has 2k^2-k+1 vertices and its shortest negative cycles is of length 2k. § WINDING NUMBER AND COLORING Given a simple closed curve γ on the plane, and a continuous mapping φ of γ to O_r, we define the winding number of the pair (γ, φ) to be the winding number of the curve φ(γ) with center of O_r considered as the center of the plane. Intuitively speaking, (γ, φ) tells us how many times the curve γ is wrapped around O_r in the clockwise direction noting that a negative number reflects an anticlockwise mapping. This value then will be denoted by ω(γ, φ). A mapping c of the vertices of the cycle C_n to the points of O_r can be extended to a continuous mapping of C_n to O_r with the former being viewed as the closed curve or the polygon. There are 2^n natural ways to do this. For each pair v_i,v_i+1 of the vertices of C_n, the pair c(v_i), c(v_i+1) partitions the circle O_r into two parts. The segment of the polygon that represents the edge v_iv_i+1 can be projected into one of these two parts. We note that c is allowed to map several vertices of C_n to the same point and that even if v_i and v_i+1 are mapped to the same point, in our view, they partition the circle O_r into two parts: a part of length 0 and a part of length r. These 2^n extensions are in a one-to-one correspondence with the 2^n possible orientations of C_n: orient the edge v_iv_i+1 in such a way that the mapping follows the clockwise direction of O_r. Given a coloring c of the vertices of the cycle C_n, two extensions of c to a mapping of the polygon to O_r are of special importance. 
The first is the extension corresponding to the directed cycle C_n. Here v_iv_i+1 is mapped to the part of the circle where c(v_i+1) follows c(v_i) in the clockwise direction. Let us denote this extension by c^D. A trivial observation here is that the winding number of (C_n, c^D) is never 0. The other natural extension is to choose the shortest of the two parts of the circle determined by c(v_i) and c(v_i+1) and project the line v_iv_i+1 onto it. The orientation corresponding to this extension then depends on whether c(v_i) is the start or the end of this shorter part of the circle with respect to the clockwise orientation. We denote this extension by c^sh and observe that this extension may result in winding number 0 for some choices of c (and r). Given the cycle C_n, a mapping c of its vertices to O_r and an extension φ of c to the polygon, a combinatorial way to compute ω(C_n, φ) is as follows: take an (open) interval I on O_r which does not contain any image of the vertices of C_n. Then in an extension φ of c to a mapping of the polygon to O_r, each edge of C_n either traverses I completely or does not touch any point of it. Now the winding number ω(C_n, φ) is the number of edges that traverse I in the clockwise direction minus the number of edges that traverse it in the anticlockwise direction (and thus independent of the choice of I). Let c be a mapping of the vertices of a cycle C_n to the circle O_r. Consider the continuous mapping (C_n, c^D) and an (open) interval I of O_r which does not contain any point c(v_i). Color the edges of C_n with two colors, say green and orange, as follows: if the image of an edge e under c^D contains I, then color it green, otherwise, color it orange. We are interested in the pairs of consecutive edges v_i-1v_i and v_iv_i+1, which are colored differently. If in such a pair, the first edge is colored green, then in the next pair of this sort (next in the cyclic order of indices), the first edge must be orange and vice versa. Thus, the total number of such pairs is even, that is regardless of the choices of n and c. To use this observation, we will work with certain types of mappings c. We say a mapping c of the vertices of C_n to the points on O_r is far-polar if the followings hold: for each i the pair of the points c(v_i-1) and c(v_i+1) on O_r partitions O_r into two unequal parts and that c(v_i) is on the larger of the two parts. More generally, a mapping ϕ of the vertices of a graph G to the circle O_r is called far-polar if for each vertex x of G there is a diameter D_x which separates ϕ(x) from ϕ(y) for all neighbors y of x. In the following, we present how the condition of c being a far-polar mapping provides a connection between c^D extension of c on C_n and c^sh extension of the mapping c on C_n^#2. Let c be a far-polar mapping of C_n to O_r and let I be an interval of O_r which does not contain any c(v_i). Then in the extension c^sh of a mapping of the one or two cycles in C_n^#2 to O_r, the number of edges v_i-1v_i+1 that does not cross over I is an even number. Consider three consecutive vertices v_i-1, v_i, v_i+1 of the cycle. If, following the c^D extension of C_n, both edges v_i-1v_i and v_iv_i+1 are colored orange, that is to say, in the extension they do not pass through I, then since c is far-polar, c(v_i) must be on the longer of c(v_i-1)c(v_i+1) or c(v_i+1)c(v_i-1) and thus I is on the shorter part. Thus in the extension c^sh of C_n^#2, c(v_i-1)c(v_i+1) passes through I. 
Similarly, if both edges v_i-1v_i and v_iv_i+1 are colored green, then c(v_i-1)c(v_i+1) passes through I. On the other hand, if one of the edges is green and the other orange, then together, they must cover more than half of the O_r. Implying that c(v_i-1)c(v_i+1) does not pass through I. Overall the number of edges of C_n^#2 that do not pass through I in c^sh extension is the number of vertices of C_n incident with both green and orange edges, where the colors are determined by the extension c^D of C_n. The number of such pairs then must be even as it is observed above. We may now observe that, given a real number r, r<4, any circular r-coloring of C_n must be a far-polar coloring. Thus we have the following two consequences depending on the parity of n. Let c be a circular r-coloring of an even cycle C_n. Let c_o (resp. c_e) be its restriction on the vertices with odd (resp. even) indices. Then the winding numbers of (C^#2o, c^sh_o) and (C^#2e, c^sh_e) are of the same parity. That is because after choosing a suitable interval I, by Lemma <ref>, the total number of edges of C^#2 that does not cross over I in the extension c^sh is even. As the total number of edges is also even (that is n), the number of edges of C^#2 that cross over I is also even. However, the winding number of each of (C^#2o, c^sh_o) and (C^#2e, c^sh_e), which is the difference of the number of edges crossing I in the clockwise direction and the number of edges crossing it in the anticlockwise direction, has the same parity as the total number of the edges of the cycle in consideration that cross over I (in the c^sh extension). This proves our claim as the sum of the two winding numbers is an even number. Using this lemma, we can build a cylinder of many layers, as shown in the example of Figure <ref>, with the property that in any circular r-coloring c of the red graph (r<4), all of the dashed grey cycles must have winding numbers of the same parity. Observe that in this construction, the zigzag red cycle between two consecutive layers is an even cycle, and its exact square consists of the two grey cycles presenting the two layers. If we then add structures to the two ends in such a way that one force an odd winding number on one of the grey cycles and the other forces an even winding number on another one of them, then the result would be a graph which admits no circular r-coloring for r<4. A basic method to achieve these conditions is presented next. Given an odd integer n, a positive real number r, and a far-polar mapping c of C_n to O_r, the winding number ω(C^#2_n, c^sh) is an odd number. By Lemma <ref>, the total number of edges of C^#2_n that does not cross over I is even. As n is an odd number, C^#2_n is isomorphic to C_n, and, hence, the number of edges crossing over I is odd. This is the sum of the number of edges crossing over I in the clockwise direction and in the anticlockwise direction. Thus the winding number, which is the difference between these two numbers, is also an odd number. Applying this lemma on circular r-coloring for r<4 we have the following. Given an odd integer n, n=2k+1, a real number r satisfying 2+1/k≤ r <4, and a circular r-coloring c of C_n, the winding number ω(C^#2_n, c^sh) is an odd number. We observe that if r<4 and c is a circular r-coloring of C_n, then it is, in particular, a far-polar mapping of C_n. 
That is because for three consecutive vertices v_i-1, v_i, and v_i+1, having partitioned O_r to two parts based on c(v_i-1) and c(v_i+1), the part that contains c(v_i) must be of length at least 2. As r<4, this must be the larger part. Then the statement follows from the previous lemma. Let G be the star K_1,n with u being the central vertex and A being the independent set of order n. Let c be a circular r-coloring of G with r<4. Then for any cycle C built on A, the winding number of (C, c^sh) is 0. This is observed by taking a small interval I sufficiently close to c(u) and noting that first of all, a vertex of C cannot be mapped to c(u); secondly, since r<4, for any pair x and y of vertices in A in the partition of O_r to two parts by c(x) and c(y), the part containing c(u) is of length at least 2 and thus it is the larger of the two, meaning in the shortest extension, c(x)c(y) will never cross over I. We may now give a new proof of the following theorem. For any positive integers ℓ and k, we have χ_c(M_ℓ(C_2k+1))=4. It is enough to observe that M_ℓ(C_2k+1) is obtained from the l× (2k+1) cylindrical grid of Figure <ref> by adding diagonal edges to the bottom layer (that is connecting pairs at a distance k of the grey cycle) and adding a universal vertex to the top layer (as mentioned in the previous section). As any circular r-coloring with r<4 is also far-polar, any such a coloring would imply an odd winding number for the layers in c^sh extension from one end and an even winding number for the layers from the other end. So a proper mapping to O_r where r<4 is impossible. On the other hand, one can easily color M_ℓ(C_2k+1) with 4 colors, which gives the upper bound 4 on the circular chromatic number as well. Next, we show that BQ(ℓ, 2k-1) shares the same property. We will note later that Theorem <ref> follows from the next theorem. For given positive integers ℓ and k, satisfying l,k ≥ 2, we have χ_c(BQ(ℓ, 2k-1))=4. Towards a contradiction, let c be a circular r-coloring of BQ(ℓ, 2k-1) with r<4. We will have a contradiction if we show that the cycle C' formed on v__1,1v__1,2⋯ v__1,2k-1 in this cyclic order has an odd winding number under the mapping c^sh (restricted on the vertices of this cycle). We emphasize that edges of C' are not in BQ(ℓ, 2k-1). To this end we first consider another cycle, C^⋆, (also not part of our graph) by considering the following sequence of vertices of the first layer of BQ(ℓ, 2k-1): v__1,1v__1,k+1v__1,2v__1,k+2⋯ v__1,k. Note that in this cycle v__1,j is followed by v__1,j+k where the addition is taken 2k-1. We may also note that this is the diagonally drawn cycle on the first layer of Figure <ref> (right). Our claim is that the mapping c, viewed as a mapping of the vertices of C^⋆ to O_r, is a far-polar mapping. Toward proving the claim, we consider c(v__1,j), c(v__1,j+k), and c(v__1,j+1). The first observation is that since v__2,j is adjacent to both v__1,j and v__1,j+1 with negative edges, the points c(v__1,j) and c(v__1,j+1) of O_r partition O_r in such a way that the part containing c(v__2,j) is at least 2. As r<4, it follows that c(v__2,j) is on the larger part of O_r when it is partitioned by c(v__1,j) and c(v__1,j+1). It remains to show that c(v__1,j+k) is also on the same part. If not, that is if c(v__1,j+k) is on the shorter side of O_r, then one of the arcs c(v__1,j+k)c(v__2,j) and c(v__2,j)c(v__1,j+k) contains the shorter side of c(v__1,j)c(v__2,j) and the other contains the shorter side of c(v__1,j+1)c(v__2,j). 
As each of these shorter arcs is of length at least one, we conclude that the distance between c(v__1,j+k) and c(v__2,j) is at least one. However, since c is a circular r-coloring where r<4 and v__1,j+kv__2,j is a positive edge, they should be at a distance at most r/2-1<1, a contradiction. Finally, observing that C' is the exact square of C^⋆, and by Lemma <ref>, we conclude that the winding number of C' is odd. To prove that χ_c(BQ(ℓ,2k))=4 we need a few more lemmas. Let c be a far-polar mapping of C_4 to O_r. Then the winding number w(C_4, c^D) is 2. The points c(v_1) and c(v_3) partition O_r into two unequal parts. Since c is a far-polar coloring, we know that c(v_2) and c(v_4) both should be on the larger of these two parts. Without loss of generality, we may assume that images of the vertices are in the following cyclic order: c(v_1), c(v_3), c(v_2), c(v_4). Let I be an interval of O_r that does not contain any c(v_i). As the winding number is independent of the choice of I, we can choose I to be in c(v_3)c(v_2). Then following the orientation of C_4 and the clockwise direction of O_r, the arcs c(v_1)c(v_2) and c(v_3)c(v_4) contain the interval I while the arcs c(v_2)c(v_3) and c(v_4)c(v_1) do not intersect it. Let c be a far-polar mapping of C_2k to O_r and let the edges of C_2k be e_1, e_2, …, e_2k. The number of edges colored green in the extension c^D of c has the same parity as the number of odd (or even) indexed vertices being incident to both green and orange edges. Let J be the set of vertices of C_2k incident to both green and orange edges. Let J_o and J_e be the partition of J into odd and even indexed vertices (the natural bipartition of C_2k). Consider a maximal green path in C_2k. Thus the two ends of each such path are in J. Moreover, if the length of the path is even, then both ends of the path belong to the same subset J_o or J_e of J. Thus each even-length green path contributes 0 to one of J_o or J_e and 2 to the other. If the length of the path is odd, then one of its ends is in J_o and the other is in J_e, thus contributing 1 to each of these two sets. The claim then follows as the odd-length green paths determine the parity of the total number of green edges. Let c be a far-polar coloring of the cycle C_4k. The number of edges colored green in the extension c^D of c and the winding number w(C_4k^#2e, c^sh) (or similarly w(C_4k^#2o, c^sh)) are of the same parity. For each edge v_i-1v_i+1 of C_4k^#2e there is an odd indexed vertex v_i of C_4k corresponding to it. (And similarly, for each edge v_i-1v_i+1 of C_4k^#2o there is an even indexed vertex v_i of C_4k.) As we have observed before, an edge v_i-1v_i+1 in C^#2 does not cross over the interval I in the c^sh extension if and only if the edges incident to v_i are colored differently (i.e. one of v_i-1v_i and v_iv_i+1 is green, the other is orange). So the number of non-crossing edges in C_4k^#2e in the c^sh extension is just the number of odd indexed vertices being incident to both green and orange edges in C_4k. From the previous lemma, we know that this number has the same parity as the total number of green edges in the cycle C_4k. As C_4k^#2e is a cycle on 2k vertices, its total number of edges is an even number, so the total number of edges which cross I should also have the same parity, and so does the difference between the number of edges crossing I in the clockwise direction and in the anticlockwise direction. This completes the proof. We use M_2k to denote the Möbius ladder with 2k steps.
As a graph, it is isomorphic to the graph built on C_4k by adding an edge between each pair of vertices at distance 2k. In the next lemma we show that the Möbius ladder M_2k can replace the role of the odd cycle in Lemma <ref>. It can then be used similarly to build families of graphs with circular chromatic number at least 4. For any far-polar mapping c of M_2k to O_r, the winding number w(C_4k^#2e, c^sh) (or similarly w(C_4k^#2o, c^sh)) is odd. By Lemma <ref>, it is enough to prove that the number of green edges of C_4k in the extension c^D of c is odd. We use the notation C_1,2,3,⋯ t for the oriented cycle with vertices v_1,v_2, ⋯ v_t and directed edges v_iv_i+1 for 1≤ i≤ t, indices taken modulo t. We will view M_2k as a union of 2k 4-cycles; see Figure <ref> for reference. Consider all oriented 4-cycles formed by two consecutive steps of the ladder, C_1 : C_1,2,(2k+2),(2k+1), C_2 : C_2,3,(2k+3),(2k+2), ⋯ C_2k : C_2k,(2k+1),1,4k. By Lemma <ref> we know that each C_i has two green edges in the c^D extension. Therefore, in total, the sum of the numbers of their green edges is an even number as well. To prove our claim, we present a different counting of this number. Consider the oriented 4k-cycle, C_1,2,3,⋯ 4k; half of its edges (from v_1v_2 to v_2kv_2k+1) agree in orientation with the one in the corresponding C_i, while the other half is oriented in the opposite direction. So if we want to get back the same orientation as in the C_i's, we should switch 2k edges. As changing the orientation of a green edge makes it orange and vice versa, we switch the parity of the number of green edges an even number of times. Now we have to consider the steps of the ladder as well. Except for the edge between v_1 and v_2k+1, every other step v_iv_2k+i is oriented as v_2k+iv_i in C_i and as v_iv_2k+i in C_i-1 (for 1<i ≤ 2k). So each of them contributes exactly one green edge (in one of its two orientations) to the total sum. The edge between v_1 and v_2k+1 is oriented as v_2k+1v_1 in both C_1 and C_2k, contributing 0 or 2 to the total sum. Therefore the total contribution of the steps is odd. So, in summary, starting with the oriented C_4k, changing the orientation of an even number of its edges, and then adding the steps of the ladder, we should get back the same number of green edges as we had in total in the C_i's. Since that is an even number, the oriented C_4k must have an odd number of green edges. We can now state our theorem for BQ(ℓ,2k). For any positive integers ℓ and k, we have χ_c(BQ(ℓ,2k))=4. As in the previous cases, we can consider BQ(ℓ,2k) as a graph obtained from the l × 2k cylindrical grid by adding a universal vertex on the first layer and completing the last two layers into a Möbius ladder. Any circular r-coloring with r < 4 would be a far-polar coloring of this graph, which, by Lemma <ref>, would imply an odd winding number for each of the last layers in the c^sh extension; however, by Observation <ref> the first layer has winding number 0, while by Lemma <ref> all layers have the same parity of winding number, a contradiction. Finally, we use this to prove that BM(ℓ, 2k) also has the same circular chromatic number. For any positive integers ℓ and k, we have χ_c(BM(ℓ,2k))=4. As BM(ℓ, 2k) is a signed bipartite graph, 4 is an upper bound on its circular chromatic number. To prove that it is also a lower bound, we consider BQ(ℓ+2,2k) and first switch at 2k vertices of the Möbius ladder built on the first two layers. These are the vertices labelled v_1,1, v_2,1, v_1,2, v_2,2, …, v_1,k, v_2,k. At the end, all diagonal edges of the Möbius ladder are positive.
We consider a homomorphic image of this signed graph by identifying the two ends of each diagonal edge of the Möbius ladder. That is, v_1,1 is identified with v_1,k+1, v_2,1 is identified with v_2,k+1, and so on. Then, we can identify each v_1,i with v_3,i+k as well (i ∈{1,2, … k}). It can then be verified that the image is the signed graph obtained from BM(ℓ, 2k) by adding a positive loop to each of the k vertices u_j and to the next layer. However, a positive loop does not change the circular chromatic number of a signed graph. Thus we have χ_c(BM(ℓ,2k))≥χ_c(BQ(ℓ+2,2k))=4. We note that by identifying the two ends of each positive edge in BQ(ℓ, 2k+1) we get a copy of M_ℓ(C_2k+1) together with some positive loops. Thus, in a similar fashion, one can view Theorem <ref> as a corollary of Theorem <ref>. § CONCLUDING REMARKS The special subclass of M_k(C_2k+1), on 2k^2+k+1 vertices, is conjectured in <cit.> to have the smallest number of vertices among 4-chromatic graphs of odd-girth 2k+1. In <cit.>, this is verified to be the case under the added assumption that every pair of odd cycles shares a vertex. For the general case, a lower bound of (k-1)^2 on the number of vertices of a 4-critical graph of odd girth 2k+1 is given in <cit.>, modifying the method of <cit.>. A natural bipartite analogue of this question is to find the smallest number of vertices of a signed bipartite graph of negative girth 2k whose circular chromatic number is 4. Here we gave two families of such graphs, where the graphs of negative girth 2k have 2k^2-k+1 vertices. The starting point of this work has been a joint work of the second author with Lan Anh Pham and Zhouningxin Wang on the study of C_-4-critical signed graphs (for a definition, see <cit.>). In an unpublished work, they have shown that a C_-4-critical signed graph of negative girth at least 2k must have at least k^2 vertices. Based on the fact that χ_c(C_-4)=8/3, our result in this work implies that BQ(k, 2k-1) is a signed bipartite graph of negative girth 2k which does not map to C_-4. Thus BQ(k, 2k-1) contains a C_-4-critical signed graph. As BQ(k, 2k-1) has 2k^2-k+1 vertices, this implies that the smallest number of vertices of a C_-4-critical signed graph of negative girth 2k is somewhere between k^2 and 2k^2-k+1. Acknowledgment. This work is supported by the following grants and projects: 1. ANR-France project HOSIGRA (ANR-17-CE40-0022). 2. Indo-French Center of Applied Mathematics, project AGRAHO “Applications of graph homomorphisms” (MA/IFCAM/18/39). 3. Math-AmSud project PLANNING. 4. National Research, Development and Innovation Office (NKFIH) grant K–120706 of NKFIH Hungary. 5. WLI grant (SB22231494MAIITM008570) of IIT Madras, India. The second author would also like to thank Lan Anh Pham and Zhouningxin Wang for earlier discussions on this subject.
http://arxiv.org/abs/2307.04504v1
20230710115604
An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization
[ "Guy Kornowski", "Ohad Shamir" ]
math.OC
[ "math.OC", "cs.LG" ]
http://arxiv.org/abs/2307.04118v1
20230709081438
Twotier -- A Layered Analysis of Backbone Members in a Moderate Sized Community Sports Organization
[ "Qingran Wang", "Jia Yu", "Mengjun Ding", "Weiqiang Sun" ]
cs.SI
[ "cs.SI" ]
Twotier - A Layered Analysis of Backbone Members in a Moderate Sized Community Sports Organization. Qingran Wang, Jia Yu, Mengjun Ding, Weiqiang Sun, Senior Member, IEEE. August 12, 2023. Qingran Wang and Jia Yu contributed equally to this paper. We would like to thank all members of the SJTU Health community for their selfless commitments in building a strong community. Backbone members are recognized as essential parts of an organization, yet their role and mechanisms of functioning in networks are not fully understood. In this paper, we propose a new framework called Twotier to analyze the evolution of community sports organizations (CSOs) and the role of backbone members. Tier-one establishes a dynamic user interaction network based on grouping relationships, and weighted k-shell decomposition is used to select backbone members. We perform community detection and capture the evolution of two separate sub-networks: one formed by backbone members and the other formed by other members. In Tier-two, the sub-networks are abstracted, revealing a core-periphery structure in the organization where backbone members serve as bridges connecting all parts of the network. Our findings suggest that relying on backbone members can keep newcomers actively involved in rewarding activities, while non-rewarding activities solidify relations between backbone members. Index terms: community sports organizations (CSOs), backbone, two-tier analysis, core-periphery structure. § INTRODUCTION Community sports organizations (CSOs) are non-profit and voluntary organizations whose primary responsibility is to provide sports services to their members, often with a low threshold to entry <cit.>. Despite the huge physical and psychological benefits CSOs can bring to their community members, the development of CSOs is often constrained by their voluntary nature and the limited resources available to them <cit.>. It is thus important to understand the development principles of CSOs so that the limited resources may be put to the most effective use. It has long been intuitively felt that there is usually a group of highly active and influential people who actuate and drive the development of a network. In product marketing based on human interaction networks, marketers take the nodes occupying structural-hole positions in the network as the influential initial nodes, in order to achieve the greatest influence in the network <cit.>. In online social networks such as Weibo, Twitter, etc., users with a large number of followers are considered influential users, and the topics they publish tend to generate great network effects <cit.>. Similarly, in CSOs, there are also some influential users who have “the right and the ability to influence in an indirect or intangible way” <cit.>, and their presence and activities have significantly higher effects on the operation and development of the organization. Backbone members are important nodes in a network that are well-connected to other members and play a crucial role in facilitating communication and information flow. They are defined as members who are relatively more important, active, and have a greater number of friends compared to other members.
The identification and analysis of backbone members can provide insights into the structure and dynamics of the network, which is valuable for understanding its behavior and performance. The problem of vital node identification has attracted increasing attention in different fields <cit.>. Typically, researchers build user social networks based on participant interaction data collected over a period of time and then work to identify key nodes in the network. In this scenario, various centrality measures <cit.> such as degree centrality, closeness centrality, and betweenness centrality can be used to indicate the importance of nodes. With the rich set of metrics introduced in <cit.> we can also identify important nodes in CSOs. However, little attention has been paid to the role that backbone members play, nor the mechanisms by which they function in the network. At the same time, studies are often focused on the group of backbone members themselves, and interactions between the backbone and non-backbone members are largely neglected. In addition to the internal forces generated by the backbone members, external interventions such as rewards and penalties may also be crucial for network development <cit.>. Targeting interventions on leaders have been shown to be more effective than applying them to random individuals for community health campaigns <cit.>. Understanding the mechanisms by which internal forces work can help us better implement external interventions <cit.>. And, if these two forces work together, they can bring even greater developmental benefits. In this research, with longitudinal data recorded, we focus on the development of CSOs, with a particular emphasis on backbone members, defined as the top X% of influential members based on coreness centrality. We introduce Twotier, a new framework for analyzing dynamic networks, which allows us to study both the evolutionary characteristics of components and their connections. Our main finding is that backbone members play a critical role as the trunk of the network, while others act as leaves and are regularly updated. Rewarding activities and backbone members are essential for organization expansion, while non-rewarding activities solidify the backbone group. The main contributions of this work are three-fold. Firstly, we introduce Twotier, a novel mathematical framework for analyzing dynamic networks. Secondly, we demonstrate its applicability in a moderate-sized CSO. Finally, using our framework and numerical results, we provide practitioners with tailored approaches to improve outcomes for different groups within their organization. The remainder of this paper is organized as follows. In Section II, we introduce Twotier, the main method for network analysis in this work, and explain its specific procedure. In Section III, we present an overview of the dataset and the experimental results obtained in each tier, including the role of backbone members in network development and their performance under external factors. Section IV provides an overview of related work. Finally, in Section V, we conclude the paper. § TWOTIER: A LAYERED ANALYSIS ON CSOS This section introduces the Twotier framework, which analyzes the role of backbone members in a moderate-sized community sports organization. In Tier-one, we build a dynamic network based on team-wise links between members and classify them into two groups: backbone members and general members, using the dynamic W-KS algorithm to calculate their influence. 
Community detection is performed to transition from individuals to communities. In Tier-two, we analyze the evolutionary regularities of different types of communities by abstracting the dynamic network into the network of communities extracted in Tier-one. The framework is illustrated in Fig. <ref>. To explore the influence of different types of activities on organizational development, we separate the network into two sub-networks: one formed under rewarding activities and the other under non-rewarding activities. Table <ref> summarizes the symbols used in this paper. §.§ Tier-one Analysis §.§.§ Evaluating Social Influence by Dynamic W-KS In a CSO with teaming-based relationships, user interactions can change over time, resulting in a dynamic network. Therefore, it is not advisable to apply vital node identification approaches designed for static networks. For example, it may be challenging to determine the importance of a node that is active during some time periods but inactive in others. To address this issue, we extend the weighted k-shell decomposition method to be applied on dynamic networks as a series of static networks. Considering the duration of activities and the fact that the study is conducted under a six-year time span, we take a three-month time window to build a dynamic network containing 24 consecutive equal-length time frames, in each of which the network is considered non-evolving. The validity of this partitioning approach has been demonstrated in our previous work <cit.>. In this case, the network is expressed as G={G_t=(V_t, E_t), for all t in [0, T]}, where V_t is the node set and E_t is the set of edges. The weights of the edges are assigned with the number of links that connect the same node pair within a time frame. Generally speaking, in a static network, the hubs are the key players when networks have a broad degree distribution. In addition, the topology of the network is also an important measure <cit.>. With the weighted k-shell decomposition method (W-KS) proposed in <cit.>, we shall be able to rank the nodes according to both the degree and position of the node in each static undirected weighted network. Define the weighted degree D_t^i of node i in time frame t as D^i_t=[√(d^i_t·∑_j∈n_t^iw^ij_t)], the combination of degree d^i_t and the sum of all its link weights ∑_jw^ij_t, rounded to the nearest integer. In W-KS, all nodes with D not greater than 1 are removed first. Then, the D of other nodes is recalculated in the trimmed network and the pruning process is repeated until no nodes with D less than or equal to 1 are left in the network. The pruned nodes are grouped in the first shell with k=1. Then the next k-shell with k=2 and further higher k-shells are separated from the remaining network iteratively until no nodes remain. Finally each node has a value k, with larger k indicating greater node influence. The influence of node i in the t-th time frame, I_t^i, can be represented by I_t^i= {[ k_t^i, i in the t^th network; 0, i not in the t^th network ] . . The influence of node i in the dynamic network is defined as the sum of its influence in each time frame I^i=∑_t=0^TI_t^i, where I_t^i is derived from the methods of static networks, i.e., in our case, weighted k-shell decomposition. In Fig. <ref>, we show a particular example in which fig_kshell1 W-KS is applied to a static (but accumulated) network, and fig_kshell2 is extended and then applied to a dynamic network. 
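To make the computation behind this example concrete, the following Python sketch (our own illustration using networkx; function and variable names are ours, and the additional within-shell ordering by degree mentioned above is omitted) implements the weighted k-shell pruning for a single time frame and accumulates the shell indices over frames to obtain the influence I^i.

import math
import networkx as nx

def weighted_kshell(G):
    # shell index for each node of one (static, weighted) time-frame network,
    # using D = round(sqrt(degree * sum of incident edge weights))
    G = G.copy()
    shells, k = {}, 0
    D = lambda H, v: round(math.sqrt(H.degree(v) * H.degree(v, weight="weight")))
    while G.number_of_nodes() > 0:
        k += 1
        while True:
            to_prune = [v for v in G.nodes if D(G, v) <= k]
            if not to_prune:
                break
            for v in to_prune:
                shells[v] = k
            G.remove_nodes_from(to_prune)
    return shells

def dynamic_influence(frames):
    # frames: list of nx.Graph objects, one per three-month window;
    # a node absent from a frame simply contributes 0 for that frame
    influence = {}
    for G_t in frames:
        for v, k in weighted_kshell(G_t).items():
            influence[v] = influence.get(v, 0) + k
    return influence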
It can be seen that the dynamic W-KS can better capture the temporal nature of the CSO and can thus characterize node influences more accurately. §.§.§ Community structure detection Backbone members (BMs) are identified as the top X% of influential members, measured by any metric. We use the dynamic W-KS algorithm to calculate node influence, specifically coreness centrality, to select BMs. All other members are classified as general members (GMs). In turn, the network can be divided into three components: a) The sub-network formed by BMs and the interactions between them (BSN); b) The sub-network formed by GMs and the interactions between them (GSN); and c) The links that connect BSN and GSN. We use the quality metric modularity Q defined in <cit.> to explore the community structure in the two sub-networks of the CSO respectively: Q_t=12m∑_i,j[w^ij_t-e_t^ie_t^j2m]δ(i, j), where e^i_t=∑_jw^ij_t is the sum of the weights of the edges attached to node i, and m=12∑_ijw_t^ij is the sum of the weights of all edges in G_t. The δ-function is 1 if node i and j in the same community, otherwise δ=0. A Q value higher than 0.3 suggests that distinct community structures do exist in the network <cit.>. §.§ Tier-two Analysis §.§.§ Community evolution Once the presence of community structure is confirmed, we can then proceed with the analysis on community evolution. To capture intermittent participation that is often seen in CSOs, we extend the community evolution events used in our previous study <cit.>, by adding Suspend and 𝑅𝑒 𝑒𝑚𝑒𝑟𝑔𝑒. A community is said to be suspended, if it appears in a time frame, but disappears for some time, and then re-emerges. A community is said to be re-emerging, if it did not exist in the previous time frame, but has appeared at least once in the past time frames. According to their effects on the community structure, the evolution events other than Continue may be roughly classified into two categories: a) Events that bring significant structural changes to the network (V - Violent), mainly through adding/removing nodes from the network - Form, Dissolve, Suspend and 𝑅𝑒 𝑒𝑚𝑒𝑟𝑔𝑒 and b) Events that cause marginal structural changes to the network (S - Stable), mainly through adding/removing links between existing nodes - Grow, Merge, Shrink and Split. Communities that undergo S-type evolution are relatively closed groups with close internal interactions but limited communication with other external communities. However, V-type evolution brings about more diverse changes in the network structure, which promotes communication among different groups and plays an important role in the stability and development of the network. The information about community evolution events is presented in Table <ref> in detail. §.§.§ Community abstraction To explore the interactions of different communities on a horizontal level, we abstract the network of communities by hiding the details of the original connections between individuals within communities. The network described by Eq.(<ref>) is similar to Eq. (<ref>), however, the nodes are now communities detected in Tier-one and edges are connections between communities. G^com={G_t^com=(V^com_t, E^com_t), for all t in [0, T]}, There are two kinds of nodes in the network: a) communities formed by backbone members (BCs); and b) communities formed by general members (GCs). The connections in the network are classified into three categories: a) edges between BCs (BBEs); b) edges between GCs (GGEs); and c) edges between BCs and GCs (BGEs). 
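As an illustration of this abstraction step (a sketch with our own naming conventions, not the authors' code), one frame of the member-level network can be collapsed into its community-level counterpart, with the three edge types tagged, as follows. Since communities are detected separately inside the backbone and general sub-networks, each community is homogeneous in member type.

from collections import defaultdict
import networkx as nx

def abstract_frame(G_t, node_community, backbone):
    # node_community: {member: community id found in this frame}; backbone: set of backbone members
    # (assumes every endpoint of an edge in G_t has been assigned to a community)
    members = defaultdict(list)
    for v, c in node_community.items():
        members[c].append(v)

    A = nx.Graph()
    kind = {c: ("BC" if vs[0] in backbone else "GC") for c, vs in members.items()}
    for c, vs in members.items():
        A.add_node(c, kind=kind[c], size=len(vs))

    for u, v, data in G_t.edges(data=True):
        cu, cv = node_community[u], node_community[v]
        if cu == cv:                      # intra-community links are hidden
            continue
        w = data.get("weight", 1)
        if A.has_edge(cu, cv):
            A[cu][cv]["weight"] += w
        else:
            tag = {("BC", "BC"): "BBE", ("GC", "GC"): "GGE"}.get((kind[cu], kind[cv]), "BGE")
            A.add_edge(cu, cv, weight=w, kind=tag)
    return A

Structural measures of the abstracted graph can then be read off directly, e.g. with nx.density(A) and nx.betweenness_centrality(A).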
We characterize the structure of the network using network density and betweeness centrality, where network density is denoted as Density(G^com)=2L^com/N^com(N^com-1), where N^com denotes the number of communities in the network and L^com denotes the number of connected edges between communities in the network. The betweeness centrality is formulated as BC^com_z=∑_m,n ∈ V^comσ(m, n|z)/σ(m, n), where m and n denote any community. The network with core-periphery structure has a high backbone node centrality and a high density of BSN. However, it is difficult to form a connected network by connections between general nodes only. § EXPERIMENTS AND RESULTS In this section, we apply the proposed Twotier framework to a sports organization and obtain information about the evolution of the organization and its structural characteristics. The experimental steps are illustrated in Fig. <ref>. The data that we use are collected from a non-profit sports organization, through an online platform serving a community with more than 10,000 members. Users can organize communal activities, most of which require users to participate in teams with less than 10 members. Necessary tools are provided for team members to communicate with each other. The platform went online in May 2015, and by June 2021, 790 activities have been held, with 4879 different individual participants in 6426 teams. The activities can be rewarding, denoted as Type-A, with a total of 119 activities, or without any reward, with a total of 671, denoted as Type-B. According to the teaming relations in the entire time span, each different user is represented with a node, and team-wise relations are represented by undirected links with weight of 1. It is clear that a link is in fact representing the quadruple of ⟨ node pair, id of team, id of the activity, time of the 𝑎𝑐𝑡 . .ivity, type of the activity⟩. In the entire time span, there are 73813 such links. §.§ Influence of Nodes by dynamic weighted k-shell decomposition (dynamic W-KS) While traditional weighted k-shell decomposition (W-KS) divides all nodes into 87 shells, dynamic W-KS divides all nodes into 457 shells, allowing for a more detailed division with more prominent gaps between nodes. Furthermore, for nodes within the same layer, we sort them based on their degree. To verify that the extended node influence determination approach is superior, we compare the coverage when the top X% members are selected as backbone members. The coverage reflects the range of influence in the network. It is defined as the proportion of the number of selected kernel members, and their neighbors, to the size of the given network. We compare in two network scenarios, one ignoring temporal properties and aggregating all members who have ever appeared in the organization to form an aggregation network; the other creates a time-frame network at three-month intervals. As shown in Fig. <ref>, with selected proportion X increases from 1 to 50, the coverage under both methods increases but dynamic W-KS generally gives higher results than W-KS. Considering degree, the position in network topology and activeness, the dynamic W-KS can give a more comprehensive picture of the importance of a node and is therefore chosen in our further analysis. In the following parts, we choose the cases X = 5, X=10, and X=20 to carry out the experiment respectively. 
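For reference, the coverage measure used in this comparison takes only a few lines (again a sketch with our own function names; for the time-frame scenario the value would, for example, be averaged over the frames):

def coverage(G, influence, X):
    # fraction of the network consisting of the top-X% most influential members plus their neighbours
    ranked = sorted(G.nodes, key=lambda v: influence.get(v, 0), reverse=True)
    top = set(ranked[: max(1, int(len(ranked) * X / 100))])
    reached = set(top)
    for v in top:
        reached.update(G.neighbors(v))
    return len(reached) / G.number_of_nodes()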
After classifying the network members, we find that the BMs have a greater degree and are in a more important position in the network topology than GMs, as the results in Table <ref> show. In addition, they are involved in a larger number of activities and active in the network for a longer period of time than GMs. On average, the participation of BMs in non-rewarding activities is higher than in rewarding activities, which is the opposite of GMs. The network structure formed by each of these two groups is also different. §.§ The Evolution of communities In each time frame, the BSN has a giant component, occasionally there are also some small groups independent of the huge part, while the GSN is consist of many small groups that are separate from each other. The average Q of the two sub-networks across all time frames is higher than 0.3, suggesting that they both have a distinct community structure. Further, we present the evolutionary relationships of the detected communities after dividing the 9 evolution events into three categories. Fig. <ref> illustrates the percentage of evolution events for BSN and GSN, with rewarding activities (type-A), with non-rewarding activities (type-B), and with both, respectively. It can be observed that in GSN under type-B activities, form and dissolve among these 9 evolution events are always the major ones. However, this is very different in GSN under type-A activities with diverse and abundant events. With no stimulus, the GMs are less willing to participate, resulting in a large number of dropouts after attending the activities. This suggests that type-B activities are suitable to be held regularly to consolidate the connection between BMs rather than to absorb new members. It can also be seen that the BSN has more S-type community evolution events (light green area), while there is a higher number of V-type community evolution events (light red area) in GSN. The mobility within the backbone member groups is stronger than that of general members. This indicates that the backbone groups play the role of the trunk, while other members renew at a faster rate and act like leaves. §.§ The Structure of Abstracted Network We present the graph of the abstracted network in some time frames while X=10 in Fig. <ref>, where the red circles represent BCs and the green circles are GCs. The red, green and brown edges are BBEs, GGEs and BGEs, respectively. The size of the circles reflects the number of members in the community, and the thickness of the edges indicates the intensity of interaction between members of communities. It is interesting that the network, which is star-shaped, contains a dense cohesive core and a sparsely connected periphery. This observation is further verified in Fig. <ref>,  <ref> and  <ref>. It can be seen in Fig. <ref>figcomnum5 figcomnum10 and figcomnum20 that the number of BCs is stable, while the number of GCs has a larger variation and follows a similar trend as the network size changes. The GCs out-numbers BCs in most time frames, but the former is far more sparse than the latter. Fig. <ref>figedge5 figedge10 and figedge20 show the proportion of the weights of the three types of edges to the total weight of all edges. The edges are mostly between BCs or between BCs and GCs. Fig. <ref> presents the network density of the two sub-networks, containing either BCs and BBEs or GCs and GGEs. We can see that the BCs-formed sub-network has a high density, while the density of the sub-network formed by GCs is always low. In Fig. 
<ref>, the betweenness centrality of backbone members and general members are displayed respectively. By comparison, it can be found that backbone members have higher betweenness centrality, playing an important role as mediators, while GCs are isolated from each other and must be bridged by BCs. This also suggests that promoting communications between groups at the periphery may be effective to increase network connectivity. A further look into the structure of the networks under type-A and type-B activities shows that the two networks also exhibit a clear core-periphery structure. With type-A activities, the number of connections between BCs and GCs (BGEs) significantly out-numbers that are among BCs (BBEs) or GCs (GGEs) (Fig. <ref>fig2edgeA), while with type-B activities, the number of BBEs out-numbers that of BGEs (Fig. <ref>fig2edgeB). This indicates that while BMs tend to be active in both types of activities, they may behave very differently. In type-A, i.e., rewarding activities, BMs are more likely to connect with GMs, but in type-B activities, BMs tend to connect with other BMs. This on the first hand validates the trunk-like function that the BMs play. On the other hand, this suggests that even though rewarding activities are generally more popular than non-rewarding ones and participation is much higher, the backbone members are still important convening points for the activities, and are thus very important to the development of the CSO. It can be seen in Fig. <ref>fig3densityfig3densityAfig3densityB that in both types of activities, connections between GCs are very rare, again validating the leaf-like behavior of GCs and GMs. It also can be observed from Fig. <ref>fig3densityB that non-rewarding activities provide an important vehicle for backbone members to develop and consolidate very close relations, and should be regarded as an important tool in the entire CSO development toolset. § RELATED WORK Research in the domain of “who are the most important nodes in the network" has considered various criteria to define influential participants. One stream of relevant research is node centrality <cit.>, such as degree centrality, closeness centrality and so on, which attempts to quantify the structural importance of actors in a network. Considering two metrics, the degree of the node and its position in the network topology, Kitsak et al. <cit.> designed a k-shell decomposition method to divide the network nodes into different layers, i.e., to determine the importance of the nodes hierarchically at the group level. Subsequently, authors in <cit.> introduced a generalized method for calculating the k-shell structure of weighted networks. Our work goes a step further to improve the existing weighted k-shell method by taking the temporal nature of the network into account. Taking influential nodes as research objects, researchers have drawn many interesting conclusions. Kerlund <cit.> shows that influential users have a narrow focus in terms of the content they post and how they profile themselves and tend to produce more original content than other users. Zhao et al. <cit.> find that for the entertainment news, the influential spreaders may appear at the later stage of spreading. Borgatti et al. present an intuitive description of the core-periphery structure in <cit.>, that is, the network contains a dense, cohesive core and a sparse, unconnected periphery, and then quantified the core-periphery structure using the quadratic assignment procedure. 
Authors of <cit.> performed a detailed analysis on the key topological properties of the friendship graph for three different user categories of Leaders, Followers and Neutrals and yielded interesting insights. Wang et al. <cit.> divided the nodes in the network into two categories: central nodes and ordinary nodes, further they found that the information dissemination mode can be summarized into three specific patterns. By decomposing the complex network structure into different parts and analyzing them systematically, it is possible to grasp the development pattern of the network in more detail. Social Network Analysis (SNA)<cit.>, focusing on understanding the nature and consequences of relations between individuals or groups, has been widely used to study social networking platforms. Newman proposed that social networks can be naturally divided into communities or modules in <cit.>. Blondel et al. <cit.> proposed a method known as the Louvain algorithm for community detection, with the core idea of optimizing the quality function known as modularity. Frequent changes in the activity and communication patterns of network members result in the associated social and communication network being subject to constant evolution. For a deeper understanding of network development, Palla et at. quantified network evolution from the perspective of social group evolution in <cit.>. The evolution events are specifically classified into seven categories by group evolution discovery based on the changes of members in the communities in <cit.>. Based on this, <cit.> introduced an improved GED algorithm, describing network evolution in the context of CSOs. These works, from the perspective of community evolution in networks, have inspired the analysis of dynamic network structures on a vertical timeline. § CONCLUSIONS We propose a new framework called Twotier for analyzing the network structure and apply it to probe a non-profit sports organization. Firstly we establish a time-evolving network based on the team-wise relationships of participants. Taking the degree and topological position of nodes as well as the temporal nature of the CSO into account, we extend the weighted k-shell decomposition to determine the influence of the nodes and next classify the participants into two categories: backbone members and general members. Further the network of communities is abstracted by hiding the connections within communities. We not only analyze the development of the organization from the perspective of community evolution on the vertical timeline, but also pay attention to the connections between different groups in the horizontal time frames. In addition we have discussed the effect of the external stimulus on both groups. Our findings are summarized as follows. The backbone members of the CSO are not only characterized by their high degree and closeness centrality, but also by the fact that the average number of participating activities and active time frames of them are much higher than the general members. On average, the participation of backbone members in non-rewarding activities is higher than in rewarding activities, being the opposite of the general members. Through Tier-one analysis, we reveal that the two sub-networks, containing either backbone or general members, both have a clear community structure. The groups of backbone members play the role of the trunk, while the general members renew frequently and act like leaves. 
Through Tier-two analysis, we identify a core-periphery structure in the organization. Backbone members serve as a critical link between different groups within an organization, and organizational leaders should pay special attention to their role in managing the organization effectively. However, we also note the potential negative impact that the scarcity of ties between communities of general members has on membership stability and organizational development. Therefore, it is necessary to implement measures to strengthen interaction between these groups and break down isolation. More importantly, we observe that external stimuli affect the organization in different ways. Even though rewarding activities are generally more popular than non-rewarding ones and participation is much higher, the backbone members are still important convening points for the activities, and are thus very important to the development of the CSO. The non-rewarding activities provide an important vehicle for backbone members to develop and consolidate very close relations and should be regarded as an important tool in the entire CSO development toolset. These insights can help practitioners develop tailored approaches for different groups within their organization to ensure better outcomes.
http://arxiv.org/abs/2307.05598v1
20230710194032
Shock cooling emission from explosions of red super-giants: II. An analytic model of deviations from blackbody emission
[ "Jonathan Morag", "Ido Irani", "Nir Sapir", "Eli Waxman" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR" ]
Light emission in the first hours and days following core-collapse supernovae is dominated by the escape of photons from the expanding shock-heated envelope. In a preceding paper, Paper I, we provided a simple analytic description of the time dependent luminosity, L, and color temperature, T_ col, for explosions of red supergiants with convective polytropic envelopes and in the absence of significant circum-stellar medium. It is valid up to H recombination (T≈0.7 eV). The analytic description was calibrated against the results of numerical calculations, approximating radiation transport by diffusion with a "gray" (frequency independent) opacity. Here we present the results of a large set of 1-dimensional numeric multi-group (frequency dependent) photon diffusion calculations, for a wide range of progenitor parameters (mass, radius, core/envelope mass and radius ratios, metallicity) and explosion energies, using opacity tables that we have constructed for this purpose (and are publicly available) including the contributions of bound-bound and bound-free transitions. We provide an analytic description of the small, ≃10% deviations of the spectrum from blackbody at low frequencies, hν< 3T_ col, and an improved (over Paper I) analytic description of the strong suppression of the flux due to line absorption at high frequencies, hν> 3T_ col. We show that the effects of deviations from ionization and excitation LTE and of `expansion opacity' corrections are small, and that the effect of deviations from a polytropic density distribution is also small. Our analytic results are a useful tool for inferring progenitor properties, explosion velocity, and also relative extinction based on early multi-band shock cooling observations of supernovae. radiation: dynamics – shock waves – supernovae: general § INTRODUCTION In core collapse supernovae (SNe) explosions, a radiation mediated shock (RMS) traverses outwards through the stellar progenitor, heating and expelling material as it passes. If no significant circumstellar material (CSM) is present around the star, arrival of the shock at the surface produces a hard UV/X-ray ∼10^45 erg s^-1 `shock-breakout' emission, lasting from tens of minutes to an hour. The breakout pulse is then followed in the coming hours and days by thermal UV/optical `shock-cooling' emission, caused by diffusion of photons out of the shock-heated stellar ejecta. Typical luminosities and temperatures during shock-cooling are of the order of 10^42-10^44 erg s^-1 and 1-10 eV, respectively. As the photons diffuse out, deeper parts of the ejecta are gradually exposed over time <cit.>. In order to constrain the properties of the progenitor star, it is helpful to have high cadence multi-band observations in the first hours of shock-cooling <cit.>. Among these measurements, ultraviolet observations are especially important, as they are closer to the thermal emission peak, and can be used to determine the emission temperature and the UV extinction self-consistently <cit.>. Combined with an accurate theoretical model, these measurements can be used to reproduce the progenitor and explosion parameters, including radius, surface composition, explosion energy per unit mass, and the extinction.
Analytic models are especially important for solving the "inverse problem" of inferring system parameters and uncertainties from the observed spectral energy distribution (SED). Emission during shock-cooling is amenable to modeling in the case of `envelope breakout' (i.e.in the absence of CSM with signifcant optical depth), largely because the system is near local thermal equilibrium (LTE) at this time. However, catching supernovae within the first hour presents a practical challenge, and few such observations have been achieved <cit.>. Existing and upcoming observatories, such as the Zwicky Transient Factory (ZTF) <cit.>, the upcoming Vera Rubin Observatory <cit.>, and the expected launch of the wide-field UV space telescope ULTRASAT <cit.> will greatly increase the quantity and quality of early measurements, enabling a systematic study. The numeric calculations presented here provide a systematic analysis of the deviation of the spectra of the emitted radiation from blackbody, which enables an improvement of the accuracy of the analytic models. We adopt a `multigroup' (MG) numeric approach, where radiative transfer is solved under the diffusion approximation separately for different photons frequencies, including coupling to hydrodynamics. Several codes using different approximations and schemes have been used to address the shock breakout and cooling problem: STELLA <cit.> is a 1 dimensional code that solves for the angle-dependent intensity with a variable Eddington factor, employing a ray-tracing scheme. Their opacity includes free-free, bound-free and atomic lines from <cit.>, and employs the Sobolev approximation, based on <cit.>. STELLA should contain the required physical components for modeling the SED of shock-cooling emission, and this has been done in several works <cit.>. <cit.> numerically solved the planar shock breakout problem under the diffusion approximation and incorporating free-free opacity. They also include inelastic Compton scattering, which is important for shock breakouts at high velocities. Other hydrodynamically coupled multigroup codes are presented in the literature <cit.>, though they have not been used or formulated to solve for shock-cooling emission. Many other codes focus computational efforts on line modeling and are specialized to describe emission at later phases of the supernova, when the ejecta is freely-coasting. The ARTIS <cit.> and SEDONA <cit.> codes both solve frequency-dependent, time-dependent radiative transfer using Monte Carlo. ARTIS is primarily used to solve type Ia SN problems, while SEDONA has also been used to calculate core-collapse supernova emission <cit.>. The latter contains a module for coupling hydrodynamics to radiation in a Lagrangian sense, though to our knowledge this has not been used in the context of the first day of shock-cooling emission. Other multigroup codes solve for radiative transfer under a `steady-state' approximation <cit.>. It is not guaranteed that the steady-state approximation is always appropriate at earliest times, though our results suggest that it may be reasonable for modeling if one employs realistic density and temperature profiles. In a preceding paper, <cit.>, we modeled shock-cooling emission following breakout from a spherical envelope, assuming local thermal equilibrium (LTE). 
Using earlier analytic results <cit.>, we derived an analytic description of the bolometric luminosity L and color temperature T_ col, that provides a good approximation (to 10% in L and 5% in T_ col) of the results of our 'gray' diffusion simulations for the emission over ∼1 hour to ∼1 week after shock breakout for a wide range of explosion energies and progenitor parameters (radii, core and envelope masses, metallicities). The analytic expression is advantageous relative to those of previous analytic works <cit.> thanks to its calibration using a large set of numeric results incorporating a realistic opacity with free-free, bound-free, and bound-bound components, which have an important effect on emission. We also used a set of MG diffusion simulations to show that the spectrum is well described by a Planck spectrum with an effective color temperature, except at UV frequencies where the flux may be significantly suppressed due to line absorption. In this work we relax the assumption that the photons are at LTE and account for frequency-dependent emission, absorption and transport both numerically and analytically. We produce hydrodynamically coupled 1-dimensional multigroup simulations for a wide range of explosion energies and progenitor parameters that span the parameter range of Paper I - explosion energies in the range E=10^50-10^52 erg, progenitor masses and radii M=2-40M_⊙, R=3×10^12-2×10^14 cm, envelope to core mass ratios M_ env/M_ c=10-0.3, and metallicities Z=0.1Z_⊙-Z_⊙. Our simulations are based on the work of <cit.>. Their code allows inelastic Compton scattering, which we do not incorporate in the bulk of the simulations, due to its negligible effect during shock-cooling. We include a frequency-dependent opacity κ_ν, for which we use both the publicly available TOPS code <cit.> and one that we developed ourselves and is now available to the community <cit.>. Relative to TOPS, our code is largely open source, and can produce opacity tables at arbitrary density, temperature, and frequency resolution. It is based on experimentally verified atomic line lists <cit.>. We note that our MG simulations cannot be used to make predictions for the exact spectrum including lines, as they have finite frequency resolution and do not include expansion opacity and deviations of excitation and ionization states from LTE (see  <ref>). However, they are very useful for predicting the coarser-grained, line averaged SED in shock-cooling following envelope breakout. This paper is organized as follows. In  <ref> we define our notation and summarize the analytic results of , , , and , that we use in this paper. In  <ref> we describe our numeric calculations and present convergence and code verification tests. In  <ref> we describe our opacity tables, as well as convergence and verification tests for these. In  <ref> we derive an analytic description of the deviations of shock cooling emission spectra from blackbody, and in  <ref> we compare the analytic description to simulation results. In sections  <ref> and  <ref>, we address the sensitivity of our results to "expansion opacity" corrections and to deviations of plasma ionization and excitation from those of LTE. A comparison to earlier work, including to STELLA radiation transport calculations for several non-polytropic profiles obtained using MESA stellar evolution calculations <cit.>, is given in  <ref>. 
The agreement of our results with these earlier calculations provides additional support to our code's validity, and to the conclusion of the detailed analysis of SW17, who demonstrated that the shock cooling emission is not sensitive to deviations of the density profile from a polytropic one. Our results are summarized and discussed in  <ref>. § RMS BREAKOUT AND SHOCK-COOLING EMISSION- SUMMARY OF EARLIER ANALYTIC RESULTS USED IN THIS PAPER RMS breakout and shock-cooling is extensively discussed in the literature and briefly summarized in . At radii r close to the stellar radius R (δ≡ (R - r)/R≪ 1), the initial density of a polytropic envelope approaches a power-law, ρ_0 = f_ρρ̅δ^n. Here ρ̅≡ M/(4π R^3/3) is the average pre-explosion density of the ejecta (exculding the mass of a possible remnant), and n=3/2 for convective RSG envelopes. f_ρ is a numerical factor, of order unity for convective envelopes, that depends on the inner envelope structure <cit.>. The predicted breakout and cooling emission are nearly independent of f_ρ. As the shock approaches the edge of the star, it accelerates down the steep density profile and the flow approaches the self-similar solutions of <cit.>. The shock velocity diverges in this regime as v_ sh = v_ s∗δ^-β_1 n, with β_1=0.19, and with v_ s∗ a constant defined by eq. (<ref>). Based on numerical calculations, <cit.> find v_ s∗≈ 1.05 f_ρ^-β_1 v_∗, v_∗≡√(E/M), where M is the mass of the ejecta, E is the energy deposited in the ejecta, and v_∗ is its characteristic expansion velocity. This approximation holds to better than 10% for M_ env/M_ c<1/3, and overestimates v_ s∗ by approx. 20% for M_ env/M_ c=0.1 . Breakout takes place when the scattering optical depth of the plasma layer lying ahead of the shock equals τ_ es=c/ v_ sh. We denote the shock velocity v_ sh and pre-shock envelope density ρ_0 at this point in terms of breakout parameters ρ_ bo and v_ bo, respectively. We may rewrite eqs. (<ref>) and (<ref>) as ρ_0=ρ_ bo ( v_ boτ/c)^n/(1+n), v_ sh = v_ bo (v_ boτ/c)^-β_1 n/(1+n). The location at which breakout "occurs", i.e. where τ=c/ v_ sh, is given by δ_ bo = (n+1)c /κρ_ bo v_ bo R, where κ is the opacity. For RSGs, ρ_ bo and v_ bo are approximately related to the progenitor parameters and explosion energy by <cit.> ρ_ bo = 1.16 × 10^-9 M_0^0.32 v_∗,8.5^-0.68 R_13^-1.64κ_0.34^-0.68 f_ρ^0.45 g cm^-3, v_ bo/v_∗ = 3.31 M_0^0.13 v_∗, 8.5^0.13 R_13^-0.26κ_0.34^0.13 f_ρ^-0.09. Here, R= 10^13R_13 cm, κ=0.34 κ_0.34 cm^2 g^-1, v_∗=v_∗,8.5 10^8.5 cm s^-1, and M=1 M_0 M_⊙. The duration over which the breakout pulse is emitted from the star is approximately given by the shock crossing time of the breakout layer, δ_ boR/ v_ bo =(n+1)c /κρ_ bo v_ bo^2= (n+1)t_ bo=74.9 ρ_ bo,-9^-1κ_0.34^-1 v_ bo,9^-2 s, where ρ_ bo=10^-9ρ_ bo,-9 g cm^-3 and v_ bo= 10^9 v_ bo,9 cm s^-1. The observed pulse duration may be longer than this intrinsic duration due to light travel time effects, which spread the pulse over R/c. δ_ bo is given as a function of progenitor parameters and explosion energy as δ_ bo=0.02 R_13^0.90 (f_ρ M_0 v_ s*,8.5 κ_0.34)^-0.45, where v_ s*=v_ s*,8.5 10^8.5. For later use, we provide here the density profile during the spherical phase, given by eq. (9) of . We recast this equation in terms of r and t using eqs. (3), (4) and (8), and eq. (<ref>) from this paper, ρ(r,t) = 1.69 × 10^-11 (f_ρ M_0)^0.27 v_s*,8.5^8.73 r_14^-11.73 t_ d^8.73 g cm^-3, where the radial coordinate is r=r_14 10^14 cm. Alternatively, using eq. 
<ref>, ρ(r,t) = 1.82 × 10^-11 R_13^2 v_ bo,9^7.73κ_0.34^-1 r_14^-11.73 t_ d^8.73 g cm^-3. Similarly, assuming a self-similar diffusion profile <cit.>, we have for the temperature: T(r,t) = 4.83 R_13^1/4 (f_ρM_0)^0.27 v_ s*,8.5^2.66κ_0.34^0.02 t_ d^2.14 r_14^-3.18 eV, T(r,t) = 5.02 R_13^0.71 v_ bo,9^2.31ρ_ bo,9^0.08κ_0.34^1/3 t_ d^2.14 r_14^-3.18 eV. In , we described shock cooling emission by interpolating between the exact planar phase solution () valid at early times (hours), and the later approximate spherical phase solution (). The combined bolometric luminosity L and emission (color) temperature T_ col are given by L/L_ br=t̃^-4/3+t̃^-0.172× Aexp(-[at/t_ tr]^α), T_ col/T_ col,br=min[0.97 t̃^-1/3,t̃^-0.45]. Assuming a blackbody spectral distribution, the emitted luminosity is then given by, L_ BB=L×π B_ν(T_ col)/σ T_ col^4. Here {A,a,α}={0.9,2,0.5} , and t̃≡ t / t_ br. t_ tr is roughly the time at which the photons will be able to diffuse out of the envelope in dynamical time. The br (break) subscript marks the values at the transition between the planar and spherical phase. They are given in terms of the model parameters v_ s* ,f_ρM, and R as t_ br= 0.86 R_13^1.26 v_ s*,8.5^-1.13 (f_ρM_0κ_0.34)^-0.13 hrs, L_ br=3.69×10^42 R_13^0.78 v_ s*,8.5^2.11 (f_ρM_0)^0.11κ_0.34^-0.89 erg s^-1, T_ col,br= 8.19 R_13^-0.32 v_ s*,8.5^0.58 (f_ρM_0)^0.03κ_0.34^-0.22 eV. M_0 denotes mass in units of solar mass. Both the break values and t_ tr can be directly deduced from observations. Eqs. (<ref>)-(<ref>) are valid at times 3R/c < t < min[ t_0.7 eV , t_ tr/a ], where 3 R / c = 0.67 t_ br,3^-0.1 L_br,42.5^0.55 T_br,5^-2.21 hrs, t_0.7 eV = 8.01 t_ br,3 T_ br,5^2.22 days. The former is the time past which we showed light-travel time effects to be unimportant. The latter is the time at which the photosphere temperature is T=0.7 eV, based on , roughly corresponding to Hydrogen recombination. The transparency time, t_ tr, occurs roughly when the dynamical time matches the diffusion time, given by t_ tr = √(κ M_ env/8 π c v_ s ∗), = 19.5 √(M_ env,0κ_0.34 v_ s*,8.5^-1) days. For later use we also define the homologous time t_ hom, which is also the early validity of the formula: t_ hom = R/5v_ s,* = 0.1 R_13/v_ s*,8.5 days. During shock-cooling, the luminosity is determined at the diffusion depth, the location from which photons will diffuse outwards in dynamical time <cit.>. The color temperature meanwhile, is roughly determined by the temperature at the thermal depth, the last absorption surface for diffusing photons (the radius from which photons diffuse out of the ejecta without further absorption). In this paper we treat these quantities as frequency-dependent (compare with the approximate prescription, , eq. 30). Specifically the thermal depth r_ col is defined by τ_⋆,ν(r=r_ col,ν)≡∫_r_ col,ν^∞ρ√(3κ_ abs,ν(κ_ abs,ν+κ_ es))dr'=1, where the abs, es and ν subscripts indicate absorption, (electron) scattering and frequency dependence, respectively. This integral is often approximated in the literature as √(3τ_ absτ_ es) or √(3τ_ abs(τ_ abs+τ_ es)). We find that the choice of approximation has a negligible effect on the SED due to the steep ρ(r) dependence, with the exception of regimes with strong lines, where the observed effect can be tens of percents. § DESCRIPTION OF THE NUMERICAL CODE We numerically solve the radiation hydrodynamics equations of a spherically symmetric flow. 
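Before detailing the scheme, we note for orientation that the analytic model summarized in the previous section is straightforward to evaluate, and it is the baseline against which the numerical results below are compared. The short Python sketch that follows is ours (parameter names are chosen for illustration, and the validity windows and light-travel-time limits quoted above are not enforced); it collects the expressions for t_ br, L_ br, T_ col,br, t_ tr and the interpolation formulas for L and T_ col.

import numpy as np

def shock_cooling_LT(t_hr, R13, vs85, fM0, kappa034=1.0, Menv0=1.0):
    # t_hr: time since breakout [hours]; R13 = R/1e13 cm; vs85 = v_s*/10^8.5 cm/s;
    # fM0 = f_rho*M [M_sun]; kappa034 = kappa/0.34 cm^2/g; Menv0 = M_env [M_sun]
    A, a, alpha = 0.9, 2.0, 0.5
    t_br = 0.86 * R13**1.26 * vs85**-1.13 * (fM0 * kappa034)**-0.13        # hours
    L_br = 3.69e42 * R13**0.78 * vs85**2.11 * fM0**0.11 * kappa034**-0.89  # erg/s
    T_br = 8.19 * R13**-0.32 * vs85**0.58 * fM0**0.03 * kappa034**-0.22    # eV
    t_tr = 19.5 * np.sqrt(Menv0 * kappa034 / vs85) * 24.0                  # hours
    t = np.asarray(t_hr, dtype=float)
    tt = t / t_br
    L = L_br * (tt**(-4.0 / 3.0) + tt**-0.172 * A * np.exp(-(a * t / t_tr)**alpha))
    T_col = T_br * np.minimum(0.97 * tt**(-1.0 / 3.0), tt**-0.45)
    return L, T_col                                                        # erg/s, eV

# example: shock_cooling_LT(np.array([2.0, 24.0, 72.0]), R13=1.0, vs85=1.0, fM0=10.0)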
We allow the photon distribution to deviate from thermal equilibrium with a multi-group treatment, and handle radiative transfer under the diffusion approximation. For the matter equation of state (EOS), we assume an ideal Hydrogen gas in LTE, including ionization as dictated by the Saha equation. The MG emission/absorption and diffusion opacities (effective opacities for each photon energy group), are based on our tables with a solar mix-like composition. This section is structured as follows. The equations of the numerical scheme are given in  <ref> and  <ref>, and the initial and boundary conditions are described in  <ref>. The validation of the numeric code and the convergence of the calculations are described in  <ref>. §.§ Radiation-hydrodynamics equations In Lagrangian coordinates, the velocity v and density ρ evolve in response to the radiation energy density u_ν per unit frequency ν and matter energy density e, as follows: dr/dt= v, ρ = ρ_0r_0^2/r^2∂ r_0/∂ r, d v/dt=-1/ρ1/r^2d/dr(1/3r^2[e+q+∫_0^∞ u_ν d ν]) , where the 0 subscript denotes initial values. Correspondingly, the energy densities evolve according to, du_ν/dt=.∂ u_ν/∂ t|_ emis/abs+.∂ u_ν/∂ t|_ compres+.∂ u_ν/∂ t|_ diff, de/dt= -∫_0^∞.∂ u_ν/∂ t|_ emis/abs dν +.∂ e/∂ t|_ compres. The emission / absorption, compression, and diffusion terms are given by .∂ u_ν/∂ t|_ emis/abs=ρκ_ abs,ν c [4π B_ν (T)/c-u_ν], .∂ u_ν/∂ t|_ compres=-[4/3u_ν -1/3∂(ν u_ν)/∂ν]1/r^2∂(r^2 v)/∂ r, .∂ e/∂ t|_ compres=-(e+p+q)1/r^2∂(r^2 v)/∂ r, .∂ u_ν/∂ t|_ diffusion=-1/r^2∂ (r^2 j_ν)/∂ r, where the frequency-dependent photon energy flux density j is given by j_ν=-1/ρκ_*(c/3∂ u_ν/∂ r-1/c∂ j_ν/∂ t), and the Planck energy density per unit frequency is given by 4π B_ν (T)/c=8π/ c^3h ν^3/e^h ν/T-1. κ_* is the diffusion opacity, given by the Rosseland mean of sum of the Thomson scattering opacity κ_ es and the absorption opacity κ_ abs,ν across the frequency bin (of each photon energy group in the MG calculation), κ_*^-1= (Δν_ bin)^-1∫ dν(κ_ es+κ_ abs,ν)^-1, where κ_ abs,ν is given by our high-resolution opacity tables. The absorption opacity of each group is given by κ_ abs= (Δν_ bin)^-1∫ dνκ_ abs,ν. In a small fraction of the simulations we also include ineslastic Compton scattering using the Kompaneets equation, as shown below. Inelastic Compton scattering is unimportant during shock-cooling, but can have an important effect on the breakout spectrum, which we include in our tests of the code in  <ref>. .∂ u_ν/∂ t|_ scat=-ρκ_ es c ν/m_ e c^2∂/∂ν[ T ∂/∂ν (ν u_ν) . + . (hν - 4 T) u_ν + c^3/8π(u_ν/ν)^2 ], where m_ e is the electron mass and T is the matter temperature (in units of energy). Each of the above equations is solved explicitly using operator splitting, with the exception of the diffusion term, where u and j are solved for together implicitly. The flux is initially started as j_0=-1/ρκ_ esc/3∂ u_0/∂ r. The time derivative term in eq. (<ref>) becomes important in optically thin regions. In our problem it does not have an important effect since the luminosity is determined deep in the ejecta. Following <cit.>, we rewrite the radiative compression term in eq.  (<ref>) as .∂ u_ν/∂ t|_ compres=-u_ν( 1 -1/3∂log u_ν/∂logν)1/r^2∂(r^2 v)/∂ r. We include Hydrogen recombination in the EOS, given by e=(1/γ-1)n(1+Y)T-(1-Y)n I_H, p=n(1+Y)T, where Y(T) is the ionization fraction, I_H is the Rydberg energy, n is the atomic number density, γ=5/3 is the monotonic adiabatic index. 
This prescription becomes an ideal gas equation of state when the plasma is fully ionized. Y is solved for iteratively using the Saha equation, assuming the presence of Hydrogen only. §.§ Numerical scheme In our numerical scheme, we solve the continuity equations (eqs. <ref>-<ref>) by a standard staggered mesh leap-frog method. Energy evolution in time is solved via operator splitting. The equations are divided into parts, with diffusion (including a flux limiter), radiative processes and compression calculated consecutively as follows. Frequency dependent diffusion, is solved implicitly using a Newton Raphson (NR) solver, then the matter is compressed explicitly. The output from these is then fed into the radiative processes (emission, absorption, and scattering), which are solved iteratively using two loops of NR solvers that each solve for energy conservation. In the inner loop, we solve implicitly for u_ν, while keeping e constant, while in the outer loop we solve implicitly for e. The inner loop includes several protections from non-physical results in u_ν, including an `overshoot' protection to prevent u_ν from crossing B_ν(T) when attempting to equalize u_ν=B_ν(T) (see details in  <ref>). The initial guess for u_ν involves solving radiative transfer explicitly, and if the NR solver fails to find a solution after 30 iterations, the solver also attempts a solution starting from the original u_ν value. Finally, where needed (e.g. eq. <ref>, <ref>, κ_*, κ_ abs,ν), we extract the temperature and pressure by solving the equation of state, eq. (<ref>) implicitly. The entire set of equations for the evolution of the matter and radiation energies is solved using a predictor-corrector with opacities updated at every iteration prior to the diffusion step. §.§.§ Time steps The minimum of the following constraints limits our simulation time step. For grid cells i, the usual Courant upper limit is Δ t_ c=f_ cmin{Δ x_ i/C_ s,i}, where Δ x is the grid spacing and C_s is the speed of sound. Diffusion also limits the time step according to Δ t_ d=f_ dmin{u_ν/∂ j_ν/∂ x} _ i,ν, where the minimum is taken over cells i and bins centered at frequency ν. The factor f_ d, along with all the similar f factors here, is of order unity and smaller than one, and our results are shown to be insensitive to the exact value. Finally, we limit the time step also by limiting the maximal change due to radiative processes of the total energy density of the radiation/plasma, Δt_r=f_r u/j_B+j_C j_B+j_C<0, e/j_B+j_C j_B+j_C>0, where u=∫_0^∞u_ν dν is the bolometric radiation energy density. The effective plasma and Compton scattering emissivity are given by j_ B=ρ c∫_0^∞κ_ν(B_ν-u_ν) dν and j_ C=4uρκ_ esc(T-T_γ)/m_ ec^2, where κ_ν is the frequency-dependent absorption opacity, B_ν is the blackbody distribution, and T is the matter temperature. The radiation temperature is defined as in <cit.> as T_γ=1/4u∫_0^∞[hν u_ν+c^3/8π(u_ν/ν)^2]dν §.§.§ `Protections' on Diffusion and Radiative Transfer The very high opacity, reaching κ_ν∼10^6 cm^2 g^-1 at some frequencies, may lead to numeric problems in the application of the implicit solution with finite time steps. At infinitely small time steps, u_ν will be kept close to B_ν at such large opacity regions. However, using finite time steps, the implicit result for u_ν can at times `overshoot' B_ν (or proceed in the wrong direction due to strong dependence of κ_ν on temperature). 
In order to avoid impractically short time steps, we add several limiting `protections' immediately after the inner Newton-Raphson solver for u_ν. Namely, if the resulting implicit u_ν lies outside of the range between u_ν of the previous step and B_ν, we override the result to the nearest of these values. We also limit the change during the emission/absorption time step to |Δ u_ abs,ν/u_ν|<f_ abs, where f_ abs is in the range {0.1,0.5}. Its value does not affect our results. Though the latter constraint affects the relative rates of physical processes, absorption still proceeds quickly relative to the other processes in this scheme. We also add a flux limiter to the simulations. The P1 diffusion approximation doesn't require one in principle. However, at certain frequencies, strong absorption and the strong sensitivity of κ_ν to temperature can lead to a situation where the photon energy density in a particular frequency group and cell depletes abruptly (for example, to match B_ν(T)). The change may occur faster than the time in which diffusion can respond given finite time resolution, and thus may lead to non-physical flow. Namely, either the flux would flow in the wrong direction, or |j|>u_ cellc, where u_ cell is the energy density in the cell from which the flux j is exiting, and c is the speed of light. In lieu of adding another time constraint, we insert a flux limiter as follows, j_ν→ j_ν, FL=j_ν [1+(|j_ν|/u_ cellc)^m]^-1/m, where m is a positive integer constant. The derivative of j_ν, FL (defined in between cells) with respect to u, in either one of the adjacent cells, is written as ∂ j_ν, FL/∂ u=α/[1+|γ|^m]^1/m-γ^m-1/[1+|γ|^m]^1/m+1[j_ν/|j_ν|α-|γ|c∂ u/∂ u_ cell], where α≡∂ j_ν/∂ u and γ≡ j_ν/u_ cellc. This derivative is used in the Newton-Raphson solver during the diffusion step. We test the flux limiter on the gray diffusion runs with various values of m, finding a deviation of less than a percent from the non-flux-limited runs when m=8, which we choose in our simulations. §.§ Initial and boundary conditions Our numeric calculation involves a succession of three simulations, with each simulation starting later in physical time using a snapshot of the hydrodynamic variables as described by its predecessor. Each successive simulation also contains increasing physical complexity; i.e. first a hydrodynamic-only calculation, then a gray diffusion calculation, and finally a MG simulation. This way, later time stages of interest include all the relevant processes, while allowing the computations to be performed in practical time. All simulations are carried through to the latest times for comparison. Following and , we begin with a simplified progenitor structure, comprising a uniform-density core surrounded by a polytropic envelope at hydrostatic equilibrium. We start a hydrodynamic-only simulation where we inject a high thermal energy density in the innermost cells of the core and capture the resulting shock using artificial viscosity. Then a radiative diffusion-hydrodynamics gray simulation is started between 24 and 8 shock crossing times prior to shock breakout, with the exact start time having a negligible effect in this range. Both of the simulations are identical to the ones in (and described there in detail), with the important exception that in the gray simulation we increase the initial cell resolution towards the stellar edge[At the edge we use a grid spacing Δ r_ grid∝ (R-r), down to a predetermined scattering optical depth of τ_ es≲ 10^-3.
The additional edge resolution has no effect on the results of the gray simulation and allows us to improve edge resolution in the subsequent MG calculation.]. Then we begin a multi-group diffusion simulation, as described in  <ref>. The simulation is started between 20 shock crossing times prior to breakout and up to 2 R/c times after breakout, with the exact time at which the simulation is started having negligible effect on the later shock-cooling emission. Typical resolutions for each of the respective stages listed above are 4000-8000, 1600-3200, and 200-1600 cells (MG runs that are started after shock breakout typically have 50-200 cells), with 32-256 photon groups in the MG phases. All calculations are continued until at least after the recombination time t_0.7 eV. Multigroup simulations that are started prior to breakout have a similar initial grid to the gray diffusion simulations. Namely, the grid changes smoothly, with modest resolution in the interior, highest resolution at the starting location of the shock, and steadily decreasing resolution outwards, before approaching a constant for τ≲ c/v_ bo, and finally increasing resolution at the stellar edge, with Δ r ∼ (R-r), down to at least a scattering optical depth of τ_ es∼ 10^-2. MG simulations that were started after shock breakout have a simpler initial cell grid, spaced logarithmically in τ_ es, with the same stellar edge resolution. All MG simulations have photon frequency bins that are constant in time and are spaced logarithmically. For all simulations we assume a static reflective boundary at the inner surface, and a free boundary at the outer surface that for the diffusion simulations accelerates as ∂_t v_ b = j_ bκ / c, where the subscript b denotes boundary values. The boundary flux is given by j_ b=f_ eddcu_ b, where f_ edd=0.3-0.5 is the Eddington factor. The results are insensitive to the exact value of f_ edd since the flux is determined deep within the plasma, at τ∼ c/v≫1. §.§ Code validation and numerical convergence In , we validated our numerical hydrodynamical-only code against the analytic planar stellar breakout solutions of <cit.> and <cit.>, and our gray diffusion code against the analytic planar "Sakurai-Weaver" Ansatz solutions of . We also reproduced the bolometric breakout flux expected from a planar stellar breakout, as also described in . In <cit.>, an earlier version of the multi-group code that we use here underwent several test problems involving radiative diffusion, emission/absorption, and inelastic Compton scattering. We perform here two additional tests of the multigroup code: the problem of a steady planar radiation-mediated shock, and the breakout spectrum in a hydrogen-dominated stellar envelope, both including inelastic Compton scattering and only free-free absorption opacity. We calculate the structure of the steady planar radiation-mediated shock at two representative velocities, β= v/c=1% and 10%, i.e. spanning breakout velocities in our parameter range. We find for both cases that the density ρ, velocity v, and bolometric photon energy density u converge to within 3%, 0.5%, and ∼1% of the analytic result <cit.>. For the β=1% simulation, the photons are in LTE, and u_ν is in excellent agreement with a Planck distribution matching the total energy density u. For β=10%, the photons carrying most of the energy are in Compton equilibrium.
While an exact analytical solution does not exist for the spectrum in this case, we find good agreement between the numeric results for the frequency of the peak of the radiation energy density behind the shock and an analytic estimate using a Wien distribution[The Wien distribution is determined by two parameters, the photon energy density u, which is determined analytically, and the number density of photons n_γ, which we extract from the simulation.], see figure <ref>. At lower frequencies, the energy density transitions to a thermal distribution due to large free-free opacity, producing a visible deviation from the Comptonized spectrum, as is expected. Next, we compare our results for envelope breakout with the approximate table values from , which again assume a fully Comptonized Wien spectrum. We find reasonable (10's of %) agreement in peak temperature and luminosity (fig. <ref>), which is somewhat remarkable, since the temperature and luminosity profiles in these tables are a function of only the breakout parameters (R,ρ_ bo,β_ bo - see  <ref>). For the comparison we extract these parameters from the simulation without performing additional fitting for the SED. There is again a noticeable deviation in the low energy tail due to thermalization. <cit.> performed MG diffusion calculations of the planar envelope breakout phase, and obtained similar results to ours. They find 10's of % agreement with in peak temperature and luminosity, as well as a thermalized low-frequency tail of similar shape. In , we showed convergence of the hydrodynamic and gray simulations. Our MG calculations are also converged with respect to spatial resolution. Doubling the spatial resolution produces at most a few percent change (and often less than 1%) in L_ν≡∂ L/∂ν. We also verify that we are converged with respect to the resolution of the outermost cell, finding that L_ν varies by less than 1% when the minimum scattering optical depth varies between τ_ es∼ 10^-2 and 10^-3, in agreement with the conclusions of <cit.>. We note that our frequency resolution is high enough that our SED is insensitive to the number of photon frequency groups, but is coarse relative to the atomic line scale, which we discuss at length in  <ref>. As described in , we include in both the gray and MG diffusion simulations, a non-radiating plasma component coupled with artificial viscosity q. This addition helps stabilize against numerical instabilities associated with the density inversion that occurs at the outer edge of the ejecta. § OUR COMPOSITE OPACITY TABLE Calculating the frequency dependent opacity requires employing several approximations and assumptions. Primary challenges involve solving the many-electron Schrödinger equation and estimating microplasma interactions between species. The assumption that all degrees of freedom are in thermal equilibrium is often, though not always, employed. Due to these approximations, there are large uncertainties in the opacity. For example, a factor of 2 discrepancy in the Rosseland mean exists between TOPS and OP <cit.> in our regime of interest (as shown in ). We built our own frequency-dependent opacity table, containing free-free, bound-free, and bound-bound components, and assuming local thermal equilibrium. The code that produces these tables is now available to the community on github <cit.> and produces tables at arbitrary density, temperature, and frequency resolution (we use Δν/ν∼ 10^-5-10^-6 in practice). 
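As an illustration of how such a high-resolution table is collapsed onto the photon groups used in the simulations (the binning is described in the next paragraph), a minimal Python sketch is given here; the toy opacity, the array names, and the constant electron-scattering opacity are assumptions made for the example, not part of our pipeline.

```python
import numpy as np

KAPPA_ES = 0.34   # cm^2 g^-1; constant electron-scattering opacity assumed for the example

def bin_group_opacities(nu, kappa_abs, group_edges):
    """Collapse a high-resolution absorption opacity kappa_abs(nu) onto photon groups:
    the group absorption opacity is the plain average across the bin, while the group
    diffusion opacity is the Rosseland-style harmonic mean of (kappa_es + kappa_abs,nu)."""
    k_abs_g, k_diff_g = [], []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        sel = (nu >= lo) & (nu < hi)
        w = np.gradient(nu[sel])
        w = w / w.sum()                                   # d(nu) weights across the bin
        k_abs_g.append(np.sum(w * kappa_abs[sel]))
        k_diff_g.append(1.0 / np.sum(w / (KAPPA_ES + kappa_abs[sel])))
    return np.array(k_abs_g), np.array(k_diff_g)

# Toy example: 10^5 frequency points, a free-free-like opacity, 64 logarithmic groups
nu = np.logspace(13.5, 17.0, 100000)                      # Hz
kappa_abs = 1e-2 * (nu / 1e15) ** -3 + 1e-4               # cm^2 g^-1 (toy values)
edges = np.logspace(13.5, 17.0, 65)
k_abs_groups, k_diff_groups = bin_group_opacities(nu, kappa_abs, edges)
```

The harmonic (Rosseland-style) mean for the diffusion opacity is dominated by the low-opacity windows between lines, while the plain mean used for emission/absorption is dominated by the lines themselves; this is the reason the two group opacities are kept separate.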
In we used the Rosseland mean opacity from the high-resolution tables to provide a formula for blackbody emission. Here we bin the tables in frequency to produce our multigroup opacities. For the absorption term (κ_ abs,ν in eq. <ref>) we use the average opacity across each bin, while for the diffusion term (κ_* in eq. <ref>) we take the Rosseland mean across the bin. In this work, we include 10 important atoms up to iron: H, He, C, N, O, Ne, Mg, Si, S, Fe (though the opacity code can handle an arbitrary mixture). We show later that the resulting supernova lightcurves are insensitive to the exact composition in the case of Hydrogen-dominated envelopes. As discussed in , the frequency-dependent TOPS opacities are verified against experiments at temperatures and densities exceeding tens of eV and 10^-6g cm^-3 respectively <cit.>. The Kurucz database of atomic transitions was calibrated against measured line frequencies and oscillator strengths <cit.>, but is incomplete for highly ionized species at high temperatures, T>10 eV. We separately run simulations using both our opacity table and TOPS. This section is structured as follows. In  <ref>, we describe the construction of the opacity, and in  <ref> we describe tests and results of the table. §.§ Opacity - Construction We solve for the ionization and excitation population numbers self-consistently with the Saha equation, and using electron levels provided by the NIST database[<https://www.nist.gov/pml/atomic-spectra-database>]. For the free-free components and Hydrogen-like bound-free and bound-bound components, we use the equations provided in <cit.>. We verify our free-free calculations against TOPS, finding 7% agreement or better. For line transitions in Hydrogen-like ions with charge Z, the decay rate is given by A(Z)∼ν^2f=Z^4A(Z=1), where A(Z=1) is the decay rate for the equivalent Hydrogen line, and ν∼ Z^2 is the frequency of the atomic transition. Non-Hydrogen-like bound-free absorption is based on the table in <cit.>[This table only includes photoionization from electrons in the ground state. To calculate the electron ground occupation levels, we include all states up to 0.3 eV above the ground state to account for hyperfine splitting.]. In general, the results of the table agree with TOPS up to a factor of 2, with the exception of a pure Fe mix we tested, where we observe an order of magnitude difference. Bound-bound oscillator strengths, degeneracies and lower energy levels are taken from Kurucz CD 23[Kurucz line lists are expanded semi-regularly, representing ongoing progress. Our simulation results are insensitive to the choice of line list, including CD 23 and the most up-to-date file from Oct. 8th 2017. In cases of mismatch between the reported lower and upper state energy gap and the transition frequency, we use the lower state and the transition frequency. Missing decay rates are estimated analytically where available. For nearly all lines, the natural line width is smaller than the thermal width, which in turn is smaller than our grid resolution.]. Our line sampling method varies with line width relative to the grid resolution Δν_ grid. Lines with a full width at half maximum (FWHM) that is much thinner than the grid spacing, Δν, are sampled as a single grid point, with magnitude proportional to (Δν)^-1 to conserve total flux. Broad lines are sampled as a Voigt function ϕ_ν, including native line width and thermal broadening.
Intermediate width lines are sampled from the frequency derivative of ∫ϕ_ν dν such that oscillator strength is conserved. We find that we are insensitive to the exact choice of cutoff between each of the sampling regimes in the ranges 1/30<FWHM/Δν_ grid<1/3 and 1<FWHM/Δν_ grid<10, respectively. We therefore choose the cutoffs at FWHM/Δν_ grid=1/10 [To avoid cases where the Lorentz wings can have an effect, we also require the FWHM of the Lorentzian (not including thermal broadening) to be 3×10^3 times thinner than Δν_ grid to sample as a delta function. We also don't use the delta function for any of the Hydrogen lines.] and 3. For quicker calculation in the latter two cases, the broad Voigt wings are interpolated at 100 times coarser resolution than the frequency grid, and combined at the end of the calculation[We are insensitive to the location of the cutoff between fine and coarse sampling resolution. Nominally, we place the cutoff 3 FWHMs away from the center.]. Finally, ions and electronic states with electron occupation fractions below 10^-14 are removed (we tested that this omission has no effect). We also include Red-wing continuum suppression <cit.>, and the Hummer-Mihalas factor that suppresses higher electron states <cit.>. Our opacity code allows computation of other ingredients that were not added in this work as they were shown to have negligible influence on the multigroup simulations in our range of parameters, including the Dappen-Anderson-Mihalas factor <cit.>, and electron collisional broadening. These latter processes are described in the documentation for the opacity. §.§ Tests of the Opacity Calculation and Results We test our frequency-dependent opacity code against TOPS and OP for the simplest case of a pure Hydrogen mixture, finding good agreement (≲ 15%) in both the Rosseland mean and the frequency-dependent opacity (see figs. <ref> and <ref>). As a further sanity check, we also compare our bound-bound opacity for a pure Fe mix with an independent code from <cit.>, finding excellent agreement (not shown). We find that our simulations are converged with respect to the underlying opacity table, as tested in more than 5 separate parameter choices spanning progenitor radius and explosion energies for 128 photon groups. We observe <2% difference in the SED when varying between a Δν/ν∼ 10^-5-10^-6 base grid, and <1% when changing the number of {R≡ρ/T^3,T} grid points from {16,66} to {30,120}. § DEVIATIONS FROM BLACKBODY - CALIBRATED ANALYTIC MODEL Here we provide an analytic description of the deviations of the emitted spectrum from blackbody. Our approximation formulae are derived piecewise, based on the strength of the absorption opacity, κ_ abs,ν, relative to the scattering opacity, κ_ es. At frequencies where κ_ abs,ν>κ_ es, the emitted flux may be approximated as a blackbody with a frequency-dependent thermal depth (surface of last absorption) r_ col,ν, and corresponding frequency-dependent color temperature T_ col,ν=T(r_ col,ν), given by L_ν, BB=4π r_ col,ν^2 B_ν(T_ col,ν). At frequencies where the absorption opacity is smaller than the scattering opacity, κ_ abs,ν<κ_ es, we base our approximation on the flux f_ν emitted by a semi-infinite planar slab of temperature T in the two-stream approximation <cit.>, f_ν=4π/√(3)√(ϵ_ν)/1+√(ϵ_ν)B_ν(T), where ϵ_ν=κ_ abs,ν/(κ_ abs,ν+κ_ es). We therefore suggest L_ν,ϵ=(4π)^2/√(3)r_col,ν^2√(ϵ_ν)/1+√(ϵ_ν)B_ν(T_col,ν), as an approximation for the escaping spectral luminosity in this regime <cit.>.
We first derive an expression describing the emission in the regime of relatively low absorption opacity, κ_ abs,ν<κ_ es, which occurs primarily at intermediate frequencies near and below the Planck peak (e.g. hν∼ 1-3 eV in fig. <ref>). Absorption in this regime is dominated by free-free transitions with a small bound-free contribution. Neglecting the bound-free contribution we approximate eq. (<ref>), which defines the frequency-dependent thermal depth, as ∫_r_ col,ν^∞ρ√(3κ_ ff,νκ_es)dr' =1. Here we have neglected κ_ abs,ν with respect to κ_ es and used κ_ abs,ν→κ_ ff,ν= 4.13×10^-31g_ ffρ T^-1/2(hν)^-3(1-exp(-hν/T)) cm^2 g^-1, where the density ρ is in cgs units and the temperature T is in erg. We approximate the Gaunt factor g_ ff=√(3)/πK_0(hν/T)∼0.717(hν/T)^-0.27, with K_0 being the zeroth modified Bessel function of the second kind. Solving eq. (<ref>) using the analytic / spherical phase density profiles (eqs. <ref> - <ref>) we obtain the radius, temperature and opacity at the thermal depth, r_ col,ν = 1.29 × 10^14 R_13^-0.01 (f_ρM_0)^0.09 v_ s*,8.5^0.78κ_0.34^0.03 t_ d^0.80ν_eV^-0.08 cm, T_ col,ν = 2.13 R_13^0.28 (f_ρM_0)^-0.02 v_s*,8.5^0.17ν_ eV^0.25κ_0.34^-0.08 t_ d^-0.42 eV, κ_ ff,ν = 0.02 (f_ρ M_0)^-0.05 R_13^-0.23 v_ s*,8.5^-0.66κ_0.34^-0.29 t_ d^-0.19ν_ eV^-1.66 cm^2 g^-1. We find that modifying the expression for r_ col,ν to r_ col,ν = R + 1.29 × 10^14 R_13^-0.01 (f_ρM_0)^0.09 v_ s*,8.5^0.78κ_0.34^0.03 t_ d^0.80ν_eV^-0.08 cm while keeping the expressions for T_ col,ν and κ_ ff,ν unchanged provides a good description of the spectrum also at the planar phase (and at the transition from planar to spherical evolution). In break notation we have: r_ col,ν = R + 2.18 × 10^13 L_ br,42.5^0.48 T_ br,5^-1.97κ_0.34^-0.07t̃^0.80ν_ eV^-0.08 cm, T_ col,ν = 5.47 L_ br,42.5^0.05 T_ br,5^0.92κ_0.34^0.22t̃^-0.42ν_ eV^0.25 eV, κ_ ff = 0.03 L_ br,42.5^-0.37 T_ br,5^0.56κ_0.34^-0.47t̃^-0.19ν_ eV^-1.66 cm^2 g^-1, where t̃=t/t_ br and R is given in terms of the break parameters by R = 2.41×10^13 t_ br,3^-0.1 L_br,42.5^0.55 T_br,5^-2.21 cm. The thermal depth values can then be inserted into eq. (<ref>) with ϵ_ν=κ_ ff,ν/(κ_ ff,ν+κ_ es) in order to describe the emission in the low absorption frequency range. For frequency regions with strong absorption, where κ_ abs,ν>κ_ es, we return to eq. (<ref>). The thermal depth at these frequencies is located at the outer edge of the ejecta, where the density decreases sharply and the temperature, determined by the free-streaming photons, is nearly uniform. We therefore approximate r_ col,ν≈ const. (ν) and T_ col,ν≈ const. (ν) for these frequencies, and describe the emission as a gray blackbody L_BB, eq. (<ref>). At low frequencies where the free-free opacity dominates, we find numerically that the luminosity is well approximated by L_BB(0.85 T_ col). Meanwhile, at frequencies near and above the Planck peak, where atomic transitions dominate, we use both the simulations and a separate analytic estimate (see  <ref>) to improve upon the approximate L_ BB (0.74 T_ col) description of the UV suppression of , replacing the suppression factor 0.74 with a function of (R,t) lying in the range [0.6,1]. The combined frequency-dependent formula is thus L_ν = [L_ BB (0.85 T_ col)^-m + L_ν,ϵ^-m]^-1/m for hν<3.5 T_ col, and L_ν = 1.2 × L_ BB(0.85 R_13^0.13 t_d^-0.13× T_ col) for hν>3.5 T_ col, where m=5, and L_ν,ϵ is again given by eqs. (<ref>) and (<ref>) with the choice κ_ abs,ν→κ_ ff,ν. The 1.2 factor accounts for the modest UV excess we observe in our results at the Planck peak due to the presence of strong lines.
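The break-notation fits just given for the thermal depth are straightforward to evaluate. The following short Python sketch transcribes them for convenience (variable names are ours, and the constant electron-scattering opacity is an assumption of the example); the returned r_ col,ν, T_ col,ν and ϵ_ν are the inputs of L_ν,ϵ.

```python
import numpy as np

KAPPA_ES = 0.34   # cm^2 g^-1; electron-scattering opacity, assumed constant for the example

def thermal_depth(nu_eV, t_tilde, L_br, T_br, R, kappa034=1.0):
    """Break-notation fits for the thermal depth: r_col,nu [cm], T_col,nu [eV],
    kappa_ff [cm^2 g^-1] and epsilon_nu, for nu_eV in eV, t_tilde = t/t_br,
    L_br in erg s^-1, T_br in eV and R in cm."""
    L425 = L_br / 10**42.5
    T5 = T_br / 5.0
    r_col = R + 2.18e13 * L425**0.48 * T5**-1.97 * kappa034**-0.07 \
            * t_tilde**0.80 * nu_eV**-0.08
    T_col = 5.47 * L425**0.05 * T5**0.92 * kappa034**0.22 * t_tilde**-0.42 * nu_eV**0.25
    k_ff = 0.03 * L425**-0.37 * T5**0.56 * kappa034**-0.47 * t_tilde**-0.19 * nu_eV**-1.66
    eps = k_ff / (k_ff + KAPPA_ES * kappa034)
    return r_col, T_col, eps

# Example: 2 eV photons one break time after breakout, for R = 6e13 cm
print(thermal_depth(2.0, 1.0, 10**42.5, 5.0, 6e13))
```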
The frequency slope in the Rayleigh-Jeans regime is similar to, but slightly lower than, the blackbody value L∼ν^2. Eq. (<ref>) can be further simplified to be given in terms of only L and T_ col, with a minor decrease in the approximation's accuracy, L_ν = π/σL/T_ col^4[ (B_ν(0.85 T_ col)/(0.85)^4)^-m + ( 8/√(3) x^-0.155 T_ col,5^-0.1√(ϵ_ a)/1+√(ϵ_ a) B_ν(1.63 x^0.247 T_ col) )^-m]^-1/m for hν<3.5 T_ col, and L_ν = 1.2× L_ BB(1.11 L_42.5^0.03 T_5^0.18× T_ col) for hν>3.5T_ col, where x=hν/T_ col, T_ col = 5 T_ col,5 eV, and ϵ_ a = 0.0055 x^-1.664 T_ col,5^-1.0996. L and T_ col are given by eqs. (<ref>) and (<ref>). § NUMERIC RESULTS In figs. <ref> and <ref> we plot shock-cooling numeric results from over a dozen multigroup simulations. Deviations from our gray blackbody formula (eqs. <ref>-<ref>) are small. In the Rayleigh-Jeans regime, the SED slope is slightly shallower than L ∼ν^2. As frequency increases in this regime, L_ν passes from 10's of percent above the Planck distribution to 10's of percent or more below it. Near the Planck peak at hν∼ 3.5 eV, the SED can be 10's of % in excess of the prediction of our blackbody formula (50% in extreme cases, further discussed in  <ref>). In the Wien tail, the flux is suppressed due to line absorption. The SED obtained numerically is also compared to our frequency-dependent formula (eq. <ref>), yielding good agreement, with an RMS error of Δ L_ν/L_ν≲20% for hν<3 T_ col (and several 10's of % or more in the Wien tail, reflecting a very small inaccuracy in the radiation temperature). The frequency-dependent formula is generally closer to the simulation results than the gray formula prediction throughout. Both formulas shown in the figure use breakout parameters (ρ_ bo, β_ bo, R) that are derived by comparing the breakout bolometric luminosity to (as was done in ), without additional fitting. We find in many simulations that the ratio of bolometric luminosity between the multigroup and gray simulations is approximately given by (κ_ es/κ_ Ross), where the Rosseland mean opacity κ_ Ross is evaluated at the diffusion depth, where L is determined (recall that the gray simulation only includes Thomson opacity). This factor is less important during the planar phase, but generally reduces L in the multigroup simulation by 10's of % during the spherical phase, primarily due to Wien suppression. We test the sensitivity of our results to the choice of opacity table using simulations that span the progenitor radius and explosion energy parameter range. When we use the TOPS table instead of our own, we find that near the recombination time, T_ col in the presence of the TOPS-derived opacity can be lower by up to 10's of % (usually <5%; see the example in fig. <ref>). This result is in general agreement with our gray analysis in , where we concluded that for T_ col<4 eV, it is preferable to use our opacity table due to the presence of lab-confirmed lines. For higher temperatures, early during shock-cooling, we find negligible (few percent) SED difference between the two opacity tables, since most of the observable frequencies (hν≲10 eV) are in the Rayleigh-Jeans regime, and are less affected by the presence of lines. We also test the sensitivity to composition in our simulations, finding a negligible variation when metallicity varies from Z=0.1 solar to solar metallicity (see fig. <ref>), in agreement with results. We conclude that the SED in Hydrogen-dominated envelopes is insensitive to metallicity.
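The simplified formula above depends only on L and T_ col and can be evaluated numerically in a few lines. The following Python sketch is an illustration of that formula (not the published analysis code); the cgs constants, the convention of T_ col in eV and ν in Hz, and the function names are our own assumptions.

```python
import numpy as np

H_PL  = 6.62607015e-27    # erg s
K_B   = 1.380649e-16      # erg / K
C_L   = 2.99792458e10     # cm / s
SIGMA = 5.670374419e-5    # erg cm^-2 s^-1 K^-4
EV    = 1.602176634e-12   # erg

def planck_B_nu(nu, T_eV):
    """Planck specific intensity B_nu at temperature T given in eV."""
    return 2.0 * H_PL * nu**3 / C_L**2 / np.expm1(H_PL * nu / (T_eV * EV))

def L_nu_gray(nu, L, T_eV):
    """Gray blackbody spectral luminosity: L * pi * B_nu(T) / (sigma * T^4)."""
    T_K = T_eV * EV / K_B
    return L * np.pi * planck_B_nu(nu, T_eV) / (SIGMA * T_K**4)

def L_nu_model(nu, L, T_col, m=5):
    """Simplified frequency-dependent SED in terms of L [erg/s] and T_col [eV] only."""
    x = H_PL * nu / (T_col * EV)
    T5, L425 = T_col / 5.0, L / 10**42.5
    eps_a = 0.0055 * x**-1.664 * T5**-1.0996
    pref = np.pi * L / (SIGMA * (T_col * EV / K_B)**4)      # pi*L / (sigma*T_col^4)
    with np.errstate(over="ignore", divide="ignore"):        # Wien-tail branch is discarded below
        term_bb = pref * planck_B_nu(nu, 0.85 * T_col) / 0.85**4
        term_eps = (pref * (8.0 / np.sqrt(3.0)) * x**-0.155 * T5**-0.1
                    * np.sqrt(eps_a) / (1.0 + np.sqrt(eps_a))
                    * planck_B_nu(nu, 1.63 * x**0.247 * T_col))
        low = (term_bb**-m + term_eps**-m) ** (-1.0 / m)
    high = 1.2 * L_nu_gray(nu, L, 1.11 * L425**0.03 * T5**0.18 * T_col)
    return np.where(x < 3.5, low, high)

# Example: L = 10^42.5 erg/s, T_col = 5 eV, photon energies 0.1-20 eV
nu = np.logspace(np.log10(0.1), np.log10(20.0), 300) * EV / H_PL
sed = L_nu_model(nu, 10**42.5, 5.0)
```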
§ EXPANSION OPACITY, FINITE FREQUENCY RESOLUTION, DEVIATIONS FROM LTE §.§ Expansion opacity and finite frequency resolution Our numeric calculation method assumed that the opacity, κ_ν, does not vary significantly across the extent Δ r of a single spatial resolution element. The large velocity gradients in the flow we are considering, combined with the presence of strong absorption lines, may lead to significant variations of κ_ν across Δ r due to the space-dependent Doppler shift. This effect may have a significant impact on the calculation of photon transport. For the diffusion calculation, we use a Rosseland average of the opacity, κ_ R,i=1/⟨κ_ν^-1⟩, where the average ⟨·⟩ is taken over the frequency band ν_i-ν_i+1, implying a photon mean free path of l_R=1/(κ_R,iρ). The Rosseland mean is dominated by the frequencies at which the opacity is low, i.e. where lines are absent, and hence the presence of strong lines does not affect l_R significantly. In the presence of a large velocity gradient, the photon mean free path may become significantly shorter: as the photon propagates through the varying velocity of the fluid, the frequencies of the lines shift and the photon will be absorbed when it reaches a position where the shifted frequency of a strong line coincides with the photon's frequency. In other words, the effective Doppler broadening "closes" the low κ_ν "windows" in frequency space that allow a low value of κ_R and large l_R. The absorption mean free path is given in this case by l_ exp≈c/ vΔν/νr, where Δν is the frequency separation between strong lines, and we have used ∂ v/∂ r=v/r as appropriate for the homologous expansion phase. A line is considered "strong" if it leads to τ>1 when integrated over the photon propagation path taking into account the Doppler shift: for a line opacity κ_ν=κ_lν_lδ(ν-ν_l), τ_ν=∫ dr ρκ'=κ_lρ ct(ν_l/ν) (where κ' is the Doppler shifted opacity). Thus, Δν is the frequency distance between lines with κ_lρ ct>1. l_ exp can be used to define an effective "expansion opacity", taking into account the finite velocity difference across Δ r due to the expansion of the plasma. The ratio between the Rosseland and expansion mean free paths is l_ exp/l_R=(κ_Rρ r)c/ vΔν/ν =(κ_Rρ r)c/ v(νdN/dν)^-1≈ 10τ_Rc/ v(νdN/dν)^-1, where dN/dν is the (strong) line density and we have used ρ∝ r^-10 (as appropriate for the shock cooling expanding plasma) yielding 10τ_R=κ_Rρ r. In the region where the escaping flux is determined, τ_R≈ c/ v, we have l_ exp/l_R≈10(c/ v)^2(Δν/ν)≈ 10^3.5(Δν/ν). We therefore expect l_ exp/l_R≫1 in this region, and thus only a negligible effect of the "expansion opacity" on the escaping flux. This is demonstrated to be the case in Fig. <ref>, showing l_R/l_ exp for various photon energies near the recombination time as a function of τ_ es, the electron scattering optical depth. As demonstrated in the figure, the suppression of the mean free path due to the "expansion opacity" effect at this time may affect the spectrum at photon energies >5 eV, since l_ exp/l_R<1 may be obtained at the thermalization depth of such high-energy photons. We arrive at the same conclusion from examination of simulations spanning different progenitor radii and explosion energies. In order to test the possible impact of the expansion opacity on the high energy spectrum, we compare (see fig. <ref>) the flux obtained in the numeric simulation, which does not include the effects of expansion opacity, with the flux obtained using the analytic approximations of eqs. (<ref>) and (<ref>) with r_ col,ν (eq.
<ref>) calculated with the density and temperature profiles obtained in the simulation. In the latter we use a high-resolution, Δν/ν∼10^-5, opacity table and take into account the Doppler shifts of the lines[T_ col,ν and ϵ_col,ν are determined at r=r_ col,ν, and we approximate ϵ_col,ν= τ_ abs,ν / (τ_ abs,ν + τ_ es), where the optical depths again include Doppler shifting and are evaluated up to the thermal depth, (τ_*,ν=1).]. The SED is given by L_ν, Dopp = Eq. (<ref>) for τ_ es ( τ_*,ν=1) ≤ f_ cut, and by Eq. (<ref>) for τ_ es ( τ_*,ν=1) > f_ cut, where f_ cut determines the scattering optical depth τ_ es at the thermal depth below which we neglect the effect of scattering. The results are not sensitive to the choice of f_ cut in the range 0.3-3. We show results for f_ cut=1. The analytic approximation, with r_ col calculated using a coarse frequency grid and neglecting expansion opacity effects, as done in the numeric simulations, reproduces well the spectrum obtained in the simulations. The modifications of r_ col, and the implied flux modification in the analytic approximation, due to the inclusion of the Doppler shifts of lines using a high resolution frequency grid, therefore yield an estimate of the magnitude of this effect. The good agreement between the simulation results and the analytic estimate obtained as described above implies that the effect of expansion opacity is small, typically of order 10%. The comparison of the analytic and numeric results described above reveals several frequency bins at intermediate photon energies, 5-8 eV, in which the flux obtained in the numeric simulation exceeds that which is obtained by the analytic estimate by a factor of a few. This is due to the presence of strong and relatively isolated lines, which lead to a large average absorption opacity κ_ abs,i of the photon bin, which in turn leads to a blackbody photon spectrum across the frequency bin with a temperature corresponding to the plasma temperature. Since the radiation energy density at intermediate photon energies falls below the blackbody value at the plasma temperature at radii where the optical depth is small (see  <ref> and fig. <ref>), the flux obtained in bin i is significantly larger than the flux obtained at other frequencies. This result is an artifact of the numeric calculation using finite frequency resolution: the emitted flux would match the blackbody spectrum (of the plasma temperature) only across the (very) narrow line width, hardly affecting the total flux in the finite frequency range of bin i. In order to remove this effect, we have identified and removed a few such isolated lines from the opacity table used in the simulations. The 5 removed lines are listed in table <ref>. Much of the remaining discrepancy between the simulations and the analytic estimate is likely due to the finite frequency resolution overestimating the effect of lines, so that the flux obtained in the numeric simulation exceeds that of the analytic estimate. §.§ Deviation from LTE Ionization and Excitation The density of the plasma emitting the radiation during the hours to days of shock breakout and cooling is relatively high, and the radiation is close to thermal equilibrium with the plasma (see fig. <ref>). This situation is quite different from that prevailing later, on a time scale of weeks, when the density is low and the radiation is far from thermal equilibrium, causing the ionization fraction and excitation level distribution to deviate largely from Boltzmann.
The relatively large density implies that the time scales for all relevant processes (electron-electron and electron-ion collisions, photo-ionization and excitation, electron impact excitation), with the exception of electron impact ionization, are much shorter than the dynamical time (see fig. <ref>). This, combined with the fact that the photon spectrum at energies exceeding the Planck peak (e.g. hν≳ 3 eV during recombination) is close to thermal out to very low τ_ es, implies that the ionization fraction and the excitation level distribution of the low energy excited states are both close to LTE. The fact that at low optical depth, τ_ es≲ 1, the radiation energy density falls below that of LTE at low photon energies implies that the level distribution of the higher energy excited states may deviate from LTE at the outer edge of the ejecta. However, we note that the distribution of excited energy levels is strongly dominated by UV transitions to and from the highly populated ground state, hence deviations from LTE occupation of the higher energy states are expected to be mild. In addition, the effect of lines on the SED in this energy range is small, and hence we do not expect a significant effect due to deviations from LTE. § COMPARISON TO EARLIER WORKS <cit.> (hereafter ) produced an analytic model including deviations from blackbody due to free-free opacity. They arrive at an SED that disagrees with our simulations by an order of magnitude or more in the infrared and up to a factor of two near the Planck peak. In <cit.>, the formula is compared to simulations produced by the STELLA code. They conclude, similarly, that the model's infrared behavior does not describe the SED for hν<3T_ col, and suggest using a blackbody formula for this range. In fig. <ref>, we compare the formula, including the <cit.> corrections, with the results of our MG simulations. We find that their analytic model does not reproduce the simulation results well, due to the different values obtained in their analytic approximation for L and T_ col, which in turn is largely due to their opacity approximation neglecting bound-bound transitions (see Paper I). Next, we compare our numeric results to those obtained using STELLA <cit.> radiation transport calculations <cit.> for several non-polytropic profiles obtained using MESA stellar evolution calculations <cit.>. In each case, we approximate the progenitor density profile used in earlier simulations by a simple polytrope. We show two such examples in fig. <ref> and summarize the profile parameters we use in table <ref>. In figs. <ref> and <ref> we then compare the resulting emission from our multigroup simulations to the published results, without performing any further fitting. The comparison to the results of <cit.> is shown in fig. <ref>. We find good agreement with our results when we include in our calculations only bound-free opacity (neglecting bound-bound lines). This is most likely due to the fact that <cit.> treated bound-bound absorption as scattering. While this approximation may be valid at late times, when the density of the plasma is low and the radiation energy density is far below that of thermal equilibrium, this is not a good approximation at the early stage of shock cooling emission (see discussion in  <ref>). The comparison to the results of <cit.> is shown in fig. <ref> (top). We find a good agreement between the results of the different calculations. Finally, fig. <ref> (bottom) shows a comparison to the results of <cit.>.
There is good agreement between the results at the latest observed times (t=3 days after breakout). At early times, there is good agreement at longer wavelengths, λ>3000Å, where the free-free emission dominates, but there is a clear discrepancy at higher frequencies. The very high temperature of the emission peak obtained in <cit.> at these times, T∼70 eV, appears to be non-physical, since for the corresponding progenitor parameters it is associated with the temperature of the envelope at an optical depth τ∼ 1000. <cit.> arrive at the same conclusion, noting that this particular model features a very weak line absorption opacity, and as a result, unreasonably high emission temperatures. The agreement of our results with these earlier calculations provides additional support to our code's validity (in particular to the applicability of the diffusion approximation), and to the conclusion of the detailed analysis of SW17, that the shock cooling emission is not sensitive to deviations of the density profile from a polytropic one. § DISCUSSION AND SUMMARY We derived an analytic description of the deviations of shock cooling spectra from blackbody for red supergiant explosions. The approximation is given by eqs. (<ref>) and (<ref>). The definitions of (and equations for) all variables appearing in eqs. (<ref>) and (<ref>) are given in the Appendix. The analytic description holds from post breakout (t>3R/c) up to H recombination (T≈0.7 eV), or until the photon diffusion time through the envelope becomes shorter than the dynamical time, see eq. (<ref>). The analytic expressions were calibrated against a large set of numeric MG calculations, and provide an excellent approximation to the numeric results (see  <ref>, figures  <ref> and <ref>). In accordance with SW17 and Paper I, we find that the results are not sensitive to metallicity and to the ratio of core to stellar radii, in the range R_ c/R=10^-1-10^-3. In  <ref> and  <ref> we showed that the effects of deviations from ionization and excitation LTE and of `expansion opacity' corrections are small. Our analytic formula depends on the same four parameters as the previous one in , {R, v_ s∗,f_ρ M,M_ env}. Of these parameters, the SED is most sensitive to the progenitor radius R and the ejecta velocity v_ s∗, and insensitive to the parameters f_ρ M and M_ env, where M and M_ env are the total mass of the ejecta and mass of the envelope, and f_ρ is a dimensionless factor of order unity that depends on the inner envelope structure (see eq. <ref>). Therefore, the former two parameters are the ones most readily extracted from observations. We note also that deducing parameter values is hindered by the difficulty of determining T_ col at early times, when the maximum observable photon frequencies (hν≲ 10 eV) may be located below the thermal peak. For this regime, the frequency-dependent deviations from blackbody may prove to be an important discriminator between models of the Rayleigh-Jeans part of the spectrum. Our results are compared in  <ref> to earlier numerical results including STELLA radiation transport calculation results for several non-polytropic profiles <cit.> obtained using MESA stellar evolution calculations.
The agreement of our results with these earlier calculations provides additional support to our code's validity (in particular to the applicability of the diffusion approximation), and to the conclusion of the detailed analysis of SW17, that the shock cooling emission is not sensitive to deviations of the density profile from a polytropic one. Based on our analysis of  <ref>, we find that the emergence of Balmer lines does not coincide with the recombination time but likely occurs earlier (approximately at T∼2-3 eV). While the recombined H fraction at this time is very low, and the effect of the lines on the SED should be small, the lines may be visible in a spectrum. Hence, the appearance of the first Balmer lines should not be associated with the recombination time t_0.7 eV, when the temperature drops to 0.7 eV. Our frequency-dependent formula agrees with the simulations' results with an RMS error of ≲20% (with the exception of the Wien tail, where deviations are larger but reflect a very small inaccuracy in the radiation temperature). This error is somewhat larger than the uncertainty in our numeric results, ≈ 10%, as quantified by the difference between the simulations' results and the high frequency-resolution calculation (eq. <ref>). Therefore, in order to best incorporate the uncertainty of our analytic model when comparing to observations, we suggest using a covariance matrix (as a function of (ν,t)) of the residuals between the frequency-dependent formula (eq. <ref> or <ref>) and our published simulations (see Data Availability). § ACKNOWLEDGEMENTS We thank Barack Zackay for his contribution to our numerical code, as well as Gilad Sadeh for insightful discussion. EW's research is partially supported by ISF, GIF and IMOS grants. § DATA AVAILABILITY Numerical codes used in this paper will be provided upon reasonable request to the corresponding author. Our opacity table code is available online for public use at <cit.>. Numeric results from our simulations are available at <https://www.dropbox.com/s/ub5zg1rngodjb1d/RSG_Solar_Metallicity.mat?dl=0>. § SUMMARY OF MODEL EQUATIONS The basis for the analytic approximations of the spectra given in this paper is the analytic approximation of Paper I for the bolometric luminosity L and the color temperature T_ col (eqs. <ref>-<ref>), L/L_ br=t̃^-4/3+t̃^-0.172× Aexp(-[at/t_ tr]^α), T_ col/T_ col,br=min[0.97 t̃^-1/3,t̃^-0.45]. Here {A,a,α} ={0.9,2,0.5}, t̃=t/t_ br, and we define t=0 as the time at which the breakout flux peaks. The break parameters (with br subscript) are given as a function of progenitor radius R, ejecta velocity v_s*, and total ejecta mass M, by (eqs. <ref>-<ref>) t_ br= 0.86 R_13^1.26 v_ s*,8.5^-1.13 (f_ρM_0κ_0.34)^-0.13 hrs, L_ br=3.69×10^42 R_13^0.78 v_ s*,8.5^2.11 (f_ρM_0)^0.11κ_0.34^-0.89 erg s^-1, T_ col,br= 8.19 R_13^-0.32 v_ s*,8.5^0.58 (f_ρM_0)^0.03κ_0.34^-0.22 eV. Here, R= 10^13 R_13 cm, κ=0.34 κ_0.34 cm^2 g^-1, v_ s*=v_ s*,8.5 10^8.5 cm s^-1, M_0 denotes mass in units of solar mass, and f_ρ≃1 depends on the inner structure of the envelope (see eq. <ref>). v_ s∗ is related to the characteristic ejecta velocity v_∗ by (eq. <ref>) v_ s∗≈ 1.05 f_ρ^-0.19v_∗, v_∗≡√(E/M), where E is the energy deposited in the ejecta. Our analytic approximation for the luminosity and spectra of the shock cooling emission, taking into account deviations from a blackbody spectrum, is (eq.
<ref>) L_ν = [L_ BB (0.85 T_ col)^-m + L_ν,ϵ^-m]^-1/m for hν<3.5 T_ col, and L_ν = 1.2 × L_ BB(0.85 R_13^0.13 t_d^-0.13× T_ col) for hν>3.5 T_ col, with m=5 and (eqs. <ref>, <ref>) L_ BB=L×π B_ν(T_ col)/σ T_ col^4, L_ν,ϵ=(4π)^2/√(3)r_col,ν^2√(ϵ_ν)/1+√(ϵ_ν)B_ν(T_col,ν), ϵ_ν=κ_ ff,ν/(κ_ ff,ν+κ_ es) and (eqs. <ref>-<ref>) r_ col,ν = R + 2.18 × 10^13 L_ br,42.5^0.48 T_ br,5^-1.97κ_0.34^-0.07t̃^0.80ν_ eV^-0.08 cm, T_ col,ν = 5.47 L_ br,42.5^0.05 T_ br,5^0.92κ_0.34^0.22t̃^-0.42ν_ eV^0.25 eV, κ_ ff = 0.03 L_ br,42.5^-0.37 T_ br,5^0.56κ_0.34^-0.47t̃^-0.19ν_ eV^-1.66 cm^2 g^-1. Here L_ br=L_ br,42.5 10^42.5 erg s^-1, T_col=5 T_ col,5 eV, and ν=ν_ eV eV, and R in terms of the break parameters is (eq. <ref>) R = 2.41×10^13 t_ br,3^-0.1 L_br,42.5^0.55 T_br,5^-2.21 cm. A simpler approximation, which depends only on L and T_ col and is slightly less accurate, is (eq. <ref>) L_ν = π/σL/T_ col^4[ (B_ν(0.85 T_ col)/(0.85)^4)^-m + ( 8/√(3) x^-0.155 T_5^-0.1√(ϵ_ a)/1+√(ϵ_ a) B_ν(1.63 x^0.247 T_ col) )^-m]^-1/m for hν<3.5 T_ col, and L_ν = 1.2× L_ν, BB(1.11 L_42.5^0.03 T_5^0.18× T_ col) for hν>3.5T_ col, where x=hν/T_ col, T_ col = 5 T_5 eV, and ϵ_ a = 0.0055 x^-1.664 T_ col,5^-1.0996. L and T_ col are given by eqs. (<ref>) and (<ref>). Our analytic approximations are valid for (eqs. <ref>-<ref>) 3 R / c = 17 R_13 min < t < min[t_ 0.7 eV, t_ tr/a], where t_0.7 eV = 6.86 R_13^0.56 v_ s*,8.5^0.16κ_0.34^-0.61 (f_ρM_0)^-0.06 days, t_ tr = 19.5 √(κ_0.34M_ env,0/v_ s*,8.5) days.
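For convenience, the closed-form relations of this appendix can be transcribed directly. The following Python sketch (function and parameter names are ours, not taken from the published code) evaluates the break parameters, L(t), T_ col(t) and the validity window:

```python
import numpy as np

A_CAL, a_CAL, ALPHA = 0.9, 2.0, 0.5     # the calibration constants {A, a, alpha} quoted above

def break_params(R13, v85, fM, kappa034=1.0):
    """Break-point scalings: t_br [hr], L_br [erg/s], T_col,br [eV] in terms of
    R = 10^13*R13 cm, v_s* = 10^8.5*v85 cm/s, f_rho*M (solar masses), kappa = 0.34*kappa034."""
    t_br = 0.86 * R13**1.26 * v85**-1.13 * (fM * kappa034)**-0.13
    L_br = 3.69e42 * R13**0.78 * v85**2.11 * fM**0.11 * kappa034**-0.89
    T_br = 8.19 * R13**-0.32 * v85**0.58 * fM**0.03 * kappa034**-0.22
    return t_br, L_br, T_br

def shock_cooling(t_hr, R13, v85, fM, Menv, kappa034=1.0):
    """L(t) [erg/s] and T_col(t) [eV] from the interpolation formulas, plus a boolean
    mask marking the validity window 3R/c < t < min[t_0.7eV, t_tr/a]."""
    t_br, L_br, T_br = break_params(R13, v85, fM, kappa034)
    t_tr = 19.5 * np.sqrt(kappa034 * Menv / v85) * 24.0          # hr
    tt = t_hr / t_br
    L = L_br * (tt**(-4.0 / 3.0)
                + tt**-0.172 * A_CAL * np.exp(-(a_CAL * t_hr / t_tr)**ALPHA))
    T_col = T_br * np.minimum(0.97 * tt**(-1.0 / 3.0), tt**-0.45)
    t_min = 17.0 * R13 / 60.0                                    # 3R/c in hr
    t_07 = 6.86 * R13**0.56 * v85**0.16 * kappa034**-0.61 * fM**-0.06 * 24.0
    valid = (t_hr > t_min) & (t_hr < min(t_07, t_tr / a_CAL))
    return L, T_col, valid

# Example: R = 6x10^13 cm, v_s* = 10^8.5 cm/s, f_rho*M = 10, M_env = 8 (solar masses)
t = np.linspace(1.0, 240.0, 200)     # hours
L, T_col, ok = shock_cooling(t, 6.0, 1.0, 10.0, 8.0)
```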
http://arxiv.org/abs/2307.05711v1
20230711183014
On the evolution equations of interfacial variables in two-phase flows
[ "Giuseppe Orlando", "Paolo Francesco Barbante", "Luca Bonaventura" ]
physics.flu-dyn
[ "physics.flu-dyn", "math-ph", "math.MP" ]
On the evolution equations of interfacial variables in two-phase flows Giuseppe Orlando^(1) Paolo Francesco Barbante^(1), Luca Bonaventura^(1) ================================================================================ ^(1) MOX, Dipartimento di Matematica, Politecnico di Milano Piazza Leonardo da Vinci 32, 20133 Milano, Italy [email protected], [email protected], [email protected] Keywords: Two-phase flows, Geometric variables, Interfacial area density, Unit normal Many physical situations are characterized by interfaces with a non trivial shape so that relevant geometric features, such as interfacial area, curvature or unit normal vector, can be used as main indicators of the topology of the interface. We analyze the evolution equations for a set of geometrical quantities that characterize the interface in two-phase flows. Several analytical relations for the interfacial area density are reviewed and presented, clarifying the physical significance of the different quantities involved and specifying the hypotheses under which each transport equation is valid. Moreover, evolution equations for the unit normal vector and for the curvature are analyzed. The impact of different formulations is then assessed in numerical simulations of rising bubble benchmarks. § INTRODUCTION Two-phase flows play an important role in several natural processes and engineering systems. A moving interface delimits the bulk regions of the single phases. According to the interface geometrical configuration, two-phase flows can be classified as either separated or disperse. A separated flow is characterized by large regions of both phases. On the contrary, a disperse flow is constituted of particles, such as bubbles or droplets, dispersed in a carrier fluid, which is called the continuous phase. In both separated and disperse flows, the exchanges between the two phases occur at the interface and phase exchange terms are proportional to the interface area <cit.>. Hence, the computation of this quantity is a prerequisite to obtain reliable values of exchange terms, especially in non-equilibrium conditions or when chemical reactions occur. The use of suitable evolution equations to predict the interfacial area concentration has a long tradition in the literature, see e.g. <cit.>, and represents, in the case of disperse flows, a popular alternative to Population Balance Equation (PBE) models, like that proposed in <cit.>, which applies the method of moments to derive several transport equations for the moments of the considered Probability Density Function (PDF) <cit.>. The use of a relation for the evolution of the interfacial area leads instead to a single transport equation, thus providing a significant advantage in terms of computational efficiency with respect to the alternative PBE approach. Moreover, since the phase exchange terms depend on the geometry of the interface, considering higher order statistics such as the mean curvature or the outward unit normal vector could significantly improve the description of such terms. The analysis of evolution equations of geometrical quantities that characterize the interface in two-phase flows has been subject of several contributions, see among many others <cit.>. Most of these investigations were focused on the analysis of the evolution equation for the interfacial area density, which will be reviewed and extended in Section <ref>. 
The evolution equations for the unit normal vector and the mean curvature have been analyzed in <cit.>, considering only the normal component of the interfacial velocity. In this work, following the discussion in <cit.>, we will extend these relations considering the complete interfacial velocity as advecting field, thus providing more generality in the analysis of the dynamics of the interface. The two phases exchange mass, momentum, and energy across the interface, and a suitable modelling for such terms is therefore needed. In future work, we plan to couple the equations developed in Sections <ref> and <ref> with the multifluid Baer-Nunziato type equations for two-phase flows. The results presented in Section <ref> show that the equations developed in this article can be used to predict the interfacial area. The paper is structured as follows. In Section <ref>, we review the classical local balance laws which model two-phase flows and we show that the interfacial source terms are proportional to the interfacial area density. In Section <ref>, we obtain local instantaneous transport equations for the interfacial area density. In Section <ref>, we analyze the evolution of the unit normal and of the mean curvature. Some numerical experiments to validate the different relations are proposed in Section <ref>. Some conclusions and perspectives for future work are finally presented in Section <ref>. § LOCAL BALANCE EQUATIONS Let Ω⊂ℝ^d, 2 ≤ d ≤ 3 be a connected open bounded set with a sufficiently smooth boundary ∂Ω. The canonical form of a balance equation can be written as <cit.> ∂ρΨ/∂ t + ∇·(ρΨ𝐮) = ∇·𝐉 + ρ f in Ω. Herein Ψ is the conserved quantity (either a scalar or a tensorial one), ρ is the density, 𝐮 is the velocity, 𝐉 is the flux (molecular or diffusive) of the variable Ψ and f is the source density. For Ψ = 1, 𝐉 = 0 and f = 0, we obtain the continuity equation ∂ρ/∂ t + ∇·(ρ𝐮) = 0 in Ω. For Ψ = 𝐮, 𝐉 = 𝐓 and f = 𝐛, we get the balance of momentum ∂ρ𝐮/∂ t + ∇·(ρ𝐮⊗𝐮) = ∇·𝐓 + ρ𝐛 in Ω. Here 𝐓 is the stress tensor and 𝐛 is the body force. Finally, for Ψ = E, 𝐉 = 𝐓𝐮 - 𝐪 and f = 𝐛·𝐮 + r_heat, we obtain the balance of energy ∂ρ E/∂ t + ∇·(ρ E 𝐮) = ∇·(𝐓𝐮 - 𝐪) + ρ(𝐛·𝐮 + r_heat) in Ω, where E is the total energy per unit mass, 𝐪 is the heat flux and r_heat is the heating source per unit mass. From now on, we assume that the domain Ω consists of two subdomains Ω_1(t) and Ω_2(t), separated by an interface Γ(t). Hence, we consider a single discontinuity across a smooth surface separating two parts occupied by the fluid where the fields are smooth. We aim to analyze the aforementioned physical laws directly incorporating the interface conditions inside the balance equations. To do so, we define the characteristic function X_k of phase k as X_k(𝐱,t) = 1 if 𝐱∈Ω_k(t) and 0 otherwise, and we take the product of the equation (<ref>) with the characteristic function (<ref>) so as to obtain X_k∂ρΨ/∂ t + X_k∇·(ρΨ𝐮) = X_k∇·𝐉 + X_kρ f. Products including the characteristic functions are discontinuous, so that the derivatives must be treated in a distributional sense <cit.>. If we take the characteristic function inside the derivatives we get ∂(X_kρΨ)/∂ t + ∇·(X_kρΨ𝐮) - ∇·(X_k𝐉) - X_kρ f = ρΨ(∂ X_k/∂ t + 𝐮·∇ X_k) - 𝐉·∇ X_k. Notice that, with a slight abuse of notation, we employ the same symbol for both distributional and classical derivatives and the proper operator to be considered will follow from the context.
Let us briefly analyze the right-hand side of (<ref>), which can be rewritten as ρΨ(∂ X_k/∂ t + 𝐮· X_k) - 𝐉· X_k = ρΨ(∂ X_k/∂ t + 𝐮· X_k + 𝐯_I·∇ X_k - 𝐯_I· X_k) - 𝐉·∇ X_k = [ρΨ(𝐮 - 𝐯_I) - 𝐉] · X_k. The last equality stems from the so-called topological equation for phase k, which rules the evolution of the characteristic function <cit.>: ∂ X_k/∂ t + 𝐯_I· X_k = 0, where 𝐯_I is the interfacial velocity, whose definition will be specified in Section <ref>. We substitute (<ref>) into (<ref>) so as to obtain ∂(X_kρΨ)/∂ t + (X_kρΨ𝐮) - (X_k𝐉) - X_kρ f = [ρΨ(𝐮 - 𝐯_I) - 𝐉] ·∇ X_k. The right-hand side of (<ref>) is the total flux of the property Ψ of phase k across the interface Γ. As outlined in <cit.>, ∇ X_k is the product between δ(Γ), the Dirac delta distribution with support on the interface, and the outward unit normal from phase k 𝐧_k, namely ∇ X_k = -𝐧_kδ(Γ). The distribution δ(Γ) is defined by the following relation: <δ(Γ), ϕ> = ∫_Γϕ dΣ ∀ϕ∈ C^∞_0(Ω) and represents an interfacial area density. Indeed, with a slight abuse of notation, the following relation holds: ∫_Ωδ(Γ) dΩ = A, where A is the interface area. Hence, we remark that the interfacial flux terms on the right-hand side of (<ref>) are proportional to the interfacial area density, whose computation is therefore fundamental for an accurate estimate of these exchange terms. § LOCAL INSTANTANEOUS TRANSPORT EQUATIONS FOR THE INTERFACIAL AREA DENSITY In this Section, we analyze the evolution equations for the interfacial area density. We assume that the interface Γ(t) between two phases is a d-1-dimensional manifold in a d-dimensional Euclidean space with d = 2,3. Notice that we assume that the surface Γ(t) is closed and without contact lines. Two representations of an interface in space can be considered <cit.>. In the first description, the surface is seen as the zero-level of a suitable function F(𝐱,t), with 𝐱∈Ω denoting the spatial coordinates and t ∈(0, T_f] denoting the temporal coordinate. Here, T_f is the final time. The second representation is given by 𝐱 = 𝐱(α,t), where α are the surface coordinates. The velocity of a point on the surface with coordinates α is defined as 𝐯_I = ∂𝐱/∂ t. In the following, for the sake of simplicity, we omit the explicit dependence on space and time for the different geometric variables. Since F is identically zero for all the points located on the interface, its Lagrangian derivative at velocity 𝐯_I is null, which entails the following kinematic equation: ∂ F/∂ t + 𝐯_I·∇ F = 0. Moreover, the unit vector normal to the surface is given by: 𝐧 = ±∇ F/|∇ F|. From (<ref>) and (<ref>), it follows immediately that two different interfacial velocity fields with the same normal component yield the same Lagrangian derivative for F. Indeed, if we substitute (<ref>) into (<ref>), we obtain ∂ F/∂ t±(𝐯_I·𝐧)|∇ F| = 0. As already reported in Section <ref>, the characteristic function X_k (<ref>) follows the same dynamics (<ref>). We derive now the local instantaneous equations for the Dirac delta distribution δ(Γ) with support on the interface. For this purpose, we take the gradient of the evolution equation of the characteristic function (<ref>) and we obtain the following relation: ∂∇ X_k/∂ t + ∇(𝐯_I·∇ X_k) = 0. There are functions, like the outward unit normal vector or the interfacial velocity, whose definitions are properly meaningful only for the points on the surface Γ, as explained in <cit.>. Therefore, for these quantities, we have to employ derivatives that are defined intrinsically on the surface. 
The relations for time and space derivatives of this kind for a generic scalar function f have been introduced in <cit.>: ∂_s f/∂ t = ∂f̃/∂ t + (𝐯_I·𝐧)(∇f̃·𝐧) ∂_s f/∂ x_i = ∂f̃/∂ x_i - n_i(∇f̃·𝐧), where f̃ is any smooth extension of f outside Γ. In particular, for a first-order tensor 𝐟 we have 𝐟 = (𝐈-𝐧⊗𝐧):∇𝐟̃ = 𝐟̃ - [(∇𝐟̃)𝐧]·𝐧. Here, 𝐈 is the d × d identity tensor and we define the gradient of a first-order tensor 𝐟̃ as the second-order tensor whose components are [∇𝐟̃]_ij = ∂f̃_i/∂ x_j. Notice that, as explained in <cit.>, given f defined only on Γ, fδ(Γ) denotes the distribution f̃δ(Γ). It is useful that f̃ satisfies the condition ∇f̃·𝐧|_Γ = 0. Analogous considerations hold in the case 𝐟δ(Γ), for which one can impose [(∇𝐟̃)𝐧] |_Γ = 0. The smooth extensions f̃ and 𝐟̃ do not satisfy in general (<ref>) and (<ref>), respectively. However, relations (<ref>) and (<ref>) make the value of the surface derivatives independent of the value of f̃ or 𝐟̃ so that we can consider f and f̃ or 𝐟 and 𝐟̃ without distinction. On the other hand, if f or 𝐟 are already defined and regular in the whole space-time domain Ω×(0, T_f], one can define fδ(Σ) or 𝐟δ(Σ), but this distribution will also depend on the value of ∇ f ·𝐧|_Γ or of [(∇𝐟)𝐧]|_Γ. The following relation holds: ∂(f̃δ(Γ))/∂□ = ∂f̃/∂□δ(Γ) + ∂δ(Γ)/∂□f̃, which reduces to <cit.> ∂(f̃δ(Γ))/∂□ = ∂_sf/∂□δ(Γ) + ∂δ(Γ)/∂□f, for quantities f defined uniquely on Γ which satisfy (<ref>) and (<ref>). We consider now the outward unit normal as a function defined in the whole space-time domain by means of (<ref>). Applying (<ref>) and (<ref>) to ∂∇ X_k/∂ t, one obtains ∂∇ X_k/∂ t = -∂[δ(Γ)𝐧_k ]/∂ t = -∂δ(Γ)/∂ t𝐧_k - ∂𝐧_k/∂ tδ(Γ). If we substitute (<ref>) into (<ref>) and we compute the scalar product by 𝐧_k we obtain -∂δ(Γ)/∂ t - ∂𝐧_k/∂ tδ(Γ)·𝐧_k -[(𝐯_I·𝐧_k)𝐧_kδ(Γ)] + (𝐯_I·𝐧_k)(𝐧_k)δ(Γ) = 0. Since 𝐧_k·𝐧_k = 1, one can verify that ∂𝐧_k/∂ t·𝐧_k = 0 and, therefore, (<ref>) reduces to ∂δ(Γ)/∂ t + [(𝐯_I·𝐧_k)𝐧_kδ(Γ)] - (𝐯_I·𝐧_k)(𝐧_k)δ(Γ) = 0. This is a well known relation and it is described in several contributions <cit.>. In (<ref>), the term 𝐧_k is directly related to curvature effects. Indeed, the following relation holds <cit.>: H = -1/2𝐧_k, where H denotes the mean curvature. Equation (<ref>) can be rewritten in other forms. First of all, since 𝐧_k = 𝐧_k <cit.>, we immediately obtain ∂δ(Γ)/∂ t + [(𝐯_I·𝐧_k)𝐧_kδ(Γ)] - (𝐯_I·𝐧_k)(𝐧_k)δ(Γ) = 0. Moreover, (<ref>) is equivalent to ∂δ(Γ)/∂ t + (𝐯_I·𝐧_k)𝐧_k·∇δ(Γ) = -δ(Γ)∇(𝐯_I·𝐧_k)·𝐧_k, a relation present in <cit.>. Indeed, the following relation holds: [(𝐯_I·𝐧_k)𝐧_kδ(Γ)] = (𝐯_I·𝐧_k)𝐧_k·∇δ(Γ) + [(𝐯_I·𝐧_k)𝐧_k]δ(Γ) = (𝐯_I·𝐧_k)𝐧_k·∇δ(Γ) + (𝐯_I·𝐧_k)𝐧_kδ(Γ) + ∇(𝐯_I·𝐧_k) ·𝐧_kδ(Γ). If we substitute (<ref>) into (<ref>), we recover (<ref>). The relations (<ref>), (<ref>) and (<ref>) contain only the normal velocity (𝐯_I·𝐧_k)𝐧_k. To rewrite them in order to make the complete interfacial velocity 𝐯_I appear, we first define the tangential velocity 𝐯_I_t as 𝐯_I_t = 𝐯_I - (𝐯_I·𝐧_k)𝐧_k = (𝐈 - 𝐧_k⊗𝐧_k)𝐯_I. Adding and subtracting [𝐯_I_tδ(Γ)] to (<ref>), we obtain the following relation: ∂δ(Γ)/∂ t + [𝐯_Iδ(Γ)] - (𝐯_I·𝐧_k)(𝐧_k)δ(Γ) - [𝐯_I_tδ(Γ)] = 0. It can be proven <cit.> that [𝐯_I_tδ(Γ)] = δ(Γ)𝐯_I_t. We now propose a proof of relation (<ref>), which is valid considering 𝐯_I_t defined in the whole space-time domain Ω×(0, T_f]. First of all, since [𝐯_I_tδ(Γ)] = δ(Γ)𝐯_I_t + 𝐯_I_t·∇δ(Γ), we need to analyze ∇δ(Γ). 
Thanks to relation (<ref>), we obtain ∇(∇ X_k) = -∇(𝐧_kδ(Γ)) = -(∇𝐧_k)δ(Γ) - 𝐧_k⊗∇δ(Γ) ∇(∇ X_k)^T = -∇(𝐧_kδ(Γ))^T = -(∇𝐧_k)^Tδ(Γ) - ∇δ(Γ) ⊗𝐧_k. If we multiply the previous relations by 𝐧_k, we get ∇(∇ X_k)𝐧_k = -(∇𝐧_k)𝐧_kδ(Γ) - (𝐧_k⊗∇δ(Γ))𝐧_k = -(∇𝐧_k)𝐧_kδ(Γ) - (∇δ(Γ) ·𝐧_k) ·𝐧_k ∇(∇ X_k)^T𝐧_k = -(∇𝐧_k)^T𝐧_kδ(Γ) - (∇δ(Γ) ⊗𝐧_k)𝐧_k = -∇δ(Γ), where we exploited (∇𝐧_k)^T𝐧_k = 0. Since [∇(∇ X_k)] = [∇(∇ X_k)]^T, we obtain ∇δ(Γ) = (∇𝐧_k)𝐧_kδ(Γ) + (∇δ(Γ) ·𝐧_k)𝐧_k. Equation (<ref>) generalizes the relation for ∇δ(Γ) derived in <cit.>, which is valid when 𝐧_k is defined only on Γ. Hence, we get [𝐯_I_tδ(Γ)] = δ(Γ)𝐯_I_t + 𝐯_I_t·∇δ(Γ) = δ(Γ)𝐯_I_t + 𝐯_I_t·[(∇𝐧_k)𝐧_kδ(Γ) + (∇δ(Γ) ·𝐧_k)𝐧_k]. Since 𝐯_I_t·𝐧_k = 0 and ∇(𝐯_I_t·𝐧_k) = (∇𝐯_I_t)^T𝐧_k + (∇𝐧_k)^T𝐯_I_t = 0, we obtain from (<ref>) [𝐯_I_tδ(Γ)] = δ(Γ)𝐯_I_t - (∇𝐯_I_t)^T𝐧_k·𝐧_kδ(Γ) = δ(Γ)𝐯_I_t. If we now substitute (<ref>) into (<ref>), we get ∂δ(Γ)/∂ t + [𝐯_Iδ(Γ)] - (𝐯_I·𝐧_k)(𝐧_k)δ(Γ) - δ(Γ)𝐯_I_t = 0 It can be also shown <cit.> that 𝐯_I = 𝐯_I_t + (𝐯_I·𝐧_k)𝐧_k. Exploiting this relation in (<ref>) leads to the following equation <cit.>: ∂δ(Γ)/∂ t + [𝐯_Iδ(Γ)] = δ(Γ)𝐯_I. We can rewrite the term [𝐯_Iδ(Γ)] present in (<ref>) as [𝐯_Iδ(Γ)] = 𝐯_I·∇δ(Γ) + δ(Γ)𝐯_I and, noticing that 𝐯_I = 𝐯_I - [(∇𝐯_I)𝐧_k] ·𝐧_k, we get ∂δ(Γ)/∂ t + 𝐯_I·∇δ(Γ) = -δ(Γ)[(∇𝐯_I)𝐧_k] ·𝐧_k or, equivalently, ∂δ(Γ)/∂ t + 𝐯_I·∇δ(Γ) = -δ(Γ)𝐧_k⊗𝐧_k : ∇𝐯_I, a relation which has been derived in <cit.>. As discussed in <cit.>, (<ref>) is equivalent to ∂δ(Γ)/∂ t + (𝐯_I·𝐧_k)𝐧_k·∇δ(Γ) = -δ(Γ)∇(𝐯_I·𝐧_k) ·𝐧_k. One can also notice that [(∇𝐯_I)𝐧_k] ·𝐧_k = [(𝐯_I⊗𝐧_k)] ·𝐧_k - (𝐧_k)(𝐯_I·𝐧_k). Hence, we can rewrite (<ref>) as follows: ∂δ(Γ)/∂ t + 𝐯_I·∇δ(Γ) = δ(Γ)(𝐧_k)(𝐯_I·𝐧_k) - δ(Γ)[(𝐯_I⊗𝐧_k)] ·𝐧_k. Relation (<ref>) is particularly interesting because it provides an evolution equation for δ(Γ) in which the complete interfacial velocity is the advecting field and a quantity related to the curvature, i.e. 𝐧_k, appears as a forcing term. To the best of our knowledge, this novel formulation is not present in the literature we have reviewed and is presented here for the first time. Analogously, notice that [(∇𝐯_I)𝐧_k] ·𝐧_k = ∇(𝐯_I·𝐧_k) ·𝐧_k - (𝐧_k⊗𝐧_k) ·𝐯_I + (𝐧_k)(𝐯_I·𝐧_k), so as to obtain ∂δ(Γ)/∂ t + 𝐯_I·∇δ(Γ) = -δ(Γ)(𝐧_k)(𝐯_I·𝐧_k) -δ(Γ)∇(𝐯_I·𝐧_k) ·𝐧_k + δ(Γ)[(𝐧_k⊗𝐧_k)] ·𝐯_I. It is worthwhile to recall once more that, in all the previous relations, we have considered the outward unit normal vector and the interfacial velocity as variables already defined in the whole space-time domain Ω×(0, T_f]. Relations (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) are valid also by considering the outward unit normal vector and the interfacial velocity as functions uniquely defined on the interface Γ and then analyzing their extension, that, with a slight abuse of notation, we still denote by 𝐧_k and 𝐯_I. However, in this situation, relation (<ref>) allows to consider much simpler forms for the evolution equation of the interfacial area density. As evident from (<ref>) and reported in <cit.>, the following relation holds: ∇δ(Γ) = [∇δ(Γ) ·𝐧_k]𝐧_k. Indeed, since the extension of 𝐧_k satisfies property (<ref>), the term (∇𝐧_k)𝐧_kδ(Γ) is null and, therefore, (<ref>) reduces to (<ref>). Moreover, thanks to (<ref>), we can rewrite the term [𝐯_Iδ(Γ)] present in (<ref>) as [𝐯_Iδ(Γ)] = 𝐯_I·∇δ(Γ) + δ(Γ)𝐯_I so as to obtain from (<ref>) ∂δ(Γ)/∂ t + 𝐯_I·∇δ(Γ) = 0 or, substituting (<ref>) in (<ref>), ∂δ(Γ)/∂ t + (𝐯_I·𝐧_k)𝐧_k·∇δ(Γ) = 0, a relation present in <cit.>. 
Equation (<ref>) is clearly different from (<ref>). However, under the assumption that the fields involved depend only on the surface coordinates, the following relation holds <cit.>: δ(Γ)[∇(𝐯_I)𝐧_k] ·𝐧_k = 0 and, therefore, (<ref>) reduces to (<ref>). It is important to notice that relations (<ref>) and (<ref>) are rigorously valid and are equivalent to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), if only if one analyzes variables defined uniquely on the surface and then considers an extension satisfying the property (<ref>). This is essential when performing numerical approximation. Indeed, since it is not possible to work directly with distributions in a numerical framework and it is often advantageous using equations that are valid in the whole space-time domain rather than solving PDEs on surface, relations such as (<ref>) are not directly applicable for numerical simulations, as we will see in Section <ref>. Moreover, even employing regular functions, it is not possible to guarantee in general that (<ref>) and (<ref>) are satisfied, from which the equivalence between (<ref>), (<ref>) and all the other relations is established. As pointed out at the beginning of the Section, all the previous relations are only valid under the assumption that the surface Γ is closed without contact lines. In case this hypothesis is not valid, we need to add an extra term which takes into account contact lines <cit.>: ∂δ(Γ)/∂ t + [(𝐯_I·𝐧_k)𝐧_kδ(Γ)] = (𝐯_I·𝐧_k)(𝐧_k)δ(Γ) - δ(Δ)𝐯_Δ·𝐭_k. Here, δ(Δ) is the Dirac delta distribution with support on the contact lines, 𝐯_Δ represents the velocity field of the contact lines and 𝐭_k is the unit vector tangent to the interface and directed in outward normal direction with respect to the contact lines. Evolution equations for higher order statistics such as curvatures or unit normal vector can then be considered and will be analyzed in Section <ref>. § EVOLUTION EQUATIONS FOR THE UNIT NORMAL VECTOR AND FOR THE MEAN CURVATURE In this Section, we provide an overview of models for the evolution of geometric features of interfaces separating the two phases in two-phase flows, such as the unit normal vector and the mean curvature. We follow the analyses in <cit.>, but we do not restrict ourselves a priori to situations where the tangential interfacial velocity is equal to zero as in those references. Considering the complete interfacial velocity as advecting field can be important for those applications where the interface is not well resolved and, therefore, we cannot rely on the direct computation of the unit normal vector from gradient of the volume fraction. For the sake of simplicity in the notation, we will denote the unit normal vector by 𝐧 and we will consider the definition 𝐧 = ∇ F/|∇ F|. As already discussed in Section <ref>, relation (<ref>) defines the normal vector for the whole space-time domain Ω_T = Ω×(0, T_f]. We analyze now the time evolution of the normal vector 𝐧. After some manipulations, we obtain ∂𝐧/∂ t = 1/|∇ F|∂∇ F/∂ t - 1/|∇ F|^3(∇ F ·∂∇ F/∂ t)∇ F = 1/|∇ F|(𝐈 - 𝐧⊗𝐧)∂∇ F/∂ t. Taking the gradient of (<ref>), it follows ∂∇ F/∂ t = -(∇𝐯_I)^T∇ F - [∇(∇ F)]^T𝐯_I. Moreover, after some manipulations, it can be shown that ∇𝐧 = 1/|∇ F|[∇(∇ F) - 𝐧⊗∇(∇ F)^T𝐧] = 1/|∇ F|(𝐈 - 𝐧⊗𝐧)∇(∇ F). If we assume that F is sufficiently regular, thanks to the Schwarz theorem, [∇(∇ F)]^T = ∇(∇ F) and substituting (<ref>) into (<ref>) we obtain the following relation: ∂∇ F/∂ t = -(∇𝐯_I)^T∇ F - |∇ F|[(𝐈 - 𝐧⊗𝐧)^-1(∇𝐧)]𝐯_I. 
Finally, substituting (<ref>) into (<ref>), we obtain ∂𝐧/∂ t + (∇𝐧)𝐯_I = (𝐧⊗𝐧 - 𝐈)(∇𝐯_I)^T𝐧 or equivalently d 𝐧/d t = (𝐧⊗𝐧 - 𝐈)(∇𝐯_I)^T𝐧 = [(𝐧⊗𝐧) : ∇𝐯_I]𝐧 - (∇𝐯_I)^T𝐧, a relation derived in <cit.>. On the other hand, substituting (<ref>) into (<ref>), we obtain the following relation: ∂ F/∂ t + (𝐯_I·𝐧)|∇ F| = 0. Taking the gradient of (<ref>), we get ∂∇ F/∂ t + ∇(𝐯_I·𝐧)|∇ F| + (𝐯_I·𝐧)∇(|∇ F|) = 0. Since ∇(|∇ F|) = [∇(∇ F)]^T𝐧 = [∇(∇ F)]𝐧, we obtain from (<ref>) ∂∇ F/∂ t = -∇(𝐯_I·𝐧)|∇ F| - (𝐯_I·𝐧)[∇(∇ F)]𝐧. Substituting (<ref>) into (<ref>), we obtain ∂∇ F/∂ t = -∇(𝐯_I·𝐧)|∇ F| - (𝐯_I·𝐧)|∇ F|(𝐈 - 𝐧⊗𝐧)^-1(∇𝐧)𝐧. If we employ the previous relation into (<ref>), we obtain ∂𝐧/∂ t + (𝐯_I·𝐧)(∇𝐧)𝐧 = (𝐧⊗𝐧 - 𝐈)∇(𝐯_I·𝐧) or, equivalently, thanks to (<ref>) and (<ref>) ∂_s𝐧/∂ t = -∇_s(𝐯_I·𝐧). Notice that equation (<ref>) can be directly obtained from (<ref>) considering only the normal component of the interfacial velocity, namely taking 𝐯_I = (𝐯_I·𝐧)𝐧. This further confirms the consideration in Section <ref> that two interfacial velocity fields with the same normal component yield the same Lagrangian derivative. Notice also that, as discussed in Section <ref>, if one considers the unit normal vector and the interfacial velocity as defined uniquely on the interface and analyzes their extension, (∇𝐧)𝐧 = 0 and ∇(𝐯_I·𝐧) ·𝐧 = 0. Hence, (<ref>) reduces to ∂𝐧/∂ t = -∇(𝐯_I·𝐧), a relation present in <cit.>. Moreover, if |∇ F| is constant, the second order tensor ∇𝐧 is symmetric and, therefore, in this situation, (<ref>) reduces to ∂𝐧/∂ t = (𝐧⊗𝐧 - 𝐈)∇(𝐯_I·𝐧). By comparing (<ref>) and (<ref>), we obtain the following relation: (∇𝐧)𝐯_I_t + (𝐧⊗𝐧 - 𝐈)(∇𝐧)^T𝐯_I = (∇𝐧)𝐯_I_t + (𝐧⊗𝐧 - 𝐈)(∇𝐧)^T𝐯_I_t = 0. It is also of interest to study the behaviour of the material derivative of the normal vector following the surface. The convective derivative following the surface can be simplified as follows: d_s𝐧/dt = ∂_s𝐧/∂ t + (∇_s𝐧)[𝐯_I_t + (𝐯_I·𝐧)𝐧] = ∂_s𝐧/∂ t + (∇_s𝐧)𝐯_I_t, since (∇_s𝐧)𝐧 = 0. This is a direct consequence of the relation ∇_s𝐧 = ∇𝐧 - (∇𝐧)𝐧⊗𝐧 = ∇𝐧 - ∇𝐧(𝐧⊗𝐧) = ∇𝐧(𝐈 - 𝐧⊗𝐧). Furthermore, thanks to (<ref>), the following identity holds: d_s𝐧/dt = ∂_s𝐧/∂ t + (∇_s𝐧)𝐯_I = ∂𝐧/∂ t + (𝐯_I·𝐧)(∇𝐧)𝐧 + (∇𝐧)(𝐈 - 𝐧⊗𝐧)𝐯_I = ∂𝐧/∂ t + (𝐯_I·𝐧)(∇𝐧)𝐧 + (∇𝐧)𝐯_I - (𝐯_I·𝐧)(∇𝐧)𝐧 = ∂𝐧/∂ t + (∇𝐧)𝐯_I = d𝐧/dt. It is interesting to point out that an analogous relation holds for a generic scalar field f; indeed: d_sf/dt = ∂_s f/∂ t + 𝐯_I·∇_sf = ∂ f/∂ t + (𝐯_I·𝐧)(∇ f ·𝐧) + 𝐯_I·[∇ f - (∇ f ·𝐧)𝐧] = ∂_s f/∂ t + 𝐯_I·∇ f + (𝐯_I·𝐧)(∇ f ·𝐧) - (𝐯_I·𝐧)(∇ f ·𝐧) = ∂ f/∂ t + 𝐯_I·∇ f = df/dt. The mean curvature H is directly linked to the outward unit normal by relation <cit.> H = -1/2𝐧. Taking the divergence of (<ref>), we derive the evolution equation for the mean curvature (<ref>). Notice that [(∇𝐧)𝐯_I] = -2𝐯_I·∇ H + ∇𝐧 : (∇𝐯_I)^T and that [(𝐧⊗𝐧 - 𝐈)(∇𝐯_I)^T𝐧] = -2H(∇𝐯_I)𝐧·𝐧 + (∇𝐯_I)^T𝐧·(∇𝐧)𝐧 + (𝐧⊗𝐧 - 𝐈) : [∇[(∇𝐯_I)^T𝐧]]^T. Hence, the evolution equation for the mean curvature reads as follows: ∂ H/∂ t + 𝐯_I·∇ H = H(∇𝐯_I)𝐧·𝐧 + 1/2∇𝐧 : (∇𝐯_I)^T - 1/2(∇𝐯_I)^T𝐧·(∇𝐧)𝐧 - 1/2(𝐧⊗𝐧 - 𝐈) : [∇[(∇𝐯_I)^T𝐧]]^T. Starting from (<ref>), we obtain the following relation: ∂ H/∂ t + (𝐯_I·𝐧)𝐧·∇ H = H∇(𝐯_I·𝐧) ·𝐧 - 1/2(𝐧⊗𝐧 - 𝐈) : ∇[∇(𝐯_I·𝐧)] + 1/2(∇𝐧) : ∇[(𝐯_I·𝐧)𝐧]^T - 1/2(∇𝐧)𝐧·∇(𝐯_I·𝐧). Notice that, as already observed for (<ref>) and (<ref>), relation (<ref>) can be directly obtained from (<ref>) considering only the normal component of the interfacial velocity. 
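The curvature convention H = -½ ∇·𝐧 used throughout can be checked with a short finite-difference sketch: for a circular level set, the divergence of the unit normal sampled near the interface should approach 1/R, so that H ≈ -1/(2R). The grid, radius and sampling band below are arbitrary illustrative choices.

```python
# Small finite-difference sketch (illustrative, not from the paper): build the
# unit normal n = grad(F)/|grad(F)| from a circular level set F and evaluate
# div(n) near the interface; it should approach 1/R for a circle of radius R,
# consistent with H = -div(n)/2 = -1/(2R) in the sign convention used above.
import numpy as np

n_pts, R = 400, 0.25
x = np.linspace(0.0, 1.0, n_pts)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
F = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - R

Fx, Fy = np.gradient(F, h, h)                 # grad(F) by central differences
norm = np.sqrt(Fx**2 + Fy**2) + 1e-14
nx, ny = Fx / norm, Fy / norm                 # outward unit normal n

div_n = np.gradient(nx, h, axis=0) + np.gradient(ny, h, axis=1)
band = np.abs(F) < 2.0 * h                    # sample only near the interface
print(div_n[band].mean(), 1.0 / R)            # div(n) ~ 1/R, hence H ~ -1/(2R)
```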
We recall here the relation between the Gaussian curvature K and the unit normal vector <cit.>: K = 1/2[𝐧(𝐧) + 𝐧×(∇×𝐧)]. After some manipulations (see Appendix <ref>), it can be shown that (<ref>) reduces to ∂_sH/∂ t = (2H^2 - K)(𝐯_I·𝐧) + 1/2[∇_s(𝐯_I·𝐧)]. The relation (<ref>) represents an extension of the evolution equation derived in <cit.>, which we report here for the convenience of the reader: ∂(-H)/∂ t = -(2H^2 - K)(𝐯_I·𝐧) - 1/2[∇(𝐯_I·𝐧)]. Notice that in <cit.>, the definition H = 1/2𝐧 was adopted. However, for the sake of coherence with the results reported in Section <ref>, we rely on (<ref>). The relation (<ref>) reduces to (<ref>) if all the variables are uniquely defined on the interface and one considers extensions which satisfy the properties (<ref>) and (<ref>). § NUMERICAL TESTS In this Section, we compare numerically some of the relations presented in Section <ref>. In particular, we focus on (<ref>), (<ref>), and (<ref>). (<ref>) and (<ref>) are clearly different and, since we cannot work with distributions when performing numerical approximations, a different behaviour is expected. For the sake of simplicity in the notation, we omit the explicit dependence on the interface for the Dirac delta δ(Γ). Before proceeding to the description of relevant test cases, we outline the time and space discretization strategies chosen for (<ref>), (<ref>), and (<ref>). First of all, we point out that the interfacial velocity 𝐯_I and the outward unit normal vector 𝐧 are considered given. Indeed, they are obtained from Navier-Stokes equations coupled with a level set method, which are solved independently of (<ref>) and (<ref>). For this purpose, we employ the implicit DG solver for incompressible two-phase flows with an artificial compressibility formulation developed in <cit.> as an extension of the solver for single-phase incompressible Navier-Stokes equations presented in <cit.>, to which we both refer for a detailed description of the discretization scheme. The conservative level set (CLS) method <cit.> in combination with a reinitialization procedure is employed to capture the moving interface. The time discretization is based on the L-stable two-stage TR-BDF2 method, which has been originally developed in <cit.> as a combination of the trapezoidal rule and of the Backward Differentiation Formula method of order 2 (BDF2). We refer also to <cit.> for a complete analysis of the scheme. Let Δ t be a discrete time step and t^n = nΔ t, n = 0, …, N, be discrete time levels. The first stage for (<ref>) reads as follows: δ^n + γ - δ^n/γΔ t + 1/2𝐯_I^n+γ·∇δ^n+γ + 1/2𝐯_I^n·∇δ^n = 0, where γ = 2 - √(2). The second stage reads as follows: δ^n+1 - δ^n+γ/(1-γ)Δ t + a_33𝐯_I^n + 1·∇δ^n+1 + a_32𝐯_I^n+γ·∇δ^n+γ + a_31𝐯_I^n·∇δ^n = 0, where a_31 = 1-γ/2(2-γ) a_32 = 1-γ/2(2-γ) a_33 = 1/2-γ. Analogously, the first stage for (<ref>) reads as follows: δ^n + γ - δ^n/γΔ t + 1/2𝐯_I^n+γ·∇δ^n+γ + 1/2𝐯_I^n·∇δ^n = -1/2δ^n+γ𝐧^n+γ⊗𝐧^n+γ : ∇𝐯_I^n+γ -1/2δ^n𝐧^n⊗𝐧^n : ∇𝐯_I^n, whereas the second stage reads as follows: δ^n+1 - δ^n+γ/(1-γ)Δ t + a_33𝐯_I^n+1·∇δ^n+1 + a_32𝐯_I^n+γ·∇δ^n+γ + a_31𝐯_I^n·∇δ^n = -a_33δ^n+1𝐧^n+1⊗𝐧^n+1 : ∇𝐯_I^n+1 -a_32δ^n+γ𝐧^n+γ⊗𝐧^n+γ : ∇𝐯_I^n+γ -a_31δ^n𝐧^n⊗𝐧^n : ∇𝐯_I^n. 
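For readers unfamiliar with TR-BDF2, the following sketch applies the two stages quoted above, with γ = 2 - √2 and the coefficients a_31, a_32, a_33, to the scalar model problem y' = -a y standing in for the semi-discretized transport operator. It is an illustrative toy problem, not the solver used for the simulations.

```python
# Minimal sketch of the two-stage TR-BDF2 update described above, applied to the
# scalar model problem y' = -a*y in place of the full semi-discretized transport
# operator (an illustrative stand-in). The stage coefficients gamma, a31, a32,
# a33 are the ones quoted in the text.
import numpy as np

gamma = 2.0 - np.sqrt(2.0)
a31 = (1.0 - gamma) / (2.0 * (2.0 - gamma))
a32 = (1.0 - gamma) / (2.0 * (2.0 - gamma))
a33 = 1.0 / (2.0 - gamma)

def tr_bdf2_step(y_n, a, dt):
    """One TR-BDF2 step for y' = -a*y, with both implicit stages solved exactly."""
    # stage 1: trapezoidal rule from t^n to t^{n+gamma}
    y_ng = y_n * (1.0 - 0.5 * gamma * dt * a) / (1.0 + 0.5 * gamma * dt * a)
    # stage 2: BDF2-type combination from t^{n+gamma} to t^{n+1}
    rhs = y_ng + (1.0 - gamma) * dt * (a32 * (-a * y_ng) + a31 * (-a * y_n))
    return rhs / (1.0 + (1.0 - gamma) * dt * a33 * a)

a_rate, dt, T = 2.0, 0.05, 1.0
y, t = 1.0, 0.0
while t < T - 1e-12:
    y = tr_bdf2_step(y, a_rate, dt)
    t += dt
print(y, np.exp(-a_rate * T))   # second-order accurate approximation of exp(-a*T)
```

The same two-stage structure carries over to the implicit systems in the weak formulations below, where each stage requires the solution of a linear system instead of a scalar division.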
Finally, the first stage for (<ref>) reads as follows: δ^n + γ - δ^n/γΔ t + 1/2𝐯_I^n+γ·∇δ^n+γ + 1/2𝐯_I^n·∇δ^n = 1/2δ^n + γ(𝐧^n + γ)(𝐯_I^n + γ·𝐧^n + γ) - 1/2δ^n + γ[(𝐯_I^n + γ⊗𝐧^n + γ)] ·𝐧^n + γ + 1/2δ^n(𝐧^n)(𝐯_I^n·𝐧^n) - 1/2δ^n[(𝐯_I^n⊗𝐧^n)] ·𝐧^n, whereas the second stage reads as follows: δ^n+1 - δ^n+γ/(1-γ)Δ t + a_33𝐯_I^n+1·∇δ^n+1 + a_32𝐯_I^n+γ·∇δ^n+γ + a_31𝐯_I^n·∇δ^n = -a_33δ^n+1(𝐧^n+1)(𝐯_I^n+1·𝐧^n+1) - a_33δ^n+1[(𝐯_I^n+1⊗𝐧^n+1)] ·𝐧^n+1 -a_32δ^n + γ(𝐧^n + γ)(𝐯_I^n + γ·𝐧^n + γ) - a_32δ^n + γ[(𝐯_I^n + γ⊗𝐧^n + γ)] ·𝐧^n + γ -a_31δ^n(𝐧^n)(𝐯_I^n·𝐧^n) - a_31δ^n[(𝐯_I^n⊗𝐧^n)] ·𝐧^n. CLS method assumes that the level set function is a regularized Heaviside function. We recall here the relation between δ(Γ) and δ(F), namely the Dirac delta distribution with support equal to the function F. As discussed in <cit.>, the following relation holds: δ(Γ) = δ(F)|∇ F|, so that we can set δ^0 = |∇ϕ^0|, with ϕ denoting the level set function. For what concerns the spatial discretization, we consider discontinuous finite element approximations. The spatial discretization coincides with that described in <cit.> and implemented in the deal.II library, which has been employed for the numerical simulations. We consider a decomposition of the domain Ω into a triangulation 𝒯_h and denote each element by K. The skeleton ℰ denotes the set of all element faces and ℰ = ℰ^I∪ℰ^B, where ℰ^I is the subset of interior faces and ℰ^B is the subset of boundary faces. Suitable jump and average operators can then be defined as customary for discontinuous finite element discretizations. A face e ∈ℰ^I shares two elements that we denote by K^+ with outward unit normal 𝐧_e^+ and K^- with outward unit normal 𝐧_e^-, whereas for a face e ∈ℰ^B we denote by 𝐧_e the outward unit normal. For a scalar function φ the jump is defined as [[φ]] = φ^+𝐧_e^+ + φ^-𝐧_e^- if e ∈ℰ^I [[φ]] = φ𝐧_e if e ∈ℰ^B, whereas the average is defined as {{φ}} = 1/2(φ^+ + φ^-) if e ∈ℰ^I {{φ}} = φ if e ∈ℰ^B. Similar definitions apply for a vector function φ: [[φ]] = φ^+·𝐧_e^+ + φ^-·𝐧_e^- if e ∈ℰ^I [[φ]] = φ·𝐧_e if e ∈ℰ^B {{φ}} = 1/2(φ^+ + φ^-) if e ∈ℰ^I {{φ}} = φ if e ∈ℰ^B. We now introduce the following finite element space: Q_k = {v ∈ L^2(Ω) : v|_K∈ℚ_k ∀ K ∈𝒯_h}, where ℚ_k is the space of polynomials of degree k in each coordinate direction. We consider X_h = Q_2 for the discretization of δ. Given these definitions, the weak formulation associated to (<ref>) reads as follows: ∑_K ∈𝒯_h∫_Kδ^n+γ/γΔ tw dΩ + 1/2∑_K ∈𝒯_h∫_K𝐯_I^n+γ·∇δ^n+γ w dΩ + 1/2∑_e ∈ℰ∫_e{{δ^n+γ𝐯_I^n+γ}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐯_I^n+γ}}·[[δ^n+γw]]dΣ + 1/2∑_e ∈ℰ∫_eλ^n + γ/2[[δ^n+γ]] ·[[w]]dΣ = ∑_K ∈𝒯_h∫_Kδ^n/γΔ tw dΩ - 1/2∑_K ∈𝒯_h∫_K𝐯_I^n·∇δ^n w dΩ - 1/2∑_e ∈ℰ∫_e{{δ^n𝐯_I^n}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐯_I^n}}·[[δ^nw]]dΣ - 1/2∑_e ∈ℰ∫_eλ^n/2[[δ^n]] ·[[w]]dΣ ∀ w ∈ X_h, where λ^n+γ = max(|(𝐯_I^n+γ)^+·𝐧_e^+|, |(𝐯_I^n+γ)^-·𝐧_e^-|) λ^n = max(|(𝐯_I^n)^+·𝐧_e^+|, |(𝐯_I^n)^-·𝐧_e^-|). The numerical approximation of the non-conservative term is based on the approach proposed in <cit.> and recalled in <cit.>. We recast the non-conservative term into the sum of two contributions: the first one takes into account the elementwise gradient, whereas the second one considers its jumps across the element faces. 
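The initialization δ^0 = |∇φ^0| can be pictured with a small sketch. It assumes the common CLS profile φ = 1/(1 + exp(d/ε)) for the regularized Heaviside (an assumption made here for illustration, since the precise profile is not restated in the text) and verifies that |∇φ^0| integrates to the perimeter of a circular interface.

```python
# Brief sketch (illustrative assumptions): take the conservative level set as the
# regularized Heaviside phi = 1/(1 + exp(d/eps)) of the signed distance d, a
# common CLS choice, and check that delta^0 = |grad(phi^0)| integrates to the
# bubble perimeter 2*pi*r.
import numpy as np

n, r = 400, 0.25
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
d = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - r     # signed distance to the bubble
eps = 1.5 * h                                    # interface thickness parameter
phi0 = 1.0 / (1.0 + np.exp(d / eps))             # regularized Heaviside (CLS)

gx, gy = np.gradient(phi0, h, h)
delta0 = np.sqrt(gx**2 + gy**2)                  # delta^0 = |grad(phi^0)|
print(delta0.sum() * h * h, 2.0 * np.pi * r)     # both approximate the perimeter
```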
Analogously, the weak formulation for (<ref>) reads as follows: ∑_K ∈𝒯_h∫_Kδ^n+γ/γΔ tw dΩ + 1/2∑_K ∈𝒯_h∫_K𝐯_I^n+γ·∇δ^n+γ w dΩ + 1/2∑_e ∈ℰ∫_e{{δ^n+γ𝐯_I^n+γ}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐯_I^n+γ}}·[[δ^n+γw]]dΣ + 1/2∑_e ∈ℰ∫_eλ^n + γ/2[[δ^n+γ]] ·[[w]]dΣ + 1/2∑_K ∈𝒯_h∫_Kδ^n+γ𝐧^n+γ⊗𝐧^n+γ : ∇𝐯_I^n+γ w dΩ + 1/2∑_e ∈ℰ∫_e{{(𝐯_I^n+γ·𝐧^n+γ)𝐧^n+γδ^n+γ}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐧^n+γ⊗𝐧^n+γδ^n+γ}} : <<𝐯_I^n+γw>> = ∑_K ∈𝒯_h∫_Kδ^n/γΔ tw dΩ - 1/2∑_K ∈𝒯_h∫_K𝐯_I^n·∇δ^n w dΩ - 1/2∑_e ∈ℰ∫_e{{δ^n𝐯_I^n}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐯_I^n}}·[[δ^nw]]dΣ - 1/2∑_e ∈ℰ∫_eλ^n/2[[δ^n]] ·[[w]]dΣ - 1/2∑_K ∈𝒯_h∫_Kδ^n𝐧^n⊗𝐧^n : ∇𝐯_I^n w dΩ - 1/2∑_e ∈ℰ∫_e{{(𝐯_I^n·𝐧^n)𝐧^nδ^n}}·[[w]]dΣ + 1/2∑_e ∈ℰ∫_e{{𝐧^n⊗𝐧^nδ^n}} : <<𝐯_I^nw>> ∀ w ∈ X_h. Finally, after some manipulations, we can rewrite the weak formulation for (<ref>) as follows: ∑_K ∈𝒯_h∫_Kδ^n+γ/γΔ tw dΩ + 1/2∑_K ∈𝒯_h∫_K𝐯_I^n+γ·∇δ^n+γ w dΩ + 1/2∑_e ∈ℰ∫_e{{δ^n+γ𝐯_I^n+γ}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐯_I^n+γ}}·[[δ^n+γw]]dΣ + 1/2∑_e ∈ℰ∫_eλ^n + γ/2[[δ^n+γ]] ·[[w]]dΣ + 1/2∑_K ∈𝒯_h∫_Kδ^n+γ∇𝐯_I^n+γ𝐧^n+γ·𝐧^n+γ w dΩ + 1/2∑_e ∈ℰ∫_e{{𝐯_I^n+γ·𝐧^n+γ}}[[𝐧^n+γδ^n+γw]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐧^n+γδ^n+γ}} : [[𝐯_I^n+γ⊗𝐧^n+γw]] = ∑_K ∈𝒯_h∫_Kδ^n/γΔ tw dΩ - 1/2∑_K ∈𝒯_h∫_K𝐯_I^n·∇δ^n w dΩ - 1/2∑_e ∈ℰ∫_e{{δ^n𝐯_I^n}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐯_I^n}}·[[δ^nw]]dΣ - 1/2∑_e ∈ℰ∫_eλ^n/2[[δ^n]] ·[[w]]dΣ - 1/2∑_K ∈𝒯_h∫_Kδ^n𝐧^n⊗𝐧^n : ∇𝐯_I^n w dΩ - 1/2∑_e ∈ℰ∫_e{{𝐯_I^n·𝐧^n}}·[[𝐧^nδ^nw]]dΣ + 1/2∑_e ∈ℰ∫_e{{𝐧^nδ^n}} : [[𝐯_I^n⊗𝐧^nw]] ∀ w ∈ X_h. The second TR-BDF2 stage can be described in a similar manner according to the formulations reported in (<ref>), (<ref>), and (<ref>). §.§ Rising bubble benchmark The two-dimensional rising bubble is a well-established benchmark for incompressible two-phase flows <cit.>. Moreover, this test case is particularly relevant since the motion of a single bubble is tracked and, therefore, the perimeter (that, in two dimensions, plays the role of the interfacial area) can be computed accurately. Two configurations are considered, whose physical relevant parameters and non-dimensional numbers are listed in Table <ref> and Table <ref>, respectively. The initial configuration consists of a circular bubble of radius r_0 = 0.25 centered at (x_0,y_0) = (0.5, 0.5) in the rectangular domain Ω = (0,1) ×(0,2). The density of the bubble is smaller than that of the surrounding fluid. The final time is T_f = 3. No-slip boundary conditions are imposed on the top and bottom boundaries, whereas periodic conditions are prescribed in the horizontal direction. The initial velocity field is zero. We start with the first configuration. The computational grid is composed by 320 × 640 elements, whereas the time step is Δ t ≈ 4.3 · 10^-3. Figure <ref> shows the numerical results obtained using both formulation (<ref>) and (<ref>) in comparison with the perimeter computed from the level set function and the reference results in <cit.>. One can easily notice that (<ref>) leads to non reliable results, whereas a good qualitative agreement is established using (<ref>). This further confirms the considerations reported in Section <ref>: while (<ref>) and (<ref>) are equivalent in the sense of distributions, they lead to significantly different numerical results and only a formulation which is valid for the whole space-time domain Ω×(0, T_f] should be used for numerical simulations. Figure <ref> shows instead a comparison between (<ref>) and (<ref>). The computational results are very similar, showing that the two relations are equivalent in the whole space-time domain, as discussed in Section <ref>. 
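A complementary sanity check, independent of the DG solver, is that the stretching term on the right-hand side of the transport equation vanishes for a rigid rotation, whose velocity gradient is antisymmetric, so the perimeter is preserved. The sketch below evaluates the pointwise factor (𝐧⊗𝐧) : ∇𝐯_I on the benchmark bubble for an illustrative rotational velocity field; the grid and the velocity field are arbitrary choices made only for this check.

```python
# Illustrative check (not the paper's DG solver): for a rigid rotation
# v = (-(y-yc), x-xc) the velocity gradient is antisymmetric, so the stretching
# factor (n x n) : grad(v) vanishes pointwise and the perimeter is preserved.
# The check uses the benchmark bubble of radius 0.25 centred at (0.5, 0.5).
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 2.0, 2 * n)
hx, hy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")
F = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.25

Fx, Fy = np.gradient(F, hx, hy)
norm = np.sqrt(Fx**2 + Fy**2) + 1e-14
nx, ny = Fx / norm, Fy / norm                   # unit normal from the level set

vx, vy = -(Y - 0.5), (X - 0.5)                  # rigid rotation about the centre
dvx_dx, dvx_dy = np.gradient(vx, hx, hy)
dvy_dx, dvy_dy = np.gradient(vy, hx, hy)

# (n x n) : grad(v) = nx*nx*dvx/dx + nx*ny*dvx/dy + ny*nx*dvy/dx + ny*ny*dvy/dy
source = nx*nx*dvx_dx + nx*ny*dvx_dy + ny*nx*dvy_dx + ny*ny*dvy_dy
print(np.abs(source).max())                     # ~ 0: no stretching of the interface
```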
We analyze now the second configuration. This configuration is more challenging since the density of the bubble is much lower and it develops a non-convex shape with thin filament regions <cit.>. The time step is Δ t ≈ 3.6 · 10^-3. Figure <ref> shows the numerical results obtained using both formulation (<ref>) and (<ref>) in comparison with the perimeter computed from the level set function and the reference results in <cit.>. The same considerations of the previous configuration hold and, in particular, one can easily notice an excellent agreement between the reference solution and the numerical results obtained using (<ref>). Hence, this kind of relations are able to provide reliable values for the perimeter even when the interface undergoes large deformations and stretching, which are modelled by the terms on the right-hand side of (<ref>) and (<ref>). Finally, Figure <ref> shows a comparison between (<ref>) and (<ref>). One can easily notice once more that very similar results are obtained. We can empirically conclude from the numerical results that the right-hand side of (<ref>) and (<ref>) take into account stretching and deformation phenomena that increase the perimeter of the bubble. § CONCLUSIONS We have analyzed the evolution equations for a set of geometrical quantities that characterize the interface in two-phase flows. Several analytical relations have been presented, clarifying the hypotheses under which each equation is valid. More specifically, we have first reviewed the local balance laws which model two-phase flows and we have highlighted that interfacial source terms are proportional to the interfacial area density. Then, we have analyzed transport equations for the interfacial area density, for the unit normal vector and for the mean curvature. In particular, we have derived evolution equations in which the advecting field for these quantities is the complete interfacial velocity. This kind of relations can be important for applications where the interface is not well resolved. Finally, we have verified numerically the model equations on the classical rising bubble benchmark, showing their significantly different behaviour in a numerical framework. In future work, we plan to incorporate the relations for the geometrical quantities into Navier-Stokes equations and multifluid Baer-Nunziato type models for two-phase flows in order to compute terms related to surface tension and, more in general, to phase exchange. § ACKNOWLEDGMENTS The authors gratefully acknowledge N. Parolini for providing the original data of the rising bubble test case discussed in Section <ref>. L.B. and G.O. have been partially supported by the ESCAPE-2 project, European Union’s Horizon 2020 Research and Innovation Programme (Grant Agreement No. 800897). § EQUIVALENCE BETWEEN EVOLUTION EQUATIONS OF MEAN CURVATURE In this Appendix, we prove the equivalence between (<ref>) and (<ref>). We first notice that ∇_s(𝐯_I·𝐧) = ∇(𝐯_I·𝐧) - [∇(𝐯_I·𝐧) ·𝐧]𝐧 = (𝐈 - 𝐧⊗𝐧)∇(𝐯_I·𝐧) and that [∇_s(𝐯_I·𝐧)] = (𝐈 - 𝐧⊗𝐧) : ∇[∇_s(𝐯_I·𝐧)] = (𝐈 - 𝐧⊗𝐧) : ∇[(𝐈 - 𝐧⊗𝐧)∇(𝐯_I·𝐧)]. Since ∇[(𝐈 - 𝐧⊗𝐧)∇(𝐯_I·𝐧)] = -∇𝐧[∇(𝐯_I·𝐧) ·𝐧] - 𝐧⊗(∇𝐧)^T∇(𝐯_I·𝐧) + (𝐈 - 𝐧⊗𝐧)∇[∇(𝐯_I·𝐧)], we obtain (𝐈 - 𝐧⊗𝐧) : ∇[(𝐈 - 𝐧⊗𝐧)∇(𝐯_I·𝐧)] = - 2H∇(𝐯_I·𝐧) ·𝐧 + (𝐈 - 𝐧⊗𝐧) : ∇[∇(𝐯_I·𝐧)]. If we substitute into (<ref>), we obtain ∂_s H/∂ t = (2H^2 - K)(𝐯_I·𝐧) + H∇(𝐯_I·𝐧) ·𝐧 - 1/2(𝐧⊗𝐧 - 𝐈) : ∇[∇(𝐯_I·𝐧)]. 
Comparing (<ref>) with (<ref>), since ∂_sH/∂ t = ∂ H/∂ t + (𝐯_I·𝐧)𝐧·∇ H, we notice that the equivalence between (<ref>) and (<ref>) is established if 1/2∇𝐧 : ∇[(𝐯_I·𝐧)𝐧]^T - 1/2(∇𝐧)𝐧·∇(𝐯_I·𝐧) = (2H^2 - K)(𝐯_I·𝐧). Starting from (<ref>), we notice that K = 2H^2 - 𝐧·∇ H + 1/2|∇×𝐧|^2 - 1/2𝐧·[∇×(∇×𝐧)] and, therefore, we get (2H^2 - K)(𝐯_I·𝐧) = (𝐯_I·𝐧)𝐧·∇ H - 1/2(𝐯_I·𝐧)|∇×𝐧|^2 + 1/2(𝐯_I·𝐧)𝐧·[∇×(∇×𝐧)]. Hence, (<ref>) reduces to ∂ H/∂ t = H∇(𝐯_I·𝐧) ·𝐧 - 1/2(𝐧⊗𝐧 - 𝐈) : ∇[∇(𝐯_I·𝐧)] - 1/2(𝐯_I·𝐧)|∇×𝐧|^2 + 1/2(𝐯_I·𝐧)𝐧·[∇×(∇×𝐧)]. Since ∇×(∇×𝐧) = ∇(𝐧) - (∇𝐧) = 2∇ H - (∇𝐧) and |∇×𝐧|^2 = -[(∇𝐧)] ·𝐧 - ∇𝐧 : (∇𝐧)^T, we obtain ∂ H/∂ t + (𝐯_I·𝐧)𝐧·∇ H = H∇(𝐯_I·𝐧) ·𝐧 - 1/2(𝐧⊗𝐧 - 𝐈) : ∇[∇(𝐯_I·𝐧)] + 1/2(𝐯_I·𝐧)∇𝐧 : (∇𝐧)^T. Finally, since (𝐯_I·𝐧)∇𝐧 : (∇𝐧)^T = ∇𝐧 : ∇[(𝐯_I·𝐧)𝐧]^T - (∇𝐧)𝐧·∇(𝐯_I·𝐧), we recover (<ref>). plain
http://arxiv.org/abs/2307.04195v2
20230709150234
Natural Language Instructions for Intuitive Human Interaction with Robotic Assistants in Field Construction Work
[ "Somin Park", "Xi Wang", "Carol C. Menassa", "Vineet R. Kamat", "Joyce Y. Chai" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.HC" ]
Natural Language Instructions for Intuitive Human Interaction with Robotic Assistants in Field Construction Work Somin Park^1^1Dept. of Civil and Env. Engineering, University of Michigan {somin, menassa, vkamat}@umich.edu., Xi Wang^2^2Dept. of Construction Science, Texas A&M University, [email protected]., Carol C. Menassa^1, Vineet R. Kamat^1, and Joyce Y. Chai^3^3Dept. of Elec. Engineering and Computer Science, University of Michigan, [email protected]. August 12, 2023 =========================================================================================================================================================================================================================================================================================================================================================== The introduction of robots is widely considered to have significant potential of alleviating the issues of worker shortage and stagnant productivity that afflict the construction industry. However, it is challenging to use fully automated robots in complex and unstructured construction sites. Human-Robot Collaboration (HRC) has shown promise of combining human workers’ flexibility and robot assistants’ physical abilities to jointly address the uncertainties inherent in construction work. When introducing HRC in construction, it is critical to recognize the importance of teamwork and supervision in field construction and establish a natural and intuitive communication system for the human workers and robotic assistants. Natural language-based interaction can enable intuitive and familiar communication with robots for human workers who are non-experts in robot programming. However, limited research has been conducted on this topic in construction. This paper proposes a framework to allow human workers to interact with construction robots based on natural language instructions. The proposed method consists of three stages: Natural Language Understanding (NLU), Information Mapping (IM), and Robot Control (RC). Natural language instructions are input to a language model to predict a tag for each word in the NLU module. The IM module uses the result of the NLU module and building component information to generate the final instructional output essential for a robot to acknowledge and perform the construction task. A case study for drywall installation is conducted to evaluate the proposed approach. The results of the NLU and IM modules show high accuracy over 99%, allowing a robot to perform tasks accurately for a given set of natural language instructions in the RC module. The obtained results highlight the potential of using natural language-based interaction to replicate the communication that occurs between human workers within the context of human-robot teams. § INTRODUCTION Robots have been adopted in the construction industry to support diverse field activities such as bricklaying <cit.>, earthmoving <cit.>, painting <cit.>, underground exploration <cit.>, concrete placement <cit.>, tunnel inspection <cit.>, curtain wall assembly <cit.>, and wall-cleaning <cit.>. Robotics is considered an effective means to address issues of labor shortages and stagnant growth of productivity in construction <cit.>. However, it is challenging for robots to work on construction sites due to evolving and unstructured work environments <cit.>, differing conditions from project to project <cit.>, and the prevalence of quasi-repetitive work tasks <cit.>. 
This is in contrast to automated manufacturing facilities that have structured environments <cit.>. It is expected that the construction robots will encounter situations different than what are stipulated in the design documents and will have to work with the human collaborator to resolve such unexpected conditions. Collaboration between humans and robots has the potential to address several such challenges inherent in the performance of construction tasks in the field. The advantage of collaborative robots lies in the opportunity to combine human intelligence and flexibility with robot strength, precision, and repeatability <cit.>. Collaboration can increase productivity and quality of the construction tasks and human safety <cit.>. It can also reduce physical exertion for humans since repetitive tasks will be carried out by robots. Therefore, in Human-Robot Collaboration (HRC), skills of human operators and robots can complement each other to complete designated tasks. In construction, communication between teammates is essential since construction work crews have many degrees of freedom in organizing and coordinating the work, and dynamic and unpredictable environments create high likelihood of errors <cit.>. Similarly, when collaborative robots assist human workers, interaction between humans and robots is critical in the construction field <cit.>. In human-robot construction teams, most of the robots are currently in the lower level of robot autonomy where human workers determine task plans and robots execute them <cit.>. To deliver plans generated by human workers to robots, human operators need proper interfaces <cit.>. However, designing intuitive user interfaces is one of the key challenges of HRC since interaction with robots usually requires specialized knowledge in humans <cit.>. Intuitive and natural interaction enables human operators to easily interact with robots and take full advantage of human skills, resulting in enhanced productivity <cit.>. In addition, during the natural interaction, shallower learning curves can be expected with future novice operators and low levels of fatigue can be maintained. Therefore, it is important to establish a natural and intuitive communication approach to achieve successful HRC in the construction industry. In a natural interaction-based workflow, humans can interact efficiently with robots as they communicate with other human workers, and the robots can be endowed with the capability to capture and accurately process human requests and then carry out a series of tasks. Several recent studies have investigated natural HRC in the construction industry using various communication channels such as gesture <cit.>, Virtual Reality (VR) <cit.>, brainwaves <cit.>, and speech <cit.>. Among them, speech interaction has been considered as the most natural and intuitive way of communication in the human-robot interaction field <cit.>. HRC using voice commands helps human operators focus on tasks since hands-free interaction is possible and the operators’ mobility is not restricted <cit.>. In industrial settings including construction sites, noisy environments can affect speech recognition. However, with recent advances in noise-robust speech recognition, it is expected that using voice input commands is feasible in noisy environments <cit.>. Natural language-based interaction, in which speech input is used, has attracted increasing attention with its advantages in the field of robotics <cit.>. 
Using natural language instructions allows human operators to deliver their requests accurately and efficiently <cit.>. Users’ intents about action, tools, workpieces, and location for HRC can be accurately expressed through natural language without information loss in ways distinct from other simplified requests <cit.>. In addition, users do not need to design informative expressions when communicating through existing languages, making the interaction efficient. Given these advantages, language instructions have been used to make robots perform pick-and-place operations, one of the most common tasks of industrial robots <cit.>. For pick-and-place operations to install or assemble construction workpieces, the workpieces can be described by their IDs or characteristics such as dimension, position, or material available from project information (e.g., Building Information Model). Most of the existing methods for analyzing language instructions for a pick-and-place operation have extracted information about its final location and workpieces described by its color, name, or spatial relationships for household tasks <cit.>. For construction tasks where the same materials are used repeatedly, color or name may not be reliable features for indicating objects without ambiguity. Therefore, it is essential to use precise workpiece descriptions in language instructions for construction tasks, such as quantitative dimensions, IDs, positions, and previous working records. In addition, in robotic planning in construction, orientation of a target object is one of the essential information for automated placement planning of building components <cit.>. §.§ Research Objectives The investigation of HRC in construction field, particularly in relation to natural language usage, requires further exploration. This study aims to bridge this gap by proposing a framework for natural interaction with construction robots through the use of natural language instructions and building component information. Specifically, the focus is on analyzing natural language instructions for pick-and-place construction operations within a low-level HRC context. To address the scarcity of resources in terms of natural language instruction datasets in the construction industry, a fine-grained annotation is created. This annotation enables the identification of unique workpiece characteristics and allows for detailed analysis. By incorporating this detailed annotation, it is anticipated that the quality and depth of the labeled data can be enhanced while it introduces a higher degree of complexity to language understanding. To validate the effectiveness of the proposed approach, this study involves the training and comparison of two existing language models using new datasets. The results obtained from these models are then applied to the building component information available in construction projects. Moreover, a set of experiments on drywall installation is conducted as a case study to demonstrate and evaluate the proposed approach. § LITERATURE REVIEW Through the review of existing works, the need for this study and research gaps are identified. The first section establishes the need for analyzing natural language instructions for HRC of the construction domain. The second section examines the characteristics of data and approach used in other domains in relation to natural language understanding. The third section investigates studies that performed information extraction in the construction industry. 
§.§ Interaction between human workers and robots in the construction industry Advanced interaction methods for HRC enable human workers to collaborate with robots easily and naturally. In construction, research using gestures, VR, brain signals, and speech has been proposed for interaction with robots. Gesture-based interaction using operators’ body movements can enhance the intuitiveness of communication <cit.> and be used in noisy environments such as construction sites <cit.>. Wang and Zhu <cit.> proposed a vision-based framework for interpreting nine hand gestures to control construction machines. Sensor-based wearable glove systems were proposed to recognize hand gestures for driving hydraulic machines <cit.> and loaders <cit.>. However, when using hand gestures, the operators’ hands are not free, and they have to keep pointing to the endpoint, which may lead to fatigue <cit.>. VR interfaces have been used in the construction industry for visual simulation, building reconnaissance, worker training, safety management system, labor management and other applications (e.g., <cit.>). It can also provide an opportunity for users to control robots without safety risks <cit.>. Regarding interaction with robots, Zhou et al. <cit.> and Wang et al. <cit.> tested VR as an intuitive user interface exploring the virtual scene for pipe operation and drywall installation, respectively. Both studies sent commands to robots by handheld controllers, which determined desired poses and actions of robots. In addition to the purpose of operating robots, Adami et al. <cit.> investigated the impacts of VR-based training for remote-operating construction robots. In the interaction with a demolition robot, operators used the robot’s controller consisting of buttons and joysticks based on digital codes. However, head mounted devices as visual displays may be uncomfortable for operators due to onset of eye strain and hand-held devices may limit the operators in their actions <cit.>. In addition, the connection between the headset and the controllers can be interrupted, and the working space is limited due to cables attached to the computer <cit.>. Recently, brain-control methods have been proposed for HRC in construction, translating the signals into a set of commands for robots. To control robots, users can attempt to convey their intention in a direct and natural way by manipulating their brain activities <cit.>. In construction, Liu et al. <cit.> and Liu et al. <cit.> proposed systems for brain-computer interfaces to allow human workers to implement hands-free control of robots. Users’ brainwaves were captured from an electroencephalogram (EEG) and interpreted into three directional commands (left, right, and stop) <cit.>. In the other study <cit.>, brainwaves were classified into three levels of cognitive load (low, medium, and high), and the results were used for robotic adjustment. This communication using brain signals enables physiologically-based HRC by evaluating workers’ mental states <cit.>. However, systems using brain signals have to overcome challenges of time consumption for user training, non-stationarity of signals affected by the mental status of users, and user discomfort by moist sensors using a gel <cit.>. It is also challenging for users to deliver high-dimensional commands to collaborative robots because of the limited number of classifiable mental states <cit.>. 
Speech is the most natural way of communication in humans, even if the objects of their communication are not other humans but machines or computers <cit.>. It is a flexible medium for construction workers to communicate with robots, which can be leveraged for hands-free and eyes-free interaction with low-level training <cit.>. Even if noisy construction sites could generate many errors in verbal communication, it has the potential to be used in noisy environments with recent advances in noise-robust speech recognition <cit.>. Natural language is important in human-human interaction during teamwork since it helps seamless communication. Enabling robots to understand natural language commands also facilitates flexible communication in human-robot teams <cit.>. Untrained users can effectively control robots in a natural and intuitive way using natural language. Despite the advantages of the speech channel and natural language in interaction, there are few studies examining natural language instructions for human-robot collaboration in construction. Follini et al. <cit.> proposed a robotic gripper system integrated with voice identification/authentication for automated scaffolding assembly, but it was based on a very limited number of simple voice commands like stop, grip, and release. In the construction industry, speech and natural language-based HRC should be further investigated. §.§ Natural language instructions for Non-Construction HRC Many studies in which humans give instructions to robots using natural language commands have been conducted for manipulation tasks. Regarding the placing task, Paul et al. <cit.> and Bisk et al. <cit.> leveraged spatial relations in natural language instructions to allow robots to move blocks on the table. Paul et al. <cit.> proposed a probabilistic model to ground language commands carrying abstract spatial concepts. A neural architecture was suggested for interpreting unrestricted natural language commands in moving blocks identified by a number or symbol <cit.>. Mees et al. <cit.> developed a network to estimate pixelwise placing probability distributions used to find the best placement locations for household objects. However, in order to make a robot perform various construction tasks, it is necessary to use different kinds of attributes describing objects as well as spatial information of the objects. Several multimodal studies have mapped visual attributes and language information by using two types of input (an image and an instruction). Hatori et al. <cit.> integrated deep learning-based object detection with natural language processing technologies to deal with attributes of household items, such as color, texture, and size. Magassouba et al. <cit.> proposed a deep neural sequence model to predict a target-source pair in the scene from an instruction sentence for domestic robots. Ishikawa and Sugiura <cit.> proposed a transformer-based method to model the relationship between everyday objects for object-fetching instructions. Guo et al. <cit.> developed an audio-visual fusion framework composed of a visual localization model and a sound recognition model for robotic placing tasks. Murray and Cakmak <cit.> and Zhan et al. <cit.> analyzed language instructions about navigation and manipulation tasks to make mobile robots perform various tasks. Murray and Cakmak <cit.> proposed a method that uses visible landmarks in search of the objects described by language instructions for household tasks. Zhan et al. 
<cit.> combined object-aware textual grounding and visual grounding operations for the tasks in real indoor environments. A combination of linguistic knowledge with visual information can describe targets in many ways. The previous studies were intended for robotic household tasks or indoor navigation. To utilize these methods for assembly tasks at unstructured and complex construction sites, it is necessary to collect and train construction site images and corresponding language instructions. Previous multimodal studies have relied on thousands to tens of thousands of image-text pairs when training and testing their models. For example, Hatori et al. <cit.> used 91,590 text instructions with 1,180 images, Ishikawa and Sugiura <cit.> used 1,246 sentences with 570 images, Murray and Cakmak <cit.> utilized 25k language data with 428,322 images, and Zhan et al. <cit.> used 90 image scenes with 21,702 language instructions. However, limited image datasets of construction sites present challenges in applying previous multimodal studies of HRC to interactions with construction robots based on natural language instructions. Some methods interpreted natural language instructions given to robots without relying on visual information. Language understanding using background knowledge <cit.> and commonsense reasoning <cit.> have been explored to infer missing information from incomplete instructions for kitchen tasks. Nyga et al. <cit.> generated plans for a high-level task in partially-complete workspaces through a probabilistic model to fill the planning gaps with semantic features. Chen et al. <cit.> formalized the task of commonsense reasoning as outputting the most proper complete verb-frame by computing scores of candidate verb frames. However, unlike kitchen tasks, it can be challenging to infer targets in construction activities using general knowledge or pre-defined verb frames. Brawer et al. <cit.> proposed a model to select one target, described in language instructions, among 20 candidates by contextual information such as the presence of objects and the action history. The context information can also be leveraged in HRC for construction activities, but the proposed model is limited to analyzing language instructions for the pick-up action. §.§ Natural language processing in the Construction Industry Natural language processing (NLP) is a research domain exploring how computers can be used to interpret and manipulate natural language text or speech <cit.>. With the advance of machine learning and deep learning, NLP has been increasingly adopted in the construction industry. NLP applications in construction have been explored in many areas, such as knowledge extraction, question-answering system, factor analysis, and checking <cit.>. Various documents, such as accident cases <cit.>, injury reports <cit.>, compliance checking-related documents <cit.>, legal texts <cit.>, and construction contracts <cit.> have been analyzed in construction. Natural language instructions for HRC have not been explored in NLP studies of the construction industry even though HRC through natural language instructions has potential advantages compared with other interfaces such as hand gestures, VR, and brain signals for natural interaction with robots. Collaboration with a construction robot using natural language instructions requires extracting useful information from the instructions so the robot can start working. 
In NLP, such information commonly takes the form of entities that carry important meanings as a contiguous sequence of n items from a given text <cit.>. Previous studies extracted keywords based on frequency features <cit.> and handcrafted rules <cit.>. These approaches are not robust to unfamiliar input which includes misspelled or unseen words rather than the keywords. To address these challenges, machine learning and deep learning models have been used to extract information about infrastructure disruptions <cit.> and project constraints <cit.>. However, entities used in these studies, such as task/procedures <cit.>, interval times <cit.>, and organization <cit.> are not suitable for identifying important information from natural language instructions for construction activities. A new group of entities should be defined to give essential information to construction robots. For example, entities for pick-and-place tasks are relevant to characteristics of the tasks such as target objects, placement location, and placement orientation. Building Information Modeling (BIM) has been used to visualize and coordinate AEC projects, and can be used for knowledge retrieval since it includes much of the project information <cit.>. The retrieved knowledge from BIM has been applied to plan robot tasks for evaluation of retrofit performance <cit.>, indoor wall painting <cit.> and assembly of wood frames <cit.>. However, the previous works using BIM information did not consider natural language-based communication with construction robots for HRC. Several studies have used natural language queries to change or retrieve BIM data <cit.>. Lin et al. <cit.> retrieved wanted BIM information by mapping extracted keywords from queries and IFC entities. However, the proposed method supported only simple queries such as “quantity of beams on the second story” or “quantity of steel columns in the check-in-zone.” Shin and Issa <cit.> developed a BIM automatic speech recognition (BIMASR) framework to search and manipulate BIM data using a human voice. They conducted two case studies for a building element, a wall, but a quantitative evaluation of the framework was excluded. A question-answering system for BIM consisting of natural language understanding and natural language generation was developed <cit.>. Although the system achieved an 81.9 accuracy score with 127 queries, it has a limitation in recognizing complex queries due to rule-based keyword extraction. For example, users can use natural language questions like “What is the height of the second floor?”, “What is the object of door 302?”, or “what is the model creation date?”, which have a similar pattern. In HRC in construction, robots are expected to perform physical and repetitive tasks as assistants and receive instructions from human workers. Natural language instructions through speech channel can be employed for natural and intuitive interaction. In such scenarios, it is necessary to extract information about construction tasks from the natural language instructions. Previous studies in construction have analyzed text inputs to retrieve useful project information from language queries <cit.>. However, the language queries are different with language commands for construction tasks. There are studies for robot task planning using project information <cit.>, but interaction between human workers and robots were not considered. These studies have limitations in analyzing natural language instructions for robots. 
There has been no research to plan robot tasks based on natural language commands. To address this research gap, this study proposes a framework for a natural language-enabled HRC method that extracts necessary information from language instructions for robot task planning. In the proposed approach, building component information is used as input to make descriptions of tasks’ attributes more intuitive and simpler. Table 1 shows the main characteristics of this study. Diverse interaction channels have been considered for interaction with construction robots, but no research investigated how to collaborate with the robots using natural language instructions in construction. This study uses natural language instructions and focuses on pick-and-place operations which are the most common tasks of industrial robots including construction robots. The pick-and-place operation is exploited in many construction tasks like assembly of structural steel elements, bricks, wood frames, tiles, and drywalls by changing types of tools. While other language instructions used in the previous studies describe target objects and destination, pick-and-place operation for construction activities require one more piece of information about placement orientation. For the variety of patterns, descriptions based on previous working records are also used. As a result, it is required to generate a new dataset for construction activities, and a language model should be proposed and trained. Target objects and destination are described as their IDs, dimension, position or working records available from the construction project information. Thus, this study uses building component information and previous working records to extract essential information that allows the robot to start construction tasks. § SYSTEM ARCHITECTURE The proposed system aims to make a robot assistant perform construction activities instructed by a human partner using natural language instructions. To this end, a new dataset for pick-and-place construction operations needs to be generated, and a language model trained on this new data should be used, rather than solely relying on the language model used in previous studies. Additionally, the limited availability of image datasets on construction sites can cause difficulties in acquiring surrounding information for robot control. To address this, this study uses building component information available in construction projects to provide robots with the necessary information to execute tasks. Fig. 1 shows critical components and data workflows of the system, which comprises three modules: Natural language understanding (NLU), Information Mapping (IM), and Robot Control (RC). The NLU module takes a natural language instruction as input and employs a trained language model to perform sequence labeling tasks. Subsequently, the IM module utilizes the output of the NLU and building component information through conditional statements to generate final commands for the RC module. The resulting command is stored in the action history, which serves as one of the inputs to the IM module. Finally, the RC module utilizes three types of information (target, final location, and placement method) to control the robot’s movement for pick-and-place tasks. §.§ Natural Language Understanding (NLU) A NLU module aims to predict semantic information from the user’s input which is in natural language. 
Two main tasks of the NLU are intent classification (IC) predicting the user intent and slot filling extracting relevant slots <cit.>. The NLU module of this study focuses on the slot filling which can be framed as a sequence labeling task to extract semantic constituents. Fig. 2 shows an example of the slot filling for the user command “Pick up the full-size drywall to the stud 500107” on a word-level. The word ‘tag’ is used to refer to the semantic label. In this research, two deep learning architectures are utilized to assign appropriate tags to each word of a user command. The first architecture is the Bidirectional Long Short-Term Memory (BiLSTM) layer <cit.> with a Conditional Random Fields (CRF) layer <cit.>. The second architecture is based on the Bidirectional Encoder Representations from Transformers (BERT) architecture <cit.>. BiLSTM-CRF is a neural network model that has been used for sequence labeling <cit.>. BiLSTM incorporates a forward LSTM layer and a backward LSTM layer in order to leverage the information from both past and future observations of the sequence. A hidden forward layer is computed based on the previous hidden state (h⃗_t-1) and the input at the current position while a hidden backward layer is computed based on the future hidden state (h⃗_t+1) and the input at the current position as shown in Fig. 3. At each position t, the hidden states of the forward LSTM (h⃗_t) and backward LSTM are concatenated as input to the CRF layer. The CRF layer generates the sequence labeling results by adding some effective constraints between tags. Each tag score output by the BiLSTM is passed into the CRF layer, and the most reasonable sequence path is determined according to the probability distribution matrix. The BiLSTM-CRF model consists of the BiLSTM layer and the CRF layer, which can not only process contextual information, but also consider the dependency relationship between adjacent tags, resulting in higher recognition performance. BERT, Bidirectional Encoder Representations from Transformers, is a bidirectional language model that achieves outstanding performance on various NLP tasks <cit.>. The architecture of BERT is a multilayer transformer structure which is based on the attention mechanism developed by Vaswani et al. <cit.>. BERT is trained to predict words from its left and right contexts using Masked Language Modeling (MLM) <cit.> to mask the words to be predicted. The general idea of BERT is to pre-train the model with large-scale dataset, and parameters of the model can be updated for the given tasks during fine-tuning. In this study, pre-trained BERT-base model <cit.> is fine-tuned for sentence tagging tasks. As shown in Fig. 4, the input text is tokenized and special token like [CLS], which stands for classification, is added at the beginning. It is needed to create an attention mask. The input for BERT is the masked sequence and the sum of the token and position embeddings (E_i). Then, the final hidden vector is denoted as T, which is the contextual representation for each token. The token-level classifier is a linear layer using the last state of the sequence as input. In this study, when a word is composed of several tokens and the prediction results of the tokens are different, the class of the word is determined by the token corresponding to more than half of the tokens. In this study, a language instruction has to deliver one of the characteristics of a workpiece, which can be tags of the language models. 
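A minimal sketch of the BERT-based tagging step, including the majority vote over sub-word tokens described above, is given below. The tag inventory is a reduced, illustrative subset (only ID_target and Position_target are named in this paper), and the base bert-base-uncased checkpoint with a freshly initialized classification head is loaded only to keep the sketch self-contained; in practice, the model fine-tuned on the drywall instruction dataset would be loaded instead.

```python
# Sketch of word-level slot filling with a BERT token classifier. The label set
# is an illustrative subset and the untuned base checkpoint is used only so the
# snippet runs end to end; a fine-tuned checkpoint would replace it in practice.
import torch
from collections import Counter
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "ID_target", "Position_target", "Dimension_target",
          "ID_dest", "Position_dest", "Orientation"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))

words = "Pick up the full-size drywall to the stud 500107".split()
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits[0]              # (num_tokens, num_labels)
pred = logits.argmax(dim=-1).tolist()

# word-level tag = majority vote over the word's sub-tokens, as described above
word_ids = enc.word_ids(0)
votes = {}
for tok_idx, w_id in enumerate(word_ids):
    if w_id is not None:                         # skip [CLS] and [SEP]
        votes.setdefault(w_id, []).append(labels[pred[tok_idx]])
word_tags = [Counter(v).most_common(1)[0][0] for _, v in sorted(votes.items())]
print(list(zip(words, word_tags)))
```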
In this regard, it is assumed that users have access to mobile devices (e.g., tablet) to obtain building component information such as a name, a unique ID, a dimension, and an initial position of each workpiece on a future construction site. Given the potential use of mobile or wearable technologies in the construction industry <cit.>, such technologies could be used to provide project information to construction workers making it easier to unambiguously specify which workpieces are to be installed and corresponding location to the robot assistants. When generating the language instruction, phrases to describe the three elements should be included and they are tagged as corresponding labels. A target workpiece can be described by its attributes such as name, unique identifier, dimension, and position <cit.>. A final location is one of the construction workpieces, which is different from the target workpiece. It can be described with its properties. Placement orientation can be expressed as perpendicular, parallel, or other angles. When working records about previous pick-and-place operations are available, a variety of natural language patterns can be utilized in the dataset. For example, the second target workpiece to be installed can be described as “to the left of the previous one” or “same as the previously installed one.” Information corresponding to the detailed characteristics of the elements is extracted in this NLU module, eliminating the need for additional natural language processing after sequence labeling. Consider the following instruction: "Pick up the panel. The width of the panel is 4 and the length of the panel is 8. Place it vertically to the stud right to the leftmost stud." In this example, the phrase "the width of the panel is 4 and the length of the panel is 8" provides information about the target object while "stud right to the leftmost stud" indicates the destination. In order to align this information with building component information, further natural language processing is required. In this study, language models in the NLU module will be trained with a fine-grained annotated dataset. Consequently, the models can extract corresponding information, including the ID, position, and dimension of the target or destination components. §.§ Information Mapping (IM) The information mapping module aims to generate a final command for the robotic system using output of the NLU module, building component information, and action history. This module is designed to extract three necessary types of information crucial for a wide range of pick-and-place construction operations, including the identification of a target object, its destination, and placement orientation. The IM module maps NLU output and BIM information and the mapping result is recorded in the action history. The action history record includes information about the last selected object, where the object is placed, and how it is placed. The previous action record can be used to find out the target object and its final location for the current action. The final command to be delivered to the RC module is determined with the latest record of the action history. To address inconsistencies in the vocabularies between the NLU output and building component information, the module incorporates a procedure that uses conditional statements to extract information about the target object, destination, and placement method. 
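As a concrete, purely illustrative picture of the building component information assumed to be available to the worker and to the IM module, each workpiece can be represented by a small record holding its ID, name, dimensions and initial position. The field names and values below are invented examples rather than an actual BIM export.

```python
# Illustrative record for one building component; the schema and the values are
# invented for this sketch and do not correspond to a real BIM/IFC export.
from dataclasses import dataclass

@dataclass
class BuildingComponent:
    component_id: str     # unique identifier, e.g. "500107"
    name: str             # e.g. "stud" or "drywall panel"
    width_ft: float       # dimension information available from the project model
    length_ft: float
    position_index: int   # ordering along the wall, used for "leftmost", "third", ...

components = [
    BuildingComponent("500107", "stud", 0.3, 8.0, 7),
    BuildingComponent("D-001", "drywall panel", 4.0, 8.0, 1),
]
print(components[0])
```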
These conditional statements are designed to utilize the ID, position, and dimension information of each component, which can be obtained from the building component information. The appropriate conditional statement to use is determined based on the tag of each word in the NLU output. For instance, if the NLU output contains a tag ID_target that refers to the target object’s ID, the corresponding word is mapped to the ID in the building component information. The component information associated with that ID is then added to the action history as the target object’s information. Similarly, if the NLU output contains a tag Position_target that refers to the position of the target object, the corresponding word is mapped to a component in the building component information within the conditional statement processing the position information. The information associated with that component is then added to the action history. Various pick-and-place construction operations can be considered for the IM module, including wall tile installation, drywall installation, and bricklaying. For example, in the context of wall tile installation, a command could be "Pick up the 2 by 4 tile and place it horizontally on the lower part of column 300200." In the case of drywall installation, a command could be “Can you hang the panel in the middle to the leftmost stud? Place it to the top part.” Similarly, in bricklaying, a command could be "please put a standard brick vertically next to the previously placed one." These examples highlight the consistent need for precise information about the target object, the destination, and the placement method. The IM module can utilize essential details such as the ID, location, and dimension and generate appropriate commands for the robotic system. The performance of the IM module is closely linked to the output generated by the Natural Language Understanding (NLU) module, as the latter's output serves as the input for the former. This interdependence implies that the accuracy of the IM module depends on the performance of the NLU module. If there are inaccuracies or misinterpretations in the results predicted by the NLU module, it can lead to errors in the conditional statements of the IM module, hence influencing its operational integrity. Therefore, the accuracy of the IM module is essentially equivalent to the instruction-level accuracy of the NLU module. This relationship underscores the importance of precision and reliability in each component of the system, highlighting the interplay of accuracy across modules. Once the action history is updated, the final command for robot control is determined as the target object type, destination ID, and placement methods from the action and transferred to the Robotics Control (RC) module. §.§ Robot Control (RC) This study uses a virtual robot digital twin to verify that natural language instructions can be used to interact with construction robots in the proposed system. The robot in this study is simulated using ROS (Robot Operating System) and Gazebo that is the virtual environment offered by the Open Source Robotics Foundation. Robot motion planning and execution methods are based on a previous study described in Wang et al. <cit.>. While Wang et al. <cit.> used a hand controller to determine the message to be delivered to the robot, this study uses a robotic command generated from the IM module, where the input is natural language instructions. 
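As a rough illustration of this step, the sketch below shows how a final command produced by the IM module could be turned into motion goals through the MoveIt Python interface (moveit_commander) on top of ROS. The planning-group name and the poses are assumptions made for illustration; the motion planning and collision checking of the cited study are not reproduced here.

```python
# Illustrative sketch of driving the simulated arm from a robotic command; the
# planning-group name "manipulator" and all poses are placeholders, not the
# configuration used in the cited study.
import sys

import moveit_commander
import rospy
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("rc_module", anonymous=True)
arm = moveit_commander.MoveGroupCommander("manipulator")

def move_to(x, y, z, w=1.0):
    pose = Pose()
    pose.position.x, pose.position.y, pose.position.z = x, y, z
    pose.orientation.w = w
    arm.set_pose_target(pose)
    ok = arm.go(wait=True)          # plan and execute
    arm.stop()
    arm.clear_pose_targets()
    return ok

# Executing one pick-and-place command from the IM module (poses are placeholders):
move_to(0.4, -0.3, 0.5)             # approach the target drywall panel
move_to(0.4,  0.3, 0.5)             # carry it to the stud given in the command
```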
Unlike the previous study, the RC module enables the robot to install the target panel either vertically or horizontally, depending on the placement method information in the input message, and can place it on the middle line or left edge of a stud. The robotic arm movement, which is generated by MoveIt <cit.>, has higher priority than the base movement to reduce localization error. When the robot is carrying a target object, collision checking process is applied while the target is considered as part of the robot, so that the robot and the target object will not collide with their surroundings. A human operator can give the next instructions after target placement is completed. § RESEARCH METHODOLOGY A case study is presented for drywall installation to articulate details of the proposed method for natural interaction with robots. For this case study, a 6 degrees-of-freedom KUKA robotic arm is used, and environments for drywall installation are emulated in the Gazebo simulator. The KUKA robot is positioned between a stud wall and drywall panels and the base of the robot can move in a straight line as shown in Fig. 5(a). The stud wall consists of thirteen studs as illustrated in Fig. 5(b). In this case study, one stud is designated as the final location for place operation and the left edge of a drywall panel is laid on the stud. In general, drywall panels are available in rectangular shapes. Standard panel size is 4 feet wide and 8 feet long and panels of different sizes are cut according to the designed dimensions in construction practice. We use three sizes of panels including the standard ones as well as two unique panel sizes (Fig. 5(c)). The drywall panels will be installed from left to right along the stud wall. The drywall panels can be installed in a vertical or horizontal orientation. Fig. 6 shows examples of how to place drywall panels onto the studs. Examples of vertical placement are shown in Fig. 6(a), and the left edge of the panel can be placed on the center line of a stud or the left side of a stud. When the panels are placed horizontally perpendicular to studs, they can be placed on the top or bottom part of the studs as shown in Fig. 6(b). Therefore, natural language instructions for drywall placement should include how (i.e., in what configuration) to place the drywall panels. §.§ Data Generation and Natural Language Understanding (NLU) A new dataset of natural language instructions for drywall installation was created and annotated. Each instruction, consisting of one or multiple sentences, clarifies a desired drywall as a target, a stud as a final location, and how to place the drywall panel. To achieve a fine-grained annotation, this study utilized 12 tags that enabled the classification of these three essential categories into more detailed categories. These tags include six that describe the characteristics of the target object, three that illustrate the final location, and the remaining three for the placement orientation. Each instruction contains these three pieces of information exactly once. In the dataset, there are co-reference issues, where words referring to a target object, a final location, and a placement method can be included multiple times within a single instruction. However, expressions clearly indicating features related to these three types of information appear only once in each instruction. The final location and the target are one of the building components illustrated in the Fig. 5(b) and Fig. 5(c), respectively. 
To utilize widely used expressions in language instructions, construction videos about drywall installation <cit.> and other studies exploring pick-and-place language instructions were considered when generating the new dataset. In these language instructions, drywalls and studs are described by combinations of representations related to ID, dimensions, and relative location. A drywall panel is represented by its ID, dimension, or position, while a stud is represented by its ID or position (Fig. 7). BIM models used in previous studies have allocated a five- to seven-digit number to every building element <cit.>. Each element ID is represented as a unique 6-digit number in this case study and is tagged with ID_stud or ID_wall for a stud or a drywall panel, respectively. Lists of digits can be read out in working environments such as warehouses or factories to increase work performance <cit.>. While it may not be common to utter long digit sequences in today’s construction workers’ practice, this study suggests that using IDs could be one of the effective ways for workers to unambiguously indicate a target object when interacting with robots, ensuring accurate selection and installation of workpieces. The dimensions of the target drywalls are labeled with length, width, or dim. When a target object is described by numbers such as “4 by 8” or “the length is 8”, the numbers are annotated as length or width. The dimension of the target object can also be expressed with words such as “full-size”, “standard”, or “full”, and the words representing the size of the target object are annotated with the label dim. Both a target drywall panel and a final location (stud) can be described by their locations using a single perspective view in this case study. For example, stud 500100 is the leftmost stud and drywall sheets 500300, 500310, and 500320 are the leftmost ones, as shown in Fig. 5. The words indicating the locations of the stud and the drywall panels are labeled as St_loc1 and Dw_loc1. When describing the locations, the relationship of a place to other places can be used. This means that the location changes based on the secondary location. When the final location (stud) is described using a relative location, St_loc1 and St_loc2 are used together, while Dw_loc1 and Dw_loc2 are used together when the target drywall is described in this way. For example, in Fig. 5, the location of the stud 500101 can be expressed as “second left to the stud 500103” or “right to the stud 500100.” In this case, a direction such as “second left” or “right” is annotated as St_loc1, and the word “500103” or “500100”, which corresponds to the secondary location, is annotated as St_loc2. Finally, regarding how to place drywall panels, there are three labels: Vr_md, Hr_top, and Hr_btm. When a panel is placed vertically on the middle line of the stud, the corresponding words, such as “middle line” or “center line”, are labeled as Vr_md. When a target object is placed horizontally on the top row of a stud or on the bottom row of a stud, the corresponding words are annotated as Hr_top or Hr_btm. Terms like “upper part”, “upper horizontal row”, and “top part” are annotated as Hr_top, while terms like “lower part” and “bottom row” are annotated as Hr_btm. Given this variability, the same words may need to be annotated with different tags depending on context, creating a challenge for language models to correctly interpret the intended context. When a placement method is not mentioned in a language instruction, it means that the panel is installed vertically on the left line of the stud.
It is considered default in this study and the language instruction does not have a tag about this placement method. There are a total of 13 labels, with 12 of them representing either a target drywall, a final location (stud), or a placement method, as shown in Fig. 7. The remaining label, referred to as ‘O’, is utilized to signify that the corresponding word is not associated with any entity. If a target, a destination, or a placement is mentioned multiple times in a single instruction, words that do not deliver any characteristics of the three information are tagged as ‘O.’ For example, in a three-sentences instruction “Please move the drywall board and drive it vertically in the center line of the stud. The width is 4 and the length is 8. The stud is laying on the left to the 500103”, ‘the drywall board’ and ‘it’ in the first sentence refer to a target object but they do not deliver any important characteristic, so they are tagged as ‘O.’ In total, 1,584 natural language instructions with the 13 labels for drywall installation were generated and manually annotated. These instructions consist of 3,072 sentences and a total word count of 39,841. The dataset was split into three parts: 1,268 instructions for training (80%), 158 instructions for validation (10%), and 158 instructions for test (10%). Table 2 shows annotation results of the 1,584 instructions. The dataset includes fine-grained details of the target objects, expressed through six tags: Dw_loc1, Dw_loc2, ID_wall, dim, length, and width, which account for a total of 2,535 words. Similarly, the destination details are captured using the tags ID_stud, St_loc1, and St_loc2, encompassing 4,166 words. Additionally, the dataset incorporates placement orientation information, classified into three distinct classes, and comprising a total of 2,060 words. Consider the example instruction: " Can you install the piece 500310 vertically in the stud? The stud is laying third to the left from the stud 500105. Please hang the panel into the middle line." This approach allows for extraction of specific details, such as the ID_wall tag for the target, Dw_loc1 and Dw_loc2 tags for the destination, and the Vr_md tag representing a specific placement orientation rather than simply highlighting three main categories. Such granularity can significantly enhance the richness and precision of the data interpretation. While the first author performed the initial manual annotation, two other individuals checked the appropriateness of annotation guidelines by annotating the test dataset in two rounds. Appendix I presents the annotation guidelines used in this study. In the first round, the two annotators labeled the dataset based on the annotation guidelines and several examples. The annotators achieved 96.05% and 89.24% accuracy, respectively. They received feedback on the results of the first-round annotation. In the second round, both annotators achieved 98.15% and 98.56% accuracy in annotation, which are almost 100% accuracy. Any errors in the second round were simple human errors. The validation set is used to compare the performance of different models in the NLU module. The model with the best performance on the validation dataset is used to evaluate the test dataset and the results are delivered to the IM. 
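For concreteness, a hypothetical instruction annotated with this tag set might look as follows; the sentence is invented for illustration, but the tags are those defined above.

```python
# One hypothetical instruction annotated at the word level (word, tag); generic
# words that do not carry distinguishing information are tagged "O".
annotated = [
    ("Pick", "O"), ("up", "O"), ("the", "O"),
    ("full-size", "dim"),                       # size words -> dim
    ("drywall", "O"),
    ("and", "O"), ("place", "O"), ("it", "O"), ("on", "O"), ("the", "O"),
    ("middle", "Vr_md"), ("line", "Vr_md"),     # vertical placement on the center line
    ("of", "O"), ("the", "O"), ("stud", "O"),
    ("second", "St_loc1"), ("left", "St_loc1"), ("to", "O"),
    ("stud", "O"), ("500103", "St_loc2"),       # secondary location of the relative description
]
```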
The specific parameters of the BiLSTM-CRF model used in this case study are determined based on previous studies <cit.> as follows: the number of neural network layers is 2; the word embedding size is 50; the number of hidden-layer LSTM neurons is 300; the batch size is 16; the dropout is 0.1; the optimizer is Adam <cit.> with a learning rate of 0.001; and the model is trained for 20 epochs. The total number of parameters is about 250,000. In the case of BERT, the “BertForTokenClassification” class was used to fine-tune the BERT-base-uncased model of the original BERT <cit.>. The specific parameters are as follows: the number of encoder layers is 12; the number of attention heads is 12; the number of hidden units is 768; the batch size is 16; the dropout is 0.1; the optimizer is Adam with a learning rate of 3e-5; and the number of training epochs is 5. The total number of parameters is 110 million. Fig. 8 shows network architecture diagrams of BiLSTM-CRF and BERT. §.§ Information Mapping (IM) The IM module utilized several rules to extract the final information about a target panel, a stud as the destination, and a placement method based on the output of the NLU module and building component information (Fig. 9). The output of this module is recorded in an action history table as nine types of values: stud_id (ID of the stud), installed_x_left (x coordinate of the left side of the installed panel), installed_x_right (x coordinate of the right side of the installed panel), left_cent (whether the panel is installed on the left side of the stud or on the center line of the stud), ver_hor (whether the panel is installed vertically or horizontally), top_btm (whether the panel is installed on the top row or the bottom row), drywall_id (ID of the drywall panel), w (width of the drywall panel), and l (length of the drywall panel). The records in the action history table are used to extract the final command for robot control. The rules of the IM module for drywall panels are shown in Figs. 10 and 11. The pseudocode in Fig. 10 is used when the target of a pick-and-place operation is described by its dimension. If the dimension of the target drywall panel is described by its length and width values or by words like ‘standard’ and ‘full-size’, the target features are extracted by its length and width values from the drywall information table in Fig. 9(b), which is marked as TableD in Fig. 10. When an expression for a previously performed operation is used, such as “previously installed”, the target of the last performed operation is retrieved from the action history table ActHist and the panel with the same characteristics is determined to be the target of the current operation. Fig. 11 shows the pseudocode for the process used when drywall panels are described by their IDs or positions. When the tag ID_wall is included in the output of the NLU, the information of the panel corresponding to that tag is returned. If only Dw_loc1 refers to a workpiece in the output of the NLU module, the target is determined from the x coordinates of the initial positions of the drywall panels and the word tagged as Dw_loc1. When both Dw_loc1 and Dw_loc2 are included in the output of the NLU, the target panel is described by its relative location, which depends on the secondary location. The x coordinate of the target panel’s initial position, which is finally used to extract the target information, is determined from the secondary place and the direction tagged with Dw_loc2 and Dw_loc1, respectively.
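To make the mapping rules concrete, the sketch below mimics the dimension-, ID- and location-based target resolution just described. TableD and ActHist follow the names used above, but their layouts and all field names are assumptions; this is an illustration, not the pseudocode of Figs. 10 and 11.

```python
# Illustrative re-creation (not the authors' pseudocode) of the target-resolution
# rules described above. TableD stands in for the drywall information table of
# Fig. 9(b) and ActHist for the action history table; both layouts are assumed.
TableD = [
    {"drywall_id": "500300", "w": 4, "l": 8, "x0": 0.0},   # standard / full-size panel
    {"drywall_id": "500330", "w": 2, "l": 8, "x0": 4.5},
]
ActHist = []   # records with keys such as drywall_id, w, l, installed_x_left, ...

def resolve_target(nlu_tags):
    """nlu_tags: dict mapping tag names (ID_wall, width, length, dim, Dw_loc1, ...) to words."""
    if "ID_wall" in nlu_tags:                               # explicit ID (rule of Fig. 11)
        return next(d for d in TableD if d["drywall_id"] == nlu_tags["ID_wall"])
    if "width" in nlu_tags and "length" in nlu_tags:        # numeric dimension (rule of Fig. 10)
        w, l = int(nlu_tags["width"]), int(nlu_tags["length"])
        return next(d for d in TableD if d["w"] == w and d["l"] == l)
    if nlu_tags.get("dim") in ("standard", "full-size", "full"):
        return next(d for d in TableD if d["w"] == 4 and d["l"] == 8)   # standard 4 x 8 panel
    if "previously" in nlu_tags.get("Dw_loc1", "") and ActHist:
        last = ActHist[-1]                                  # fall back to the action history
        return next(d for d in TableD if d["w"] == last["w"] and d["l"] == last["l"])
    if "Dw_loc1" in nlu_tags:                               # e.g. "leftmost": smallest x coordinate
        return min(TableD, key=lambda d: d["x0"])
    raise ValueError("target panel could not be resolved")
```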
Fig. 12 shows how to extract the information for a stud that is the final location of a pick-and-place operation. When the tag ID_stud is included in the output of the NLU, the information of the stud corresponding to that tag is returned. Otherwise, the output of the NLU includes St_loc1 or St_loc2, so that the stud is described by its location. When St_loc2 is not included, the stud is either the leftmost or the rightmost one. When both St_loc1 and St_loc2 are extracted, the stud serving as the final location is determined by the spatial relationship described by the words tagged as St_loc1 and St_loc2. To start a pick-and-place operation for drywall installation, it is essential to know the placement method as well as the target and the final location. Three types of placement methods are used in this study: Vr_md, Hr_top, and Hr_btm. If the output of the NLU module does not contain any of these three tags, the left edge of the drywall panel is set to be placed vertically to the left of the stud. The three pieces of information about the current job are recorded in the action history table. The installed_x_left value in the action history table is determined according to the combination of the placement method and the final location, and the installed_x_right value is calculated based on the placement method, the target, and the installed_x_left value. § EXPERIMENTAL RESULTS This study trained the BiLSTM-CRF model and BERT while varying the amount of training data to see the effect of training data size on the performance of the models. With different amounts of training data, four models with the same architecture were trained for both language models. Fig. 13(a) reports the training accuracy of the four BiLSTM-CRF models across 20 epochs. The four BERT models were trained across 5 epochs since they converged quickly, as shown in Fig. 13(b). The accuracy of LSTM-M1 and BERT-M1, which were trained with ample training data, increased considerably faster early in training. The performance of the eight models was evaluated on the validation set and compared in Table 3. In this study, two types of accuracy are computed to measure performance. Word-level accuracy (Acc_word) is computed over all words in the dataset and gives the proportion of words that are correctly predicted. All eight models achieved a high Acc_word of over 96%. However, even one incorrectly predicted tag in a language command can affect the IM module that derives the final robot command, causing disruptions in the robot’s performance. To address this problem, instruction-level accuracy (Acc_inst) considers whether all words in each instruction are correctly predicted, thus providing the proportion of language instructions in which all words are correctly predicted. For example, as shown in Table 3, Acc_word of LSTM-M4 was as high as 96.13%, but Acc_inst of LSTM-M4 was only 48.73%. This means that the robot can accurately perform only about 49% of the given language instructions. Out of all eight models, BERT-M1 achieved the highest accuracy, with 100.00% accuracy at both the word level and the instruction level. Generally, model performance increased with larger amounts of training data. BERT models, including BERT-M1, outperformed the BiLSTM-CRF models when trained on equivalent amounts of data. Even with a small dataset (BERT-M4), the model achieved an instruction-level accuracy of 79.11%, demonstrating the effectiveness of fine-tuning pre-trained models in such cases.
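For clarity, the two accuracy measures reported in Table 3 can be computed as in the sketch below (a restatement of the definitions above, not the authors' evaluation code).

```python
# Word-level vs. instruction-level accuracy. Inputs are per-instruction lists of
# gold and predicted tags (one tag per word).
def word_level_accuracy(gold, pred):
    correct = sum(g == p for g_seq, p_seq in zip(gold, pred)
                  for g, p in zip(g_seq, p_seq))
    total = sum(len(g_seq) for g_seq in gold)
    return correct / total

def instruction_level_accuracy(gold, pred):
    exact = sum(g_seq == p_seq for g_seq, p_seq in zip(gold, pred))
    return exact / len(gold)

# Example: one of two instructions has a single wrong tag, so Acc_word = 5/6
# while Acc_inst = 1/2.
gold = [["O", "dim", "O"], ["O", "ID_stud", "O"]]
pred = [["O", "dim", "O"], ["O", "O",       "O"]]
print(word_level_accuracy(gold, pred), instruction_level_accuracy(gold, pred))
```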
The study also confirmed that training with a minimal amount of data (equivalent to twice the validation set) resulted in a rapid decline in accuracy compared to the other models. The number of false predictions for the 13 tags is compared in Table 4. LSTM-M1 and LSTM-M2 had two wrong predictions for Dw_loc1 and Vr_md, respectively. As in the example in Fig. 14(a), ‘most left’ was incorrectly predicted as St_loc1 representing a stud instead of Dw_loc1 representing a drywall panel. Within our dataset, the word ‘middle’ is contextually labeled as Vr_md or Dw_loc1, which can occasionally increase the complexity of predictions. Fig. 14(b) shows that the word ‘middle’ was predicted as Dw_loc1 instead of Vr_md indicating the placement method. BERT-M2 also had one error, the word ‘middle’ corresponding to Dw_loc1 was predicted as Vr_md (Fig. 14(c)). These results may be due to the similarity of the words referring to the position and the placement method. Such issues tend to be mitigated when language models are trained with a large amount of data as shown in the previous deep learning-based studies <cit.>. LSTM-M4 and BERT-M4, which were trained with a limited amount of data, had 144 and 43 incorrect predictions, respectively. Most incorrect predictions occurred in the Dw_loc1 category. LSTM-M4 displayed a high number of prediction errors for the Dw_loc1, St_loc1, Vr_md, and width labels. In contrast, BERT-M4 had far fewer prediction errors in these categories, which is attributed to its token-level classification approach and pre-trained BERT original version. However, unlike other models, BERT-M4 exhibited a high error rate in predicting Hr_btm, with all corresponding words being incorrectly predicted as Hr_top. This suggests that when BERT models are trained with small datasets, placement methods may be mispredicted, leading to incorrect positioning of the target panel on the stud by the robot. In the test dataset, BERT-M1, which exhibited the best performance, achieved a word-level accuracy of 99.95% with two incorrect predictions and an instruction-level accuracy of 99.37% with one error. The error occurred when the values corresponding to width and length were incorrectly predicted as length and width, respectively. In the test using the BERT-M1 on the Google Colab platform, which offers the use of free GPU, the results showed that the average prediction time for one instruction was about 0.025 seconds. The 158 test data can be categorized into four groups based on the number of sentences: 46 one-sentence instructions, 74 two-sentences instructions, 27 three-sentences instructions, and 11 four-sentences instructions. The average prediction time of each group was 0.0224 seconds, 0.0176 seconds, 0.0324 seconds, and 0.0606 seconds, respectively. As the number of sentences in a single instruction increased, the analysis time tended to increase as well. In other words, time performance is better when the number of sentences is smaller. However, the absolute value was negligible across all sentence groups, showing the effectiveness of the NLU module. Using studs and drywall panels introduced in the case study, drywall panels can be placed in three different types as shown in Fig. 15. The layouts in Fig. 15(a) and Fig. 15(b) use one unique panel A and one unique panel B, and two standard panels installed vertically and horizontally, respectively. In the layout in Fig. 15(c), two types of distinct panels are placed vertically. 
Drywall installation is demonstrated based on the outputs of the NLU module and the IM module for three drywall layouts. The input data of the NLU module were selected from the test dataset. Demonstration results for the layout 1 are shown in the Fig. 16. Figs. 16(a)-(d) show a pair of a natural language instruction and how the KUKA robot successfully placed a panel for each instruction. As a result of IM for the instruction in Fig. 16(a), the drywall panel 500320 and the stud 500100 were determined as the target and the final location, respectively. The target panel was installed perpendicular to the left line of the stud. The first row of the action history table in Fig. 16(c) shows this result. As shown in Fig. 16(b), the drywall panel was installed vertically on the center line of the stud because Vr_md was predicted as a result of the NLU module for the second sentence of the language instruction. The second row of the fourth and fifth columns in Fig. 16(e) shows this result. In Fig. 16(c) and Fig. 16(d), “second to the left” and “left” were tagged as St_loc1, and “500109” and “500111” were tagged as St_loc2 in the NLU module. The rules of the IM module shown in Fig. 11 determined the stud 500107 and the stud 500110 as the final location for the third and fourth instructions, respectively. According to the action history table about the output of the IM, the robot installed drywall panels onto the stud walls. Fig. 17 and Fig. 18 show the natural language instructions and demonstration results for layout 2 and layout 3. As shown in both figures, the robot successfully installed drywall panels by extracting correct information for pick-and-place operations from the NLU and IM modules. §.§ Co-reference issue This study focused on words distinctly characterizing targets and destinations when establishing annotation rules, rather than all words denoting the targets and destinations. This annotation strategy was chosen due to the insufficiency of generic words like drywall, stud or pronouns in clearly distinguishing among multiple panels or studs. However, co-reference issues are crucial for robots to thoroughly interpret human instructions. Thus, additional experiments addressing co-reference issues were conducted using BERT to evaluate the impacts of the co-reference issues in this study. The dataset was re-annotated with two additional labels: Trg and Dst, representing a target and destination, respectively. For instance, in a three-sentences instruction “Please move the wall panel and move it on the stud 500100. Place it to the upper horizontal row. The dimension of the drywall is 4 by 8”, ‘wall panel’ in the first sentence, ‘it’ in the second sentence, and ‘drywall’ in the third sentence were annotated as Trg while ‘stud’ in the first sentence was annotated as Dst. BERT was trained following the same procedure as the prior experiments with variations in the volume of training data. Fig. 19 presents the training accuracy for the re-annotated datasets comprising 316, 632, 948, and 1,268 instructions. The insights from Fig. 13(b) and Fig. 19 reveal that the impact of the co-reference issue on training accuracy is not significant in this study. Initially, in epoch 1, the BERT-C models exhibited lower accuracy in comparison to the BERT-M models. However, as training progressed up to epoch 5, the training accuracy of both BERT-C and BERT-M models converged and became similar. Table 5 presents a comprehensive summary of the performance of the trained models on the validation dataset. 
It can be observed that BERT-C models, which considered co-reference issues, displayed slightly lower performance compared to the BERT-M models, which did not consider co-reference. However, with a large amount of training data, both BERT-C1 and BERT-C2 achieved accuracy close to 100%. These findings indicate that while co-reference issues may have a minor impact on performance, the BERT models trained with co-reference consideration can still achieve high accuracy when provided with a large amount of training data. § DISCUSSION This paper presented a framework of a natural language-enabled HRC system that consists of three steps: natural language understanding, information mapping, and robot control. The proposed approach enables human workers to interact with construction robots using natural language instructions and building component information. The proposed system was validated through a case study on drywall installation and BERT-M1 achieved a highest accuracy of 99.37% at instruction-level for the 158 test data in the NLU module. Even with a small amount of training data, BERT achieved an instruction-level accuracy close to 80%, suggesting that it is an effective approach for analyzing natural language instructions in the context of construction robotics. However, it should be noted that BERT-based models may require more training time compared to BiLSTM-based models <cit.>. Therefore, if the amount of available data is sufficient, it may be worthwhile to consider using the BiLSTM-CRF model, which has shown similar performance to BERT for tagging tasks in this study. In the IM and RC module, it is observed that drywall installation tasks were performed successfully through natural interaction using language instructions. This study clearly demonstrates that the proposed system has significant potential for field implementation to achieve natural interaction with robots in construction. Even though the proposed method achieved high performance on the given datasets, there are still some challenges that must be addressed. The proposed method used text data as input and a virtual robot digital twin in the experiments instead of using voice data with a real robot deployed on an actual worksite. In the real world, the background noise on-site can interfere in the recognition of the spoken language instructions, which can result in low accuracy in the sequence labeling tasks. In addition, Hatori et al. <cit.> found that the grasping ability of a robot introduced some problems even though the detection of a destination box and a target object achieved high accuracy. In a future study, the authors will explore how spoken language instructions and a physical robot affect the result of the communication between human workers and robots on construction sites. Second, the proposed framework relies entirely on the output of the NLU module to generate the final command in the IM module. Consequently, if the NLU module's prediction is incorrect, the IM module's output will be incorrect as well. Future studies can explore integrating the NLU and IM modules and utilizing natural language instructions and building component information as inputs for training together. This could potentially improve the framework's overall accuracy and robustness. Third, the case study was conducted in a single stud structure. In the environment setting, a fixed perspective was used to describe locations of the panels and studs. 
In future work, the proposed approach can be improved by extending the proposed system to complex structures and by allowing the perspective from which human workers describe locations to change. Finally, bidirectional communication was not considered in the proposed system. This implies that human workers cannot intervene in robot tasks or provide new plans when the robot encounters difficulties, which would be required for a higher level of HRC. This limitation highlights the need for more sophisticated communication protocols and a deeper understanding of human-robot interaction. To address this, the authors will consider bidirectional communication in a future study to improve the proposed system and increase the level of natural interaction with construction robots. § CONCLUSION This study made several contributions. First, the research laid the foundation for natural interaction with construction robots by using natural language instructions. To the best of our knowledge, it is the first study to demonstrate interaction with construction robots using natural language instructions and building component information. A demonstration of the proposed system using natural language instructions showed the potential of HRC through speech channels in construction. We extracted information about target objects, destinations, and placement orientation that can be applied to other pick-and-place operations in construction tasks, such as ceiling tile installation, wall tile installation, or bricklaying. Even though the application of the proposed framework was demonstrated through drywall installation, the framework itself, consisting of three modules (NLU, IM, and RC), is generalizable and adaptable to any pick-and-place construction task, making this technical contribution broadly applicable. Second, to address the lack of an existing dataset suitable for drywall installation, a natural language instruction dataset was created based on human interactions and work observed in construction videos and related studies. The dataset stands out due to its fine-grained annotation: it was meticulously annotated to capture the information necessary for pick-and-place operations, including unique characteristics such as IDs, dimensions, or locations. This annotation process enhanced the quality and depth of the labeled data, making our dataset a valuable resource for advancing research in the field of construction-related natural language processing. Third, the proposed system facilitates interaction with the robot by using the information available in construction projects. By mapping building component information to the analyzed language instructions, human operators can give language instructions to a robot in a shorter or more intuitive way. We believe that this approach significantly contributes to the development of a practical and efficient human-robot collaboration system on construction sites. Finally, two different language models, BiLSTM-CRF and BERT, were trained with labels reflecting the characteristics of construction activities. The results from the two language models were compared and the resulting insights were discussed. It was found that both existing language models worked well with the newly generated dataset. In addition, BERT was an effective approach even when limited training data is available. Even when trained with 632 instructions, it achieved an instruction-level accuracy of 96% on the validation set.
This has important implications for the construction industry, where there is a lack of data for natural language instructions. By leveraging pre-trained models like BERT and fine-tuning them, it is possible to overcome this challenge and achieve high levels of accuracy. In addition, this study showed that BERT achieved high accuracy when trained with a large amount of data while taking co-reference issues into account. Overall, the proposed system demonstrated significant potential in utilizing natural interaction using spoken language instructions in human robot collaboration in construction. It can allow human workers to easily learn how to collaborate with robots through the natural and intuitive interface. § ACKNOWLEDGMENTS The work presented in this paper was supported financially by two United States National Science Foundation (NSF) Awards: 2025805 and 2128623. The support of the NSF is gratefully acknowledged. unsrt
Globally linked pairs of vertices in generic frameworks
Tibor Jordán and Soma Villányi
arXiv: http://arxiv.org/abs/2307.04451v1 (math.CO; math.MG)
A d-dimensional framework is a pair (G,p), where G=(V,E) is a graph and p is a map from V to ℝ^d. The length of an edge xy∈ E in (G,p) is the distance between p(x) and p(y). A vertex pair {u,v} of G is said to be globally linked in (G,p) if the distance between p(u) and p(v) is equal to the distance between q(u) and q(v) for every d-dimensional framework (G,q) in which the corresponding edge lengths are the same as in (G,p). We call (G,p) globally rigid in ^d when each vertex pair of G is globally linked in (G,p). A pair {u,v} of vertices of G is said to be weakly globally linked in G in ^d if there exists a generic framework (G,p) in which {u,v} is globally linked. In this paper we first give a sufficient condition for the weak global linkedness of a vertex pair of a (d+1)-connected graph G in ^d and then show that for d=2 it is also necessary. We use this result to obtain a complete characterization of weakly globally linked pairs in graphs in ^2, which gives rise to an algorithm for testing weak global linkedness in the plane in O(|V|^2) time. Our methods lead to a new short proof for the characterization of globally rigid graphs in ^2, and further results on weakly globally linked pairs and globally rigid graphs in the plane and in higher dimensions. § INTRODUCTION A d-dimensional framework is a pair (G,p), where G=(V,E) is a graph and p is a map from V to ℝ^d. We also say that (G,p) is a realization of G in ℝ^d. The length of an edge uv∈ E in (G,p) is ||p(u)-p(v)||, where ||.|| denotes the Euclidean norm in ℝ^d. Two frameworks (G,p) and (G,q) are equivalent if corresponding edge lengths are the same, that is, ||p(u)-p(v)||=||q(u)-q(v)|| holds for all pairs u,v with uv∈ E. The frameworks (G,p) and (G,q) are congruent if ||p(u)-p(v)||=||q(u)-q(v)|| holds for all pairs u,v with u,v∈ V. A d-dimensional framework (G,p) is called globally rigid if every equivalent d-dimensional framework (G,q) is congruent to (G,p). This is the same as saying that the edge lengths of (G,p) uniquely determine all the pairwise distances. It is NP-hard to test whether a given framework in ^d is globally rigid, even for d=1 <cit.>. This fundamental property of frameworks becomes more tractable if we consider generic frameworks. A framework (G,p) (and the set {p(v):v∈ V(G)}) is said to be generic if the set of its d|V(G)| vertex coordinates is algebraically independent over ℚ. It is known that in a given dimension the global rigidity of a generic framework (G,p) depends only on G: either every generic realization of G in ^d is globally rigid, or none of them are <cit.>. Thus, we say that a graph G is globally rigid in ^d if every (or equivalently, if some) d-dimensional generic realization of G is globally rigid in ^d. For d=1,2, combinatorial characterizations and corresponding deterministic polynomial time algorithms are known for (testing) global rigidity in ^d. The case d=1 is a folklore result: it is not hard to see that a graph G on at least three vertices is globally rigid in ^1 if and only if it is 2-connected. The necessary and sufficient conditions for d=2 are stated as Theorem <ref> in the next section. The existence of such a characterization (or algorithm) for d≥ 3 is a major open question.
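The following small computation illustrates the definitions above: two realizations of the same graph that are equivalent but not congruent, so that the non-adjacent pair {1,4} is not globally linked in the first realization. The graph and the (non-generic) coordinates are chosen purely for illustration and are not taken from the paper.

```python
# Two equivalent but non-congruent 2-dimensional realizations of the same graph.
# The graph consists of two triangles 1-2-3 and 2-3-4 glued along the edge 2-3;
# reflecting vertex 4 in the line through 2 and 3 preserves every edge length but
# changes the distance between 1 and 4, so the pair {1,4} is not globally linked
# in this (non-generic, illustrative) realization.
import numpy as np

edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
p = {1: np.array([0.3, 0.8]), 2: np.array([0.0, 0.0]),
     3: np.array([1.0, 0.0]), 4: np.array([0.6, 0.9])}
q = dict(p)
q[4] = np.array([0.6, -0.9])    # reflect p(4) in the x-axis, i.e. the line through p(2), p(3)

equivalent = all(np.isclose(np.linalg.norm(p[u] - p[v]), np.linalg.norm(q[u] - q[v]))
                 for u, v in edges)
same_uv_distance = np.isclose(np.linalg.norm(p[1] - p[4]), np.linalg.norm(q[1] - q[4]))
print(equivalent, same_uv_distance)   # True False: equivalent but not congruent
```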
For more details on globally rigid graphs and frameworks see e.g. <cit.>. In this paper we consider a refined, local version, in which we are interested in whether the edge lengths of a framework uniquely determine the distance between a given pair of vertices, rather than all pairs of vertices. We shall need the following notions. Following <cit.>, we say that a pair of vertices {u,v} in a d-dimensional framework (G,p) is globally linked in (G,p) if for every equivalent d-dimensional framework (G,q) we have ||p(u)-p(v)||=||q(u)-q(v)||. Global linkedness in ^d is not a generic property (for d≥ 2): a vertex pair may be globally linked in some generic d-dimensional realization of G without being globally linked in all generic realizations. See Figure <ref>. We say that a pair {u,v} is globally linked in G in ^d if it is globally linked in all generic d-dimensional frameworks (G,p). We call a pair {u,v} weakly globally linked in G in ^d if there exists a generic d-dimensional framework (G,p) in which {u,v} is globally linked. If {u,v} is not weakly globally linked in G, then it is called globally loose in G. It is immediate from the definitions that G is globally rigid in ^d if and only if each vertex pair is globally linked in G in ^d. As we shall see, the global rigidity of G already follows from the (seemingly weaker) condition that each vertex pair is weakly globally linked in G (see Lemma <ref>(c)). The case d=1 is exceptional and well-understood. Global linkedness in ^1 is a generic property: a pair {u,v} is globally linked in G in ^1 if and only if there is a cycle in G that contains both u and v. Otherwise {u,v} is globally loose. For d≥ 2 no combinatorial (or efficiently testable) characterization has previously been found for globally linked or weakly globally linked pairs in graphs in ^d. These problems belong to the few major problems in combinatorial rigidity which have remained unsolved for d=2. The main result of this paper is a solution for the weakly globally linked pairs problem in two dimensions. We shall first give a sufficient condition for the weak global linkedness of a vertex pair of a (d+1)-connected graph G in ^d (Theorem <ref>) and then show that in a sense the condition is also necessary in the case of 3-connected graphs in ^2 (Theorem <ref>). The general case of the two-dimensional problem is reduced to the 3-connected case by a sequence of lemmas that describe how global linkedness is affected by cutting a graph along a separating pair. These results lead to the main result (Theorem <ref>), which gives a characterization of weakly globally linked pairs of vertices in ^2 and gives rise to an O(|V|^2) algorithm for the corresponding decision problem. Our methods and results lead to a new short proof for the sufficiency part of Theorem <ref>. We also obtain a number of other structural results on weakly globally linked pairs and globally rigid graphs in ^2 and in higher dimensions. Even though most of the known results (and conjectures) on global linkedness are concerned with globally linked pairs of graphs in ^2, their characterization remains open. Globally linked pairs in two dimensions have been characterized in minimally rigid graphs <cit.>, braced maximal outerplanar graphs <cit.>, and in R_2-connected graphs <cit.>. In the latter two cases global linkedness turns out to be a generic property. Hence these two results give rise to the characterization of weakly globally linked pairs, too, in the corresponding families of graphs. 
A conjectured characterization of globally linked pairs in ^2 can be found in <cit.>. A few partial results in higher dimensions are also available, see <cit.>. The rest of the paper is organized as follows. In Section <ref> we introduce the necessary notions concerning rigid graphs and frameworks. In Section <ref> we prove some simple but fundamental lemmas on weakly globally linked pairs in ^d. Section <ref> contains most of the d-dimensional results (two key geometric lemmas and a sufficient condition for weak global linkedness), and the new proof for Theorem <ref>. In Section <ref> we state and prove our main result, a complete characterization of the weakly globally linked pairs in ^2. In Section <ref> we discuss the algorithmic aspects and collect a few concluding remarks and questions. § PRELIMINARIES In this section we introduce the notions and results from the theory of (globally) rigid frameworks and graphs that we shall use. §.§ Rigid graphs and the rigidity matroid In the structural results on global rigidity and global linkedness the notions of rigid frameworks, rigid graphs and the rigidity matroid play a key role. The d-dimensional framework (G,p) is rigid if there exists some ε >0 such that, if (G,q) is equivalent to (G,p) and ||p(v)-q(v)||< ε for all v∈ V, then (G,q) is congruent to (G,p). This is equivalent to requiring that every continuous motion of the vertices of (G,p) in ^d that preserves the edge lengths takes the framework to a congruent realization of G. It is known that in a given dimension the rigidity of a generic framework (G,p) depends only on G: either every generic realization of G in ^d is rigid, or none of them are <cit.>. Thus, we say that a graph G is rigid in ^d if every (or equivalently, if some) d-dimensional generic realization of G is rigid in ^d. For d=1,2, combinatorial characterizations and corresponding deterministic polynomial time algorithms are known for (testing) rigidity in ^d, see e.g. <cit.>. The existence of such a characterization (or algorithm) for d≥ 3 is a major open question. The following elementary result is well-known. For the proof of the two-dimensional case see <cit.>. Suppose that (G,p) is a rigid generic framework. Then the number of distinct congruence classes of frameworks which are equivalent to (G,p) is finite. The rigidity matroid of a graph G is a matroid defined on the edge set of G which reflects the rigidity properties of all generic realizations of G. For a general introduction to matroid theory we refer the reader to <cit.>. Let (G,p) be a realization of a graph G=(V,E) in ^d. The rigidity matrix of the framework (G,p) is the matrix R(G,p) of size |E|× d|V|, where, for each edge uv∈ E, in the row corresponding to uv, the entries in the d columns corresponding to vertices u and v contain the d coordinates of (p(u)-p(v)) and (p(v)-p(u)), respectively, and the remaining entries are zeros. The rigidity matrix of (G,p) defines the rigidity matroid of (G,p) on the ground set E by linear independence of the rows. It is known that any pair of generic frameworks (G,p) and (G,q) have the same rigidity matroid. We call this the d-dimensional rigidity matroid R_d(G)=(E,r_d) of the graph G. We denote the rank of R_d(G) by r_d(G). A graph G=(V,E) is R_d-independent if r_d(G)=|E| and it is an R_d-circuit if it is not R_d-independent but every proper subgraph G' of G is R_d-independent. We note that in the literature such graphs are sometimes called M-independent in ^d and M-circuits in ^d, respectively. 
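The rigidity matrix defined above can be written down directly; with random (numerically generic) coordinates its rank estimates r_d(G), and comparing this rank with d|V|-d(d+1)/2 (Gluck's theorem, recalled below) tests rigidity for graphs on at least d+1 vertices. The sketch below is an illustration and is not part of the paper.

```python
# Numerical sketch of the rigidity matrix R(G,p) defined above: for each edge uv
# the row contains p(u)-p(v) in the d columns of u and p(v)-p(u) in the d columns
# of v. With random (numerically generic) coordinates, rank(R) estimates r_d(G).
import numpy as np

def rigidity_matrix(n_vertices, edges, p):
    d = p.shape[1]
    R = np.zeros((len(edges), d * n_vertices))
    for row, (u, v) in enumerate(edges):
        R[row, d*u:d*u+d] = p[u] - p[v]
        R[row, d*v:d*v+d] = p[v] - p[u]
    return R

def is_rigid(n_vertices, edges, d=2, seed=0):
    p = np.random.default_rng(seed).random((n_vertices, d))
    rank = np.linalg.matrix_rank(rigidity_matrix(n_vertices, edges, p))
    return rank == d * n_vertices - d * (d + 1) // 2   # rank formula of Gluck's theorem

# K_4 minus an edge is rigid in the plane, a 4-cycle is not:
print(is_rigid(4, [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]))  # True
print(is_rigid(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))          # False
```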
An edge e of G is an R_d-bridge in G if r_d(G-e)=r_d(G)-1 holds. Equivalently, e is an R_d-bridge in G if it is not contained in any subgraph of G that is an R_d-circuit. The following characterization of rigid graphs is due to Gluck. <cit.> Let G=(V,E) be a graph with |V|≥ d+1. Then G is rigid in ^d if and only if r_d(G)=d|V|-d+12. A graph is minimally rigid in ^d if it is rigid in ^d but G-e is not rigid in ^d for every edge e of G. By Theorem <ref>, minimally rigid graphs in ^d on at least d+1 vertices have exactly d|V| - d+12 edges. Let G=(V,E) be a graph and {u,v} be a pair of vertices of G. An induced subgraph G[X] (and the set X), for some X⊆ V, is said to be (u,v)-rigid in ^d (or simply (u,v)-rigid, if d is clear from the context), if G[X] is rigid in ^d and u,v∈ X. We say that a (u,v)-rigid subgraph G[X] is vertex-minimally (u,v)-rigid, if G[X'] is not (u,v)-rigid for all proper subsets X'⊂ X. The pair {u,v} is called linked in G in ^d if r_d(G+uv)=r_d(G) holds. It is known that a pair {u,v} is linked in G in ^2 if and only if there exists a (u,v)-rigid subgraph of G. A graph G with at least three edges is called redundantly rigid in ^d if G-e is rigid in ^d for all e∈ E(G). Let M be a matroid on ground set E. We can define a relation on the pairs of elements of E by saying that e,f∈ E are equivalent if e=f or there is a circuit C of M with {e,f}⊆ C. This defines an equivalence relation. The equivalence classes are the connected components of M. The matroid is connected if it has only one connected component. A graph G=(V,E) is R_d-connected if R_d(G) is connected. We shall use the well-known fact that if v is a vertex of degree at most d in G, then every edge incident with v is an R_d-bridge in G. Hence the addition of a new vertex of degree d to a rigid graph G in ^d preserves rigidity. For more details on the 2-dimensional rigidity matroid, see <cit.>. §.§ Globally rigid graphs The following necessary conditions for global rigidity are due to Hendrickson. <cit.> Let G be a globally rigid graph in ^d on at least d+2 vertices. Then G is (d+1)-connected and redundantly rigid in ^d. For d=1,2 the conditions of Theorem <ref> together are sufficient to imply global rigidity. It is not the case for d≥ 3. The characterization of globally rigid graphs in ^2 is as follows. <cit.> Let G be a graph on at least four vertices. Then G is globally rigid in ^2 if and only if G is 3-connected and redundantly rigid in ^2. An equivalent characterization of global rigidity, in terms of the rigidity matroid of G, follows from the next lemma. <cit.> Let G be a graph with at least two edges. If G is R_2-connected, then G is redundantly rigid in ^2. Furthermore, if G is 3-connected and redundantly rigid in ^2, then G is R_2-connected. We shall also use the following lemma. <cit.> Let G be a rigid, but not redundantly rigid graph in ^2, and suppose that all R_2-bridges of G are edges of the same triangle in G. Then G is not 3-connected. § PROPERTIES OF WEAKLY GLOBALLY LINKED PAIRS IN ^D We first collect some basic properties that hold in ^d for all d≥ 1. The following lemma was stated for d=2 in <cit.> but the proof works for all d≥ 1. An edge e of a globally rigid graph H is critical if H-e is not globally rigid. <cit.> Let G=(V,E) be a graph and u,v∈ V. Suppose that uv∉ E, and that G has a globally rigid supergraph in ^d in which uv is a critical edge. Then {u,v} is globally loose in G in ^d. We shall frequently use the next key lemma. 
For a graph G=(V,E) and integer d≥ 1 let J_d(G)={uv : u,v∈ V, uv∉ E, {u,v} is weakly globally linked in G in ^d}. Let G=(V,E) be a graph and let F be a set of edges on vertex set V. Then the following hold. (a) If G+J_d(G)+F is globally rigid in ^d, then G+F is globally rigid in ^d. (b) If G+uv is globally rigid in ^d for some uv∈ J_d(G), then G is globally rigid in ^d. (c) G is globally rigid in ^d if and only if all pairs of vertices in G are weakly globally linked in ^d. Let us fix d and put J=J_d(G). (a) Suppose, for a contradiction, that G+J+F is globally rigid and G+F is not. Then there is a (possibly empty) subset J'⊂ J and an edge uv∈ J-J' for which G+J'+F is not globally rigid, but G̅=G+J'+F+uv is globally rigid. Then uv is a critical edge in G̅, and hence {u,v} is globally loose in G by Lemma <ref>, a contradiction. (b) If G+uv is globally rigid for some uv∈ J then G+J is globally rigid. Thus putting F=∅ and applying (a) gives that G is globally rigid. (c) Necessity is obvious. If all pairs of vertices in G are weakly globally linked, then G+J is a complete graph, which is globally rigid. Again, putting F=∅ and applying (a) gives that G is globally rigid. It is well-known that if {u,v} is not linked in G in ^d, then every generic d-dimensional realization (G,p) has a flex (i.e. a continuous motion of the vertices that preserves the edge lengths) to another framework (G,q), for which ||p(u)-p(v)||≠ ||q(u)-q(v)||. This implies the next lemma. Let G=(V,E) be a graph and let {u,v} be a non-adjacent vertex pair. If {u,v} is not linked in G in ^d then {u,v} is globally loose in G in ^d. Let H=(V,E) be a graph and x,y∈ V. We use κ_H(x,y) to denote the maximum number of pairwise internally disjoint xy-paths in H. Note that if xy∉ E then, by Menger's theorem, κ_H(x,y) is equal to the size of a smallest set S⊆ V-{x,y} for which there is no xy-path in H-S. The following lemma is the d-dimensional and slightly stronger version of <cit.>. Let G=(V,E) be a graph and let {u,v} be a non-adjacent vertex pair with κ_G(u,v)≤ d. Then {u,v} is globally loose in G in ^d. Let G_i=(V_i,E_i) be a graph, t ≥ 1 an integer, and suppose that K_i is a complete subgraph of G_i on t vertices, for i=1,2. Then the t-clique sum operation on G_1,G_2, along K_1,K_2, creates a new graph G by identifying the vertices of K_1 with the vertices of K_2, following some bijection between their vertex sets. The clique sum operation is a t-clique sum operation for some t≥ 1. In the following lemma sufficiency follows from the simple obervation that if a vertex pair is weakly globally linked in a subgraph of G, then it is also weakly globally linked in G. Necessity follows from the fact that the clique sum operation is performed along a complete (and hence globally rigid) subgraph. Suppose that G is the clique sum of G_1 and G_2 and let u,v∈ V(G_1). Then {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G_1 in ^d. § A SUFFICIENT CONDITION FOR WEAK GLOBAL LINKEDNESS IN ^D In this section we provide a new sufficient condition for the weak global linkedness of a pair of vertices of a (d+1)-connected graph in ^d. An important ingredient in our proof is a geometric lemma (Lemma <ref>) presented in the next subsection. In Subsection <ref> we prove the aforementioned sufficient condition and in Subsection <ref> we show how it can be used to prove the sufficiency part of Theorem <ref>. In the last subsection we shall see that an appropriate reverse of Lemma <ref> is also true (Lemma <ref>). 
This lemma will be used in the next section where we characterize weak global linkedness in two dimensions. Roughly speaking these two lemmas show that if a vertex pair {u,v} belongs to a rigid subgraph H of G, then the contraction of a connected subgraph of G-V(H) does not change the weak global linkedness properties of {u,v}. §.§ The first contraction lemma A basic graph operation is the contraction of a subset V_0 of V in the graph G=(V,E). This operation, which is denoted by G/V_0, identifies the vertices of V_0 and removes the loops and parallel copies of the edges of the resulting graph that it may create. The contraction of an edge e=xy is the contraction of the set {x,y} and it is denoted by G/e. Let G=(V,E) be a graph, u,v∈ V, and suppose that G[V_0] is a (u,v)-rigid subgraph of G. Let e=(s_1,s_2)∈ E-E(G[V_0]) be an edge. If {u,v} is weakly globally linked in G/e in ^d, then {u,v} is weakly globally linked in G in ^d. We may assume that G is connected and s_2∉ V_0. Let s denote the vertex of G/e obtained by identifying s_1 and s_2 in G. Note that we may have s_1∈ V_0. In this case we shall simply identify s with s_1 for notational convenience. Let (G/e,p) be a generic realization of G/e in which {u,v} is globally linked. Let (G,p_i) be a sequence of generic realizations of G, for which p_i|_V-s_1-s_2=p|_V-s, p_i(s_1)=p(s), and p_i(s_2)→ p(s). Suppose, for a contradiction, that {u,v} is globally loose in G. Then {u,v} is not globally linked in (G,p_i) for all i≥ 1. Hence for all i≥ 1 there exists a realization (G,p_i'), equivalent to (G,p_i), for which ||p_i'(u)-p_i'(v)||≠ ||p_i(u)-p_i(v)||=||p(u)-p(v)||. Since G[V_0] is rigid and p|_V_0=p_i|_V_0, it follows from Proposition <ref> that there is an ϵ >0 such that for all i≥ 1, |||p_i'(u)-p_i'(v)||-||p(u)-p(v)|||≥ϵ. Since G is connected, we can translate each framework, if necessary, so that for all i≥ 1, (G,p_i') is in the interior of a ball of radius K, centered at the origin, for some fixed positive real number K. Thus there is a convergent subsequence p_i_k'→ p'. Since (s_1,s_2)∈ E, we must have p'(s_1)=p'(s_2). By extending p'|_V-s_1-s_2 with p'(s)=p'(s_1), we obtain a realization (G/e,p') which is equivalent to (G/e,p). Furthermore, we have |||p'(u)-p'(v)||-||p(u)-p(v)|||≥ϵ, which contradicts the fact that {u,v} is globally linked in (G/e,p). Thus {u,v} is weakly globally linked in G. We obtain the following sufficient (but not necessary, see Figure <ref>) condition for weak global linkedness as a corollary. Let G=(V,E) be a graph, u,v∈ V. Suppose that there is some V_0⊂ V such that G[V_0] is a (u,v)-rigid subgraph of G in ^d, and there is a uv-path in G that is internally disjoint from V_0. Then {u,v} is weakly globally linked in G in ^d. Corollary <ref>, together with Lemma <ref>, leads to short proofs for some previous results on globally rigid graphs. We illustrate this by the following theorem. <cit.> Let G_1 and G_2 be two globally rigid graphs in ^d on at least d+2 vertices, with exactly d+1 vertices in common. Suppose that e is a common edge. Then G=G_1∪ G_2-e is globally rigid in ^d. Let e=uv. Theorem <ref> implies that G_1-e is rigid. Since G_2 is (d+1)-connected, there is a path from u to v in G that is internally disjoint from G_1. Thus {u,v} is weakly globally linked in G by Corollary <ref>. It is easy to see that G+uv is globally rigid. Hence G is also globally rigid by Lemma <ref>. By using the same proof idea we obtain a simple proof of the “rooted minor" theorem of Tanigawa <cit.>. 
§.§ The sufficient condition Let G=(V,E) be a graph, ∅≠ X⊆ V, and let V_1,V_2,…, V_r be the vertex sets of the connected components of G-X. The graph Con(G,X) is obtained from G by contracting each vertex set V_i into a single vertex v_i, 1≤ i≤ r. The graph Clique(G,X) is obtained from G by deleting the vertex sets V_i, 1≤ i≤ r, and adding a new edge xy for all pairs x,y∈ N_G(V_i), xy∉ E, for 1≤ i≤ r. See Figure <ref>. Let G=(V,E) be a (d+1)-connected graph. Suppose that G[V_0] is a rigid subgraph of G for some V_0⊆ V. Then Clique(G,V_0) is globally rigid in ^d if and only if Con(G,V_0) is globally rigid in ^d. Let E' be the set of those edges in Clique(G,V_0) that are not in G[V_0]. Let H= Con(G,V_0)+E'. It follows from Corollary <ref> that {u,v} is weakly globally linked in Con(G,V_0) for all uv∈ E'. Hence, by Lemma <ref>, Con(G,V_0) is globally rigid if and only if H is globally rigid. H can be obtained from Clique(G,V_0) by adding new vertices and joining them to cliques of size at least d+1. Thus H is globally rigid if and only if Clique(G,V_0) is globally rigid. We are ready to state the main result of this section. Let G=(V,E) be a (d+1)-connected graph and u,v∈ V. Suppose that G[V_0] is a (u,v)-rigid subgraph of G in ^d. If Clique(G,V_0) is globally rigid in ^d, then {u,v} is weakly globally linked in G in ^d. Suppose that Clique(G,V_0) is globally rigid in ^d. Then so is Con(G,V_0) by Lemma <ref>. In particular, {u,v} is weakly globally linked in Con(G,V_0). Since Con(G,V_0) can be obtained from G by contracting edges not induced by V_0, Lemma <ref> gives that {u,v} is weakly globally linked in G. §.§ Globally rigid graphs - a new proof Theorem <ref> and Lemma <ref> lead to a new short proof of the sufficiency part of Theorem <ref>, which only uses the simple combinatorial Lemmas <ref> and <ref> and the fact that the global rigidity of graphs in ^2 is a generic property. The original proof in <cit.> relies on an inductive construction of 3-connected R_2-connected graphs. (of sufficiency in Theorem <ref>) The proof is by induction on |V|. If |V|=4 then G is a complete graph on four vertices, which is globally rigid. So we may suppose that |V|≥ 5. First, we show that for all non-adjacent pairs u,v, there is a (u,v)-rigid proper induced subgraph G[X] of G. To see this consider two edges e,f∈ E incident with u and v, respectively. Since G is 3-connected and redundantly rigid, it is R_2-connected by Lemma <ref>. Hence there is an R_2-circuit C in G with e,f∈ E(C). Since |E(C)|=2|V|-2 and d_C(v)≥ 3 for all v∈ V(C), it follows that C has at least four vertices of degree three. Thus there is a vertex w∈ V(C) with w∉{u,v} and d_C(w)=3. Now X=V(C)-w induces the desired (u,v)-rigid subgraph. In the rest of the proof we show that every non-adjacent vertex pair {u,v} of G is weakly globally linked in G. The theorem will follow from this by Lemma <ref>(c). Let us fix u,v and consider a (u,v)-rigid proper induced subgraph G[X] of G. As we have shown above, such a subgraph exists. By Theorem <ref> it suffices to show that Clique(G,X) is globally rigid. Let D be the vertex set of a component of G-X and let H be obtained from G-D by adding a new edge xy for each non-adjacent pair x,y∈ N_G(D). Since G can be obtained from H by attaching a graph along a complete subgraph, and removing edges, the 3-connectivity of G implies that H is 3-connected. A similar argument shows that H is rigid, and so is H-e for every edge e in H not induced by N_G(D). 
Thus if H has some R_2-bridges, then they are all induced by N_G(D). If |N_G(D)|≥ 4, then each edge induced by N_G(D) belongs to a K_4 in H, so H cannot have R_2-bridges at all. If |N_G(D)|=3, then every R_2-bridge in H belongs to the same triangle, on the vertices of N_G(D). But that is impossible by Lemma <ref>. Therefore H is a rigid graph with no R_2-bridges, and hence it is redundantly rigid. By repeated applications of this argument we obtain that Clique(G,X) is 3-connected and redundantly rigid. Since |X|≤ |V|-1, we can now use induction to deduce that Clique(G,X) is globally rigid. This completes the proof. We remark that a different proof for the sufficiency part in Theorem <ref> was also given by Tanigawa <cit.>. The high level ideas of his proof and the proof given in this subsection are similar. By using our notation the main lemma <cit.> can be stated as follows: if v is a vertex of degree at least d+1 in G, G-v is rigid in ^d, and Clique(G,V-{v}) is globally rigid in ^d, then G is globally rigid in ^d. This statement is a special case of the “only if" direction of our Lemma <ref>. §.§ The second contraction lemma As a corollary of Lemma <ref>, it can be deduced that if G[V_0] is a (u,v)-rigid subgraph of a graph G, V_1 is the vertex set of a component of G-V_0 and {u,v} is weakly globally linked in G/V_1, then {u,v} is weakly globally linked in G. In this subsection we shall prove the converse of this statement, see Lemma <ref> below. We shall need some new notions and an auxiliary lemma. A configuration of a set U is a function that maps U into ^d. Two configurations p_1,p_2 of U are said to be congruent if ||p_1(u)-p_1(v)||=||p_2(u)-p_2(v)|| for all u,v∈ U. Suppose that p and q are two incongruent configurations of a set U. We call a point x∈^d (q,p)-feasible if there exists a point y∈^d such that ||p(u)-x||=||q(u)-y|| for all u∈ U. We then call y a (q,p)-associate of x. Observe that if π is an isometry of ^d, then the set of (q,p)-feasible points is equal to the set of (π∘ q, p)-feasible points. The affine hull of set X⊆^d will be denoted by Aff(X). Let p be a configuration of a set U. Suppose that Q={q_1,…,q_k} is a non-empty set of configurations of U such that q_i is not congruent to p, for all 1≤ i≤ k. Let F_i be the set of (q_i,p)-feasible points, 1≤ i≤ k. Then ^d-⋃_i=1^k F_i is a non-empty open set. Let S=^d-⋃_i=1^k F_i. We claim that F_i is closed for every i∈{1,…, k}, which will imply that S is open. Let x_j→ x be a convergent sequence with x_j∈ F_i, j∈ℕ, and let y_j be a (q_i,p)-associate of x_j. The set {y_j:j∈ℕ} is bounded. Hence there exists a convergent subsequence y_j_ℓ→ y. Then y is a (q_i,p)-associate of x, which gives x∈ F_i. This proves the claim. In the rest of the proof we show that S is non-empty. Notice that |U|≥ 2 must hold. We shall prove the following stronger statement by induction on |U|: for every a∈ U we have S∩Aff(p(U-{a}))≠∅ First suppose that |U|=2, and let U={a,b}. Then we have ||q_i(a)-q_i(b)||≠ ||p(a)-p(b)||, since q_i and p are not congruent. Thus p(b)∈ S, and hence (<ref>) follows. Next suppose that |U|≥ 3. Let Q'={q∈ Q: p|_U-{a} is not congruent to q|_U-{a}}, and let Q”=Q-Q'. By putting F'=⋃_q_i∈ Q'F_i and F”=⋃_q_i∈ Q”F_i, we have S=^d-F'-F”. By induction, the set Aff(p(U-{a,b}))-F' is non-empty for every b∈ U-{a}. Since F' is closed, this implies that Aff(p(U-{a}))-F' is non-empty and relatively open in Aff(p(U-{a})). 
We claim that for all q_i ∈ Q” the set F_i∩ Aff(p(U-{a})) is either empty or a proper affine subspace of Aff(p(U-{a})). To prove the claim, let q_i∈ Q”. By replacing q_i with π∘ q_i, where π is an appropriate isometry of ^d, we may assume that p|_U-{a}= q_i|_U-{a}. Then it follows from the incongruency of p and q_i that p(a)≠ q_i(a). Suppose that x∈ Aff(p(U-{a})) and y is a (q_i,p)-associate of x. Then there exists an isometry that fixes each point of p(U-{a}) and maps x to y. This isometry fixes each point of Aff(p(U-{a})), therefore, y=x. So the only possible (q_i,p)-associate of x is x itself. It follows that x is (q_i,p)-feasible if and only if ||x-p(a)||=||x-q_i(a)||, that is, if x is in the bisector hyperplane H of p(a) and q_i(a). Since q_i and p are not congruent, we obtain Aff(p(U-{a}))⊈H. This proves the claim. The lemma follows by noting that (<ref>) and (<ref>) yield that the set S∩Aff(p(U-{a}))=( Aff(p(U-{a}))-F')-F” is non-empty, and hence (<ref>) holds. Let G=(V,E) be a graph, u,v∈ V, and suppose that G[V_0] is a (u,v)-rigid subgraph of G. Let V_1 be the vertex set of some component of G-V_0. Then {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/V_1 in ^d. Since G/V_1 can be obtained from G by contracting edges not induced by V_0, the “if" direction follows by repeated applications of Lemma <ref>. To prove the “only if" direction suppose that {u,v} is weakly globally linked in G and let (G,p) be a generic realization of G in which {u,v} is globally linked. Let v_1 be the vertex of G/V_1 obtained by the contraction of V_1 in G. We shall prove that p|_V-V_1 has an extension to (V-V_1)∪{v_1} that is a generic realization of G/V_1 in which {u,v} is globally linked. We may assume that {u,v} is not globally linked in (G-V_1,p|_V-V_1), for otherwise we are done by choosing an arbitrary generic extension. Let q_1,…,q_k be a maximal set of pairwise incongruent configurations of V_0 such that ||q_i(u)-q_i(v)||≠ ||p(u)-p(v)|| and q_i is a restriction of some realization of G-V_1 which is equivalent to (G-V_1,p|_V-V_1), for 1≤ i≤ k. By our assumption k≥ 1. Proposition <ref> implies that k is finite, since G[V_0] is rigid, p is generic and (G[V_0],q_i) is equivalent to (G[V_0],p|_V_0). For all i∈{1,…,k} the configurations q_i|_N_G(V_1) and p|_N_G(V_1) are incongruent, for otherwise q_i would be extendible to a configuration q_i' so that (G,q_i') is equivalent to (G,p), contradicting the assumption that {u,v} is globally linked in (G,p). Applying Lemma <ref> to N_G(V_1), p|_N_G(V_1) and the set Q={q_1|_N_G(V_1),…, q_k|_N_G(V_1)} gives that there is some x=(x_1,…, x_d)∈^d for which x is not (q_i|_N_G(V_1),p|_N_G(V_1))-feasible for all i∈{1,…,k} and for which p(V-V_1)∪{x} is generic. We can now complete the proof of the lemma by considering the generic realization (G/V_1,p̄), where p̄|_V-V_1=p|_V-V_1 and p̄(v_1)=x. Then {u,v} is globally linked in (G/V_1,p̄). Indeed, the existence of an equivalent realization (G/V_1,q) with ||q(u)-q(v)||≠ ||p̄(u)-p̄(v)|| would imply that q|_V_0=q_i for some 1≤ i≤ k and that x is (q_i|_N_G(V_1),p|_N_G(V_1))-feasible, contradicting the choice of x. Let G=(V,E) be a graph, u,v∈ V, and let G[V_0] be a (u,v)-rigid subgraph of G. Let e=(s_1,s_2)∈ E be an edge with s_1,s_2∉ V_0.
Notice that Lemma <ref> implies that {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/e in ^d: each of these two conditions is equivalent to the condition that {u,v} is weakly globally linked in G/V_1, where V_1 is the connected component of G-V_0 that contains e. By the same argument, for any connected subgraph G_1 of G-G_0, {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/V(G_1) in ^d. § WEAKLY GLOBALLY LINKED PAIRS IN ^2 In this section we focus on the d=2 case. Thus, we shall occasionally write that a graph is (globally) rigid to mean that it is (globally) rigid in ^2, and we may similarly omit the dimension when referring to global linkedness of vertex pairs in graphs. This section contains one of our main results, a characterization of weakly globally linked pairs in graphs. §.§ Weakly globally linked pairs in 3-connected graphs We start with the special case of 3-connected graphs. By Lemma <ref> it suffices to consider non-adjacent linked pairs {u,v} of G, or equivalently, pairs {u,v} for which there exists some subgraph G_0=(V_0,E_0) of G with u,v∈ V_0 such that G_0+uv is an R_2-circuit. Let G=(V,E) be a 3-connected graph and u,v∈ V with uv∉ E. Suppose that G_0=(V_0,E_0) is a subgraph of G with u,v∈ V_0 such that G_0+uv is an R_2-circuit. Then {u,v} is weakly globally linked in G in ^2 if and only if Clique(G,V_0) is globally rigid in ^2. Since G[V_0] is rigid, sufficiency follows from Theorem <ref>. To prove the other direction suppose, for a contradiction, that {u,v} is weakly globally linked in G but Clique(G,V_0) is not globally rigid. Since G_0+uv is redundantly rigid, so is Clique(G,V_0)+uv. The 3-connectivity of G implies that Clique(G,V_0) is 3-connected. Thus Clique(G,V_0)+uv is globally rigid by Theorem <ref>. Hence {u,v} is globally loose in Clique(G,V_0) by Lemma <ref>. As G can be obtained from Clique(G,V_0) by clique sum operations and removing edges, Lemma <ref> implies that {u,v} is globally loose in G, a contradiction. §.§ Weakly globally linked pairs and 2-separators In this subsection we shall prove some lemmas that describe, among others, how weak global linkedness is affected when the graph is cut into two parts along a separating pair of vertices. These lemmas will enable us to reduce the question of whether a linked pair of vertices in a graph G is weakly globally linked to the case when G is 3-connected. We shall also need the following extension of <cit.>. Let G=(V,E) be a rigid graph, ab∈ E an R_2-bridge in G, and u,v∈ V with uv∉ E. Suppose that G has no (u,v)-rigid proper induced subgraph. Then {u,v} is globally loose in G. Let H=(V,B) be a minimally rigid spanning subgraph of G. Since ab is an R_2-bridge, we have ab∈ B. The graph H+uv contains a unique R_2-circuit C with u,v∈ V(C). Since G has no (u,v)-rigid proper induced subgraph, C=H+uv must hold. Let (G,p_0) be a generic realization of G. By <cit.> the generic framework (H-ab,p_0) has an equivalent realization (H-ab,p_1), which can be obtained by a flexing, and for which ||p_0(u)-p_0(v)||≠ ||p_1(u)-p_1(v)|| and ||p_0(a)-p_0(b)||= ||p_1(a)-p_1(b)||. Consider an edge xy∈ E-B. Since H is rigid, xy belongs to an R_2-circuit C' of H+xy. Moreover, ab is an R_2-bridge in G (as well as in its subgraph H+xy), and hence C' does not contain ab. Thus there is a rigid subgraph of H-ab (namely, C'-xy), which contains x and y. Hence the flexing does not change the distance between x and y. 
Therefore (G,p_1) is equivalent to (G,p_0). Since the distances between u and v are different in these realizations, it follows that {u,v} is globally loose in G. Let G=(V,E) be a rigid graph, let z∈ V with N_G(z)={x,y}, and let u,v∈ V-{z}. Then {u,v} is weakly globally linked in G-z+xy if and only if {u,v} is weakly globally linked in G. Let G_1=G-z+xy. Observe that G_1 is isomorphic to G/zx. Since G-z is rigid, we can use Lemma <ref> to deduce that if {u,v} is weakly globally linked in G_1, then {u,v} is weakly globally linked in G. To prove the other direction suppose that {u,v} is weakly globally linked in G. Then it is also weakly globally linked in G+xy. Since G+xy is the clique sum of G_1 and a copy of K_3, Lemma <ref> implies that {u,v} is weakly globally linked in G_1. A pair (a,b) of vertices of a 2-connected graph H=(V,E) is called a 2-separator if H-{a,b} is disconnected. Let G=(V,E) be a rigid graph with |V|≥ 4 and (a,b) be a 2-separator in G. Let C be a connected component of G-{a,b} and let V_0=V(C)∪{a,b}. Suppose that u,v∈ V_0. Then {u,v} is weakly globally linked in G if and only if {u,v} is weakly globally linked in G[V_0]+ab. If {u,v} is weakly globally linked in G, then it is easy to see, by using Lemma <ref>, that {u,v} is weakly globally linked in G[V_0]+ab. To prove that if {u,v} is weakly globally linked in G[V_0]+ab, then {u,v} is weakly globally linked in G, we use induction on |V|. If |V|=4, then we must have G=K_4-e and uv∈ E, so the statement is obvious. Suppose that |V|≥ 5. If there exists a (u,v)-rigid subgraph of G[V_0], then, since G[V_0]+ab can be obtained from G by a sequence of edge contractions, we can use Lemma <ref> to deduce that {u,v} is weakly globally linked in G. So in the rest of the proof we may assume that G[V_0] has no (u,v)-rigid subgraph. In particular, G[V_0] is not rigid. Hence, by the rigidity of G, it follows that {a,b} is not linked in G[V_0] and ab is an R_2-bridge in G[V_0]+ab. Since {u,v} is weakly globally linked in G[V_0]+ab, Lemma <ref> implies that there exists a (u,v)-rigid proper induced subgraph G'=(V',E') of G[V_0]+ab. Suppose that G' is vertex-minimal. By (<ref>) we obtain ab∈ E' and a,b⊂ V'. We consider three cases depending on the structure of G[V_0]-V'. Since G' is a proper induced subgraph, we have V_0-V'≠∅. See Figure <ref>. Case 1: G[V_0]-V' has a component Z with |V(Z)|≥ 2. By Lemma <ref> {u,v} is weakly globally linked in (G[V_0]+ab)/Z. Since G and G' are rigid, G-V(Z) is also rigid, and Z has at least two neighbours in G. Hence G/Z is rigid. Thus we obtain, by induction, that {u,v} is weakly globally linked in G/Z. By using that G-V(Z) is rigid, Lemma <ref> gives that {u,v} is weakly globally linked in G. Case 2: Each component of G[V_0]-V' is a singleton and there exists a vertex z∈ V_0-V' with d_G(z)=2. Let N_G(z)={x,y}. By Lemma <ref> {u,v} is weakly globally linked in (G[V_0]+ab)-z+xy. If {u,v}={a,b} and |V_0|=3, then {u,v} is weakly globally linked in G by Lemma <ref>. So we may assume that |V_0|≥ 4, and hence (a,b) is a 2-separator of the rigid graph G-z+xy. Hence {u,v} is weakly globally linked in G-z+xy by induction. By using that G is rigid, Lemma <ref> implies that {u,v} is weakly globally linked in G. Case 3: Each component of G[V_0]-V' is a singleton and for each z∈ V_0-V' we have d_G(z)≥ 3. We claim that for each z∈ V_0-V' and x,y∈ N_G(z) there is a rigid subgraph of G'-ab which contains x and y. 
To see this let w be another neighbour of z, different from x,y, and let G” be obtained from G'-ab by adding vertex z and edges zx,zy,zw. The three edges incident with z in G” cannot be R_2-bridges, since it would imply, by using the rigidity of G' and computing ranks, that G” is (u,v)-rigid, contradicting (<ref>). Thus there is an R_2-circuit C in G” containing z. Then C must contain x and y, too, and C-z is a rigid subgraph of G'-ab which contains x and y, as claimed. The minimality of G' implies that it has no (u,v)-rigid proper induced subgraph. Let (G[V_0]+ab,p) be a generic realization. By (the proof of) Lemma <ref> (G'-ab,p|_V') has an equivalent realization (G'-ab,q), for which ||p(u)-p(v)||≠ ||q(u)-q(v)||, ||p(a)-p(b)||= ||q(a)-q(b)||, and such that the distances between the linked pairs of G'-ab are the same in the two realizations. Then, since each pair of neighbours of every z∈ V_0-V' is linked in G'-ab, it follows that (G'-ab,q) can be extended to a realization (G[V_0]+ab,q') that is equivalent to (G[V_0]+ab,p). Hence {u,v} is globally loose in G[V_0]+ab, a contradiction. This completes the proof. We next extend Lemma <ref> to 2-connected graphs. Let G=(V,E) be a 2-connected graph and {u,v} be a linked pair of vertices of G. Suppose that (a,b) is a 2-separator of G. Let C be a connected component of G-{a,b}, and let V_0=V(C)∪{a,b}. Suppose that u,v∈ V_0. Then {u,v} is weakly globally linked in G if and only if {u,v} is weakly globally linked in G[V_0]+ab. If {u,v} is weakly globally linked in G, then it follows from Lemma <ref> that {u,v} is weakly globally linked in G[V_0]+ab. To prove the “if" direction suppose that {u,v} is weakly globally linked in G[V_0]+ab. Since {u,v} is a linked pair, there is a (u,v)-rigid induced subgraph G[U] of G. If {a,b}⊈U, then U is a subset of V_0 and G[V_0]+ab can be obtained from G by contracting edges which are not induced by U. Thus {u,v} is weakly globally linked in G by Lemma <ref>. So we may suppose that {a,b}⊆ U. Let A_1,…, A_k be the components of G-U contained in V_0, and let B_1, …, B_l be the components of G-U not contained in V_0. Observe that the rigidity of G[U] and the 2-connectivity of G imply that G/A_1/… /A_k/B_1/…/ B_k is rigid. Hence we have that {u,v} is weakly globally linked in G ⇔ {u,v} is weakly globally linked in G/A_1/… /A_k/B_1/…/ B_k ⇔ {u,v} is weakly globally linked in G[V_0]/A_1/…/A_k + ab ⇔ {u,v} is weakly globally linked in G[V_0]+ab, where the first and third equivalence follows from Lemma <ref> and the second equivalence follows from Lemma <ref>, using the rigidity of G/A_1/… /A_k/B_1/…/ B_k. The next lemma on the weak global linkedness of linked separating pairs follows from Lemma <ref> by putting {a,b}={u,v}. It can also be deduced from Lemma <ref> by using that there is some component C of G-{u,v} for which {u,v} is linked in G[V(C)∪{u,v}]. Let G=(V,E) be a 2-connected graph, and u,v∈ V be a linked pair of vertices for which (u,v) is a 2-separator in G. Then {u,v} is weakly globally linked in G. We use the following operation to eliminate 2-separators. Let G=(V,E) be a 2-connected graph, let (a,b) be a 2-separator in G, and let C be a connected component of G-{a,b}. We say that the graph G[V(C)∪{a,b}]+ab (when ab∉ E) or G[V(C)∪{a,b}] (when ab∈ E) is obtained from G by a cleaving operation along (a,b). The graph G̅ obtained from G by adding every edge ab, for which ab∉ E and (a,b) is a 2-separator of G, is called the augmented graph of G. 
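To make the cleaving operation and the augmented graph concrete (again an illustration on our part, intended for small 2-connected graphs and written for clarity rather than efficiency), the following Python sketch enumerates 2-separators by brute force and builds the corresponding graphs with networkx; the function names are ours.

import networkx as nx
from itertools import combinations

def two_separators(G):
    # All pairs (a, b) whose removal disconnects the 2-connected graph G.
    return [(a, b) for a, b in combinations(G.nodes, 2)
            if not nx.is_connected(G.subgraph(set(G.nodes) - {a, b}))]

def augmented_graph(G):
    # The augmented graph: add the edge ab for every 2-separator (a, b) with ab not in E.
    H = G.copy()
    H.add_edges_from((a, b) for a, b in two_separators(G) if not H.has_edge(a, b))
    return H

def cleave(G, a, b):
    # Cleaving along the 2-separator (a, b): for each component C of G - {a, b},
    # return G[V(C) + {a, b}] with the edge ab added when it is missing.
    rest = set(G.nodes) - {a, b}
    pieces = []
    for comp in nx.connected_components(G.subgraph(rest)):
        P = G.subgraph(comp | {a, b}).copy()
        if not P.has_edge(a, b):
            P.add_edge(a, b)
        pieces.append(P)
    return pieces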
The following lemma is easy to show by induction, using the cleaving operation. Let G=(V,E) be a 2-connected graph and let {u,v} be a non-adjacent vertex pair in G with κ_G(u,v)≥ 3. Then either (u,v) is a separating pair in G or there is a unique maximal 3-connected subgraph B of G̅ with {u,v}⊂ V(B). In the latter case the subgraph B can be obtained from G by a sequence of cleaving operations. Furthermore, uv∉ E(B), and if the pair {u,v} is linked in G then it is also linked in B. The subgraph B in Lemma <ref> is called the 3-block of {u,v} in G. We are ready to state the main result of this section: a complete characterization of the non-adjacent weakly globally linked pairs in a graph G. By Lemma <ref> and Lemma <ref> we may assume that {u,v} is linked and κ_G(u,v)≥ 3 (for otherwise {u,v} is globally loose). By Lemma <ref> we may also assume that G is 2-connected. Let G=(V,E) be a 2-connected graph and let {u,v} be a non-adjacent linked pair of vertices with κ_G(u,v)≥ 3. Then {u,v} is weakly globally linked in G if and only if either (i) (u,v) is a separating pair in G, or (ii) Clique(B,V_0) is globally rigid, where B is the 3-block of {u,v} in G, and B_0=(V_0,E_0) is a subgraph of B with u,v∈ V_0 such that B_0+uv is an R_2-circuit. The proof is by induction on the number h of vertex pairs x,y∈ V with κ_G(x,y)=2. If h=0, then B=G and (ii) holds by Theorem <ref>. Suppose that h≥ 1 and let (a,b) be a 2-separator in G. If {a,b}={u,v} then Lemma <ref> applies and (i) holds. Otherwise we can use Lemmas <ref>, <ref>, and induction, to complete the proof. See Figure <ref> for an illustration of Theorem <ref>. § CONCLUDING REMARKS §.§ Algorithmic aspects Theorem <ref> and its proof shows that weak global linkedness of a vertex pair {u,v} in a graph G=(V,E) can be tested in O(|V|^2) time, as efficient algorithms are available for each of the required subroutines. Basic graph algorithms can be used to test, in linear time, whether κ_G(u,v)≥ 3 holds and to find the maximal 2-connected block that contains u,v. After reducing the problem to the 2-connected case, the linear time algorithm of <cit.> can be applied to check whether (u,v) is a separating pair and (when it is not) to identify the 3-block B of {u,v}. (Note that B coincides with one of the so-called cleavage units of G.) Computing Clique(G,X) for a given X⊆ V is also easy. Testing whether {u,v} is linked, and (when it is linked) finding an R_2-circuit of G+uv containing uv can be done in O(|V|^2) time <cit.>. Within the same time bound, we can test whether a graph is globally rigid, see e.g. <cit.>. §.§ Higher dimensions Most questions concerning the higher dimensional versions of (weak) global linkedness are open. Partial results can be found in <cit.>. A fairly natural new question is whether the sufficient condition of weak global linkedness given in Theorem <ref> is also necessary for d≥ 3, in the sense that for every weakly globally linked pair {u,v} of a graph G there exists a (u,v)-rigid induced subgraph G[X] of G such that Clique(G,X) is globally rigid. We are not aware of any counter-examples. Finding extensions and stronger versions of our results is another promising research direction. Here we mention one example. If we use Lemma <ref> (suggested by D. 
Garamvölgyi) in place of Proposition <ref> in the proof, we obtain the following strengthening of Lemma <ref> to linked pairs: if G[V_0] is a subgraph of G in which {u,v} is linked, e∈ E-E(G[V_0]) and {u,v} is weakly globally linked in G/e, then {u,v} is weakly globally linked in G. Note that a pair {u,v} may be linked in a graph G_0 in ^d, d≥ 3, even if G_0 contains no (u,v)-rigid subgraph. For the definitions of the new notions appearing in the next proof see e.g. <cit.>. Let {u,v} be a linked pair in a graph G in ^d and let (G,p) be a generic realization of G in ^d. Then the set { ||q(u) - q(v)|| : (G,q) is equivalent to (G,p) } is finite. Suppose, for a contradiction, that there exists an infinite sequence of frameworks (G,q_i), i≥ 1, equivalent to (G,p), in which the distances ||q_i(u) - q_i(v)|| are pairwise different. We may assume that G is connected and q_i(u) is the origin for all i≥ 1. Then each (G,q_i) is in the interior of a ball of radius K, for some constant K. Thus, by choosing a subsequence, if necessary, we may assume that (G,q_i) is convergent, with limit (G,q). Since (G,q) is equivalent to (G,p), and (G,p) is generic, the two frameworks (G,p) and (G,q) have the same equilibrium stresses by <cit.>. In particular, the rank of the rigidity matrix of (G,q) is equal to the maximum (generic) rank of G. This fact, and the linkedness of {u,v} imply that the ranks of the rigidity matrices of (G+uv,q) and (G,q) are the same. So their kernels are the same, too. Thus every infinitesimal motion x:V→^d of (G,q) satisfies (q(u)-q(v))^T(x(u)-x(v)) = 0. By continuity this holds for all frameworks in a small enough neighbourhood of (G,q). Consider the frameworks q'_i = (q_{i+1} + q_i)/2. They converge to q, and the well-known averaging technique shows that x_i = (q_{i+1} - q_i) is an infinitesimal motion of q'_i for all i≥ 1 (for a proof see e.g. <cit.>). The same calculations show that, since ||q_{i+1}(u)-q_{i+1}(v)||≠ ||q_i(u)-q_i(v)||, we have (q'_i(u) - q'_i(v))^T(x_i(u)-x_i(v))≠ 0, a contradiction. §.§ Minimally globally rigid graphs A graph G=(V,E) is called minimally globally rigid in ^d if it is globally rigid in ^d and for every edge e∈ E the graph G-e is not globally rigid in ^d. Garamvölgyi and Jordán <cit.> proved that if G=(V,E) is minimally globally rigid in ^d and |V|≥ d+1, then |E|≤ (d+1)|V|-\binom{d+2}{2}. Moreover, as is noted in <cit.>, for every globally rigid graph G in ^d on at least d+1 vertices, and for every minimally rigid spanning subgraph G_0 of G, there exists a globally rigid spanning subgraph of G that contains G_0 and has at most (d+1)|V|-\binom{d+2}{2} edges. Furthermore, the authors conjecture that a minimally globally rigid graph in ^d is in fact R_{d+1}-independent, see <cit.>. The truth of this conjecture would imply that a minimally globally rigid graph G=(V,E) in ^d is not only sparse, but every subgraph of G is sparse: for each U⊆ V with |U|≥ d+1 we have |E(U)|≤ (d+1)|U|-\binom{d+2}{2}. This conjecture was verified for d=2 in <cit.>. Next we prove this upper bound for all d, in the special case when the subgraph induced by U is rigid. Let G=(V,E) be a minimally globally rigid graph in ^d. Suppose that U⊆ V, |U|≥ d+1 and G[U] is rigid. Then |E(U)|≤ (d+1)|U|-\binom{d+2}{2}. Let G_0=(U,E_0) be a minimally rigid spanning subgraph of G[U]. Since G is globally rigid, so is Clique(G,U). Thus, by the results of <cit.>, there is a globally rigid spanning subgraph G'=(U,E') of Clique(G,U) that contains G_0 and has at most (d+1)|U|-\binom{d+2}{2} edges.
Suppose, for a contradiction, that there is some edge e=uv∈ E(U)-E'. Note that G[U]-e is rigid. Then G' is a subgraph of Clique(G-e,U), and hence {u,v} is weakly globally linked in G-e by Theorem <ref>. Since e is critical in G, this contradicts Lemma <ref>. It follows that G[U] is a subgraph of G'; therefore |E(U)|≤ |E'|≤ (d+1)|U|-\binom{d+2}{2}. For d=2 we can extend Theorem <ref> to all subsets U⊆ V with |U|≥ d+1. As we noted above, this two-dimensional result is not new, as it follows from <cit.>. Here we give a new proof in order to illustrate how Theorem <ref> might be applied to attack the d-dimensional case. <cit.> Let G=(V,E) be a minimally globally rigid graph in ^2. Suppose that U⊆ V and |U|≥ 3. Then |E(U)|≤ 3|U|- 6. By Theorem <ref> the statement is true if G[U] is rigid. Suppose that G[U] is not rigid, that is, r_2(G[U])≤ 2|U|-4. We may assume that G[U] has no isolated vertices. It is well-known (see e.g. <cit.>) that for the collection G_i=(V_i,E_i), 1≤ i≤ k, of the maximal rigid subgraphs of G[U] we have ∑_i=1^k (2|V_i|-3)=r_2(G[U]). For an integer h≥ 2 let f(h) = 3h-6, if h≥ 3, and let f(h)=1 otherwise. Then we have |E(U)|= ∑_i=1^k |E_i| ≤∑_i=1^k f(|V_i|)≤∑_i=1^k 3/2(2|V_i|-3)=3/2 r_2(G[U])≤ 3|U|-6, where the first inequality follows from Theorem <ref>. § ACKNOWLEDGEMENTS This research has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the ELTE TKP 2021-NKTA-62 funding scheme. The first author was also supported by the Hungarian Scientific Research Fund grant no. K135421, and the MTA-ELTE Momentum Matroid Optimization Research Group. We thank Dániel Garamvölgyi for several useful remarks, and for suggesting Lemma <ref>. We also thank Csaba Király for his comments. AR L. Asimow and B. Roth, The rigidity of graphs, Trans. Amer. Math. Soc., 245 (1978), pp. 279-289. BJ A.R. Berg and T. Jordán, Algorithms for graph rigidity and scene analysis, Proc. 11th Annual European Symposium on Algorithms (ESA) 2003, (G. Di Battista and U. Zwick, eds) Springer LNCS 2832, pp. 78-89, 2003. Con R. Connelly, Generic global rigidity, Discrete Comput. Geom. 33:549-563 (2005). Conmerge R. Connelly, Combining globally rigid frameworks, Proc. of the Steklov Institute of Mathematics, 275, 191-198, 2011. coning R. Connelly and W. Whiteley, Global rigidity: the effect of coning, Discrete Comput Geom (2010) 43: 717–735. GJunitball D. Garamvölgyi and T. Jordán, Global rigidity of unit ball graphs, SIAM J. Discrete Math. 34:1, pp. 212-229, 2020. GJcccg D. Garamvölgyi and T. Jordán, Globally linked pairs in braced maximal outerplanar graphs, Proc. CCCG 2022, Toronto, August 2022, pp. 162-168. GJpartial D. Garamvölgyi and T. Jordán, Partial reflections and globally linked pairs in rigid graphs, arXiv:2305.03412, May 2023. GJmgr D. Garamvölgyi and T. Jordán, Minimally globally rigid graphs, European J. Combin., Vol. 108., 103626, 2023. Gluck H. Gluck, Almost all simply connected closed surfaces are rigid, Geometric topology (Proc. Conf., Park City, Utah, 1974), pp. 225–239. Lecture Notes in Math., Vol. 438, Springer, Berlin, 1975. GHT S. Gortler, A. Healy, and D. Thurston, Characterizing generic global rigidity, American Journal of Mathematics, Volume 132, Number 4, August 2010, pp. 897-939. hend B. Hendrickson, Conditions for unique graph realizations, SIAM J. Comput. 21 (1992), no. 1, 65-84. HT J.E. Hopcroft and R.E. Tarjan, Dividing a graph into triconnected components, SIAM J. Comput.
2 (1973), 135–158. JJconnrig B. Jackson and T. Jordán, Connected rigidity matroids and unique realizations of graphs, J. Combin. Theory Ser. B, Vol. 94, 1-29, 2005. JJS B. Jackson, T. Jordán, and Z. Szabadka, Globally linked pairs of vertices in equivalent realizations of graphs, Discrete Comput. Geom., Vol. 35, 493-512, 2006. JJS2 B. Jackson, T. Jordán, and Z. Szabadka, Globally linked pairs of vertices in rigid frameworks, in: Rigidity and Symmetry, Fields Institute Communications, Vol. 70, R. Connelly, A. Ivic Weiss, W. Whiteley (Eds.) 2014, pp. 177-203. JmemoirsT. Jordán, Combinatorial rigidity: graphs and matroids in the theory of rigid frameworks. In: Discrete Geometric Analysis, MSJ Memoirs, vol. 34, pp. 33-112, 2016. JKTT. Jordán, Cs. Király, and S. Tanigawa, Generic global rigidity of body-hinge frameworks, J. Combin. Theory, Series B 117, 59-76, 2016. JW T. Jordán and W. Whiteley, Global rigidity, in J. E. Goodman, J. O'Rourke, and C. D. Tóth (eds.), Handbook of Discrete and Computational Geometry, 3rd ed., CRC Press, Boca Raton, pp. 1661-1694, 2018. JT T. Jordán and S.Tanigawa, Global rigidity of triangulations with braces, J. Comb. Theory Ser. B., 136, pp. 249-288 (2019). KM Cs. Király and A. Mihálykó, Fast algorithms for sparsity matroids and the global rigidity augmentation problem, Egerváry Research Group, Budapest, TR-2022-05, 2022. laman G. Laman, On graphs and rigidity of plane skeletal structures, J. Engineering Math. 4 (1970), 331-340. oxley J.G. Oxley, Matroid theory, Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1992. xii+532 pp. Saxe J.B. Saxe, Embeddability of weighted graphs in k-space is strongly NP-hard, Technical report, Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA, 1979. SW B. Schulze and W. Whiteley, Rigidity and scene analysis, in J.E. Goodman, J. O'Rourke, C.D. Tóth (eds.), Handbook of Discrete and Computational Geometry, 3rd ed., CRC Press, Boca Raton, 2018. Tani S. Tanigawa, Sufficient conditions for the global rigidity of graphs, J. Combin. Theory, Ser. B., Vol. 113, July 2015, Pages 123-140.
http://arxiv.org/abs/2307.05949v2
20230712063143
Newell's theory based feature transformations for spatio-temporal traffic prediction
[ "Agnimitra Sengupta", "S. Ilgin Guler" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Deep learning (DL) models for spatio-temporal traffic flow forecasting employ convolutional or graph-convolutional filters along with recurrent neural networks to capture spatial and temporal dependencies in traffic data. These models, such as CNN-LSTM, utilize traffic flows from neighboring detector stations to predict flows at a specific location of interest. However, these models are limited in their ability to capture the broader dynamics of the traffic system, as they primarily learn features specific to the detector configuration and traffic characteristics at the target location. Hence, the transferability of these models to different locations becomes challenging, particularly when data is unavailable at the new location for model training. To address this limitation, we propose a traffic flow physics-based feature transformation for spatio-temporal DL models. This transformation incorporates Newell's uncongested and congested-state estimators of traffic flows at the target locations, enabling the models to learn broader dynamics of the system. Our methodology is empirically validated using traffic data from two different locations. The results demonstrate that the proposed feature transformation improves the models' performance in predicting traffic flows over different prediction horizons, as indicated by better goodness-of-fit statistics. An important advantage of our framework is its ability to be transferred to new locations where data is unavailable. This is achieved by appropriately accounting for spatial dependencies based on station distances and various traffic parameters. In contrast, regular DL models are not easily transferable as their inputs remain fixed. It should be noted that due to data limitations, we were unable to perform spatial sensitivity analysis, which calls for further research using simulated data. § INTRODUCTION Short-term traffic forecasts play a crucial role in supporting operational network models and facilitating Intelligent Transportation Systems (ITS) applications. Researchers have long recognized the importance of accurately predicting future traffic conditions for proactive traffic management <cit.> and comprehensive traveler information services <cit.>. These forecasts enable ITS applications to provide drivers with precise and timely information, allowing them to make informed decisions. As a result, this enables effective congestion mitigation, reduced travel time, and an enhanced overall travel experience.
The success of these strategies relies heavily on the quality and accuracy of the traffic forecasts, which are obtained through sophisticated time-series forecasting models. The prediction of traffic conditions, including flow, occupancy, and travel speed, is essentially a time-series forecasting problem that relies on input data from a sufficient number of spatially distributed sensors throughout the network. Therefore, in addition to considering temporal dependencies, it is crucial to incorporate the spatial correlations among various sensor data within the network to effectively capture the intricate dynamics of traffic flow. Various parametric techniques have been used to model temporal dependence in traffic time series, including historical average algorithms, smoothing techniques, and autoregressive integrated moving average (ARIMA) models <cit.>. However, studies have indicated that ARIMA models have limitations in capturing complex traffic patterns, especially during congested conditions <cit.>. Additionally, the development of multi-variable prediction models incorporating flow, speed, and occupancy for traffic forecasting has yielded mixed results <cit.>. These findings highlight the need for more advanced and effective modeling approaches in traffic prediction. Alternatively, state-space models have been shown to be an excellent foundation for modeling traffic data since they are multivariate in nature and can describe simpler univariate time series <cit.>. Following advances in computer efficiency and capacity to handle large data quantities, non-parametric techniques, which, unlike parametric methods, do not specify any functional form, were used to model traffic data with greater transferability and robustness across datasets <cit.>. These approaches, in contrast to parametric methods, have more degrees of freedom, allowing them to better adapt to non-linearities and capture spatio-temporal features of traffic. For small-scale traffic scenarios, methods like nearest neighbors <cit.>, support vector machine <cit.>, and Bayesian network <cit.> have proven to be beneficial. However, these methods become limited in their ability to extract effective features for large-scale traffic analysis <cit.>. In contrast, neural networks (NN) and deep learning (DL) can use a multi-layer architecture to capture complex relations <cit.> in traffic data. Use of NNs to capture temporal patterns in traffic time series has been demonstrated in <cit.>. Against this backdrop, numerous variants of NN were developed, such as backpropagation neural networks <cit.>, the modular neural network <cit.> and radial basis function neural network <cit.>. However, for sequential data like traffic time series, recurrent neural networks (RNN) <cit.> and their variants like long short-term memory (LSTM) <cit.> have been specifically introduced to preserve temporal correlations between observations to predict future states. For example, Ma et al. <cit.> used an LSTM architecture to make short-term travel speed predictions. Yu et al. <cit.> applied LSTM and autoencoder to capture the sequential dependency for predicting traffic under extreme conditions, particularly for peak-hour and post-accident scenarios. Cui et al. <cit.> proposed a deep stacked bidirectional and unidirectional LSTM network for traffic speed prediction. To account for the spatial aspects of traffic, data from multiple locations across the network need to be utilized in predicting future traffic states.
For instance, Stathopoulos and Karlaftis <cit.> incorporated data from upstream detectors to improve predictions at downstream locations using a multivariate time-series state-space model. DL models like deep belief networks <cit.> and stacked autoencoder <cit.> have also been considered for multi-sensor information fusion - however, because of the underlying structure of the input data, the spatial relationship could not be efficiently captured with these models. To this end, convolutional neural network (CNN) <cit.> has emerged as one of the most successful deep NNs to model topological locality or spatial correlation by use of filters or kernels to extract local features. For example, <cit.> developed a DL model, ST-ResNet, which used convolutions for city-wide crowd flow prediction. However, CNNs are particularly successful when dealing with data in which there is an underlying Euclidean structure. For generalized, non-Euclidean data, graph convolutional neural networks (GCNN) <cit.> have been used for traffic network modeling and prediction tasks <cit.>. However, while these studies explicitly model temporal or spatial dependency, they do not account for their interaction effects. Recent studies proposed integrating CNN and LSTM <cit.> to jointly model spatial and temporal patterns in traffic. Several other hybrid models, such as Sparse Autoencoder LSTM <cit.>, DeepTransport <cit.> (a hybrid CNN and RNN with an attention mechanism), and ST-TrafficNet <cit.>, have recently been demonstrated to efficiently capture spatio-temporal features of traffic data. In modelling traffic data, DL algorithms have shown considerable potential. The success of data-driven models can be linked to the data quality and quantity. However, limited data availability poses a significant restriction to model development for many freeway corridors. Transfer learning (TL) is a promising method for avoiding data scarcity, training, and deployment issues. In this method, a model learned for one task is reused and/or altered for a similar task. TL is commonly employed for image classification, sentiment analysis, and document classification <cit.>, but it has received less attention in the traffic forecasting domain <cit.>. Further, physics-informed models may uncover additional dynamics of the system that might not be observable from the limited data. Physics-informed deep learning (PIDL) models usually comprise a model-driven component (a physics-informed NN for regularization) and a data-driven component (a NN for estimation) to integrate the advantages of both components. Using this framework, recent studies <cit.> have demonstrated the superiority of PIDL in traffic state estimation over purely data-driven or traffic flow physics-based approaches. This paper presents another approach to incorporating traffic flow physics into a DL model, using a physics-based feature enhancement to perform traffic flow prediction along a freeway corridor. Our approach is particularly advantageous due to its ability to be transferred to new locations where data is not available. The remainder of the paper is organised as follows: first, we present the conceptual background of Newell's solution to the Lighthill-Whitham-Richards (LWR) model and the DL model, followed by the physics-based feature enhancement to the model. Next, the problem setup is described, including the modelling results. Finally, some concluding remarks are presented.
§ BACKGROUND In this section, we provide a brief introduction to Newell's simplified solution to the Lighthill-Whitham-Richards (LWR) model - which forms the basis of the physics-based feature transformation - followed by an overview of the DL model considered. §.§ Newell's simplified theory estimations Traffic state evolution on a roadway using the basic principle of the LWR continuum model requires identification of `shockwaves' and their interactions on the time-space diagram. A simplified approach - Newell's solution <cit.> - can be used to estimate traffic dynamics at a particular location in terms of cumulative vehicle counts, i.e., the cumulative number of vehicles observed at that location. Using vehicle counts at a detector location, this method aims to estimate cumulative vehicle counts at some other location along the homogeneous freeway assumed to exhibit a triangular fundamental diagram. Cumulative vehicle counts at detector stations can be appropriately translated using fundamental traffic parameters (defined in Table <ref>) to give an estimate of cumulative counts at the target site consistent with traffic flow theory. The relevant components of Newell's solution are summarized below. The basic idea is to determine the change in vehicle counts along characteristics of free-flow from upstream and congested flow from downstream. Therefore, this approach assumes the upstream is in a free-flow state, whereas the downstream is in congestion. Newell's simplified theory states that the real state at the target site would be determined as the minimum of what would be expected along the free-flow characteristic and the congested characteristic. §.§.§ Along the free-flow characteristic Assume a point A in a free-flow state (F) with vehicle count N_A. Consider a target location B downstream of location A. As we move from A to B, which lies on the characteristic that emanates from A, the change in vehicle counts is given by Δ N_A→ B:
Δ N_A→ B = (∂ N/∂ x)Δ x + (∂ N/∂ t)Δ t
Δ N_A→ B/Δ x = ∂ N/∂ x + (∂ N/∂ t)(Δ t/Δ x) = -k + q/v_f
Δ N_A→ B = 0 (since q = kv)
From Equation <ref>, the change in vehicular count along the interface or signal travelling at free-flow speed is zero. However, the time for the signal to travel from A to B is d_A→ B/v_f, where d_A→ B is the distance between locations A and B. Therefore, the counts at B as a function of counts at A can be given by:
N_f(t,B) = N(t - d_A→ B/v_f, A)
Conversely, if the location B is upstream of location A, and the free-flow regime persists, then Equation <ref> can be suitably modified as
N_f(t,B) = N(t + d_A→ B/v_f, A)
§.§.§ Along the congested flow characteristic Consider a target location Y upstream of location X, which has a known congested state (C) with vehicle counts N_X. As we move from X to Y, which lies on the characteristic that emanates from X, the change in vehicle counts is given by Δ N_X→ Y:
Δ N_X→ Y = (∂ N/∂ x)Δ x + (∂ N/∂ t)Δ t
Δ N_X→ Y/Δ x = ∂ N/∂ x + (∂ N/∂ t)(Δ t/Δ x) = -k - q/w = -k_j (since q = w(k_j - k) in the congested state)
Δ N_X→ Y = -Δ x · k_j = d_X→ Y k_j
Therefore, in the case of congestion, there is a finite change in the vehicle counts equal to d_X→ Y k_j that occurs in time d_X→ Y/w. Thus, the counts at Y as a function of counts at X is given by:
N_c(t,Y) = N(t - d_X→ Y/w, X) + d_X→ Y k_j
The free-flow and congested estimators of cumulative counts at a target location using cumulative counts from another location along the homogeneous roadway are represented by Equations <ref>, <ref> and <ref>, which we suitably utilize in designing the physics-based model.
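As an illustration of how these estimators are applied to detector data (a sketch on our part rather than the exact implementation; it assumes numpy, cumulative counts recorded at 5-minute intervals, distances in miles, speeds in mi/hr, and rounds the signal travel times to whole observation intervals), the shifted counts in the equations above can be computed as follows, with flows recovered by differencing the cumulative counts.

import numpy as np

DT_HOURS = 5.0 / 60.0  # 5-minute observation interval

def _shift(series, delay_hours):
    # Evaluate N(t - delay) on a discretely sampled cumulative-count series,
    # rounding the delay to whole observation intervals and padding the boundary.
    steps = int(round(delay_hours / DT_HOURS))
    out = np.empty(len(series), dtype=float)
    if steps > 0:
        out[steps:] = series[:-steps]
        out[:steps] = series[0]
    elif steps < 0:
        out[:steps] = series[-steps:]
        out[steps:] = series[-1]
    else:
        out[:] = series
    return out

def newell_freeflow(N_A, d_AB, v_f, target_downstream=True):
    # Free-flow estimator: N_f(t, B) = N(t - d/v_f, A) if B is downstream of A,
    # and N(t + d/v_f, A) if B is upstream of A.
    delay = d_AB / v_f if target_downstream else -d_AB / v_f
    return _shift(N_A, delay)

def newell_congested(N_X, d_XY, w, k_j):
    # Congested estimator for a target Y upstream of X:
    # N_c(t, Y) = N(t - d/w, X) + d * k_j.
    return _shift(N_X, d_XY / w) + d_XY * k_j

def flows_from_counts(N):
    # Recover interval flows as first differences of the cumulative counts.
    return np.diff(N, prepend=N[0])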
§.§ Spatio-temporal model In this study, we employ a combination of convolutional neural network (CNN) <cit.> and long short-term memory (LSTM) <cit.>, referred to as CNN-LSTM, as the baseline model for jointly capturing the spatio-temporal aspects of traffic states. CNN and LSTM extract information from the input data from two different perspectives: the CNN learns time-invariant spatial characteristics, while the LSTM learns short- and long-term temporal patterns. Specifically, the convolutional layer in the model helps learn an internal representation of a two (or higher)-dimensional input through a feature learning process using kernel functions or filters. Notably, the convolution allows the model to learn features that are invariant across the time dimension. These features are then passed through an activation function to introduce non-linearities into the mapping function. Other layers, such as pooling layers, are also used to reduce the number of parameters, while dropout layers prevent the model from overfitting. As a result, it generates more accurate feature representations, allowing the LSTM layers to learn temporal patterns with greater accuracy. LSTM uses a feedback mechanism and several gating mechanisms, where the output from a previous time step is fed as input to the current step so that selective information from the past can propagate into future states and persist, which makes it suitable for capturing the temporal evolution of traffic states. A schematic of the model architecture considered in the study is shown in Figure <ref>. The architecture comprises a convolutional layer that processes the spatio-temporal input to extract relevant features using filters. The input to the model is structured as an N × n array that is constructed using data from N fixed-location stations, with each row representing flows from a particular station over a period of time, n. Notably, the pooling layers were not utilized since the spatial dimension of traffic data is limited. After the convolutional layer, a flattening layer is employed to reshape the outputs appropriately. The resulting feature outputs from the convolutional operation are then fed into an LSTM, which generates outputs through a sequence of densely connected layers. § METHODOLOGY §.§ Feature transformation DL models learn the spatial and temporal characteristics specific to a detector configuration for which they have been trained, using the traffic flows from neighboring locations. In other words, they are trained to map temporal patterns from the given spatial configuration to the target location. However, this mapping is specific to the exact location of detectors from which data is used (e.g., the distance between input and target detectors). Therefore, these models are difficult to adapt to varied configurations and hence cannot perform suitably, i.e., they are not transferable. To improve the transferability of the models, we aim to utilize knowledge from traffic flow theory and propose a physics-based feature transformation of the model inputs. Notably, we explicitly assume that the data available at the transfer location is limited and hence cannot be used to train a new model. In our proposed approach, instead of using the flow data from the input detectors, the inputs are modified using Newell's transformations. The specific modification depends on whether the target location is upstream or downstream of the location from which input data is utilized.
Hence, it is expected to learn generalized features independent of the distance between the detectors and hence be suitable for transfer learning. The physics-based modified version in this study includes three approaches: `Physics FC', `Physics FF' and `Hybrid'. In all three approaches, it is assumed that free-flow conditions prevail at an upstream station (e.g., location A) and the cumulative counts for the target or transfer station (X) are estimated using Equation <ref> based on the cumulative counts at station A. The three approaches differ in how they treat input from a downstream station B. The `Physics FC' approach assumes that congestion prevails at a station downstream of the target or transfer stations, and hence uses Equation <ref> on the cumulative counts at station B to determine the cumulative counts for the target or transfer station, X. To the contrary, the `Physics FF' approach assumes that free-flow prevails at the downstream station, and hence uses Equation <ref> on the cumulative counts at station B to determine the cumulative counts for the target or transfer station, X. The `Hybrid' approach, on the other hand, does not make explicit assumptions regarding the prevailing conditions (free-flow or congestion) at the station downstream of the target or transfer locations. Instead, it considers two inputs for the target or transfer station, X – one calculated using Equation <ref> based on the cumulative counts at station B, and the other calculated using Equation <ref> based on the cumulative counts at station B. This approach utilizes both pieces of information to estimate the cumulative counts for the target or transfer station, thereby combining aspects of both free-flow and congestion conditions. Once the cumulative counts are computed for each case, the corresponding flows are then calculated by taking the differences between consecutive observations of the cumulative counts. It is important to note that accurate estimation of fundamental traffic parameters such as free-flow speed (v_f), congestion wave speed (w), and jam density (k_j) is crucial for the proper functioning of these transformations. The estimation of these parameters will be discussed in the following section. §.§ Estimation of traffic parameters using detector data In our methodology, we demonstrate the estimation of traffic parameters using data from multiple loop detectors. For a stretch of freeway, we combine data from multiple detectors to obtain mean estimates of these parameters. This aggregation of detector data assumes homogeneity within the freeway section, implying that the traffic parameters do not significantly vary within the section. This assumption aligns with the assumptions made in Newell's solutions. However, it is important to note that for a larger number of detectors with wider spatial coverage, this assumption may not hold true. The data provided includes measurements of flow (q), occupancy (o), and speed (v) recorded at 5-minute intervals from each station. Assuming a triangular fundamental diagram, we use the fundamental equation of traffic, q = k × v, to calculate the corresponding densities (k). To determine the capacity (q_c) and free-flow speed (v_f), we consider the 95^th percentile values of flow and speed, respectively. This is done because the maximum flow and speed values do not indicate a stable state and persist for only a short period of time. The critical density (k_c) is then evaluated using the fundamental equation of traffic as k_c = q_c/v_f. 
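A compact sketch of this estimation step is given below (our illustration; it assumes numpy, flows already converted to veh/hr and speeds in mi/hr, and it pools the per-station estimates by simple averaging, consistent with the homogeneity assumption above).

import numpy as np

def estimate_station_parameters(flow_vph, speed_mph):
    # 95th-percentile estimates of free-flow speed and capacity from 5-minute data,
    # with the critical density obtained from the fundamental relation q = k * v.
    v_f = np.percentile(speed_mph, 95)                  # free-flow speed (mi/hr)
    q_c = np.percentile(flow_vph, 95)                   # capacity (veh/hr)
    k_c = q_c / v_f                                     # critical density (veh/mi)
    density = flow_vph / np.maximum(speed_mph, 1e-6)    # k = q / v for each observation
    return {"v_f": v_f, "q_c": q_c, "k_c": k_c, "density": density}

def pool_section_parameters(per_station):
    # Mean estimates over the detectors of a homogeneous freeway section.
    return {key: float(np.mean([p[key] for p in per_station]))
            for key in ("v_f", "q_c", "k_c")}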
These parameters provide essential insights into the traffic conditions and are fundamental for our subsequent analysis. Figure <ref> illustrates the flow versus density relationship for four stations along the SR04 California highway. The freeway operates in free-flow conditions for most of the day, with congested states appearing only during peak hours. Therefore, congested states (k>k_c) are not easily observable in the flow-density curve, which does not allow us to directly estimate the full congested branch of the fundamental diagram, including congestion wave speed (w) and jam density (k_j). Hence, we assume a fixed value of w=14 for our analysis. § DATA The traffic data used in this study for model training and evaluation is obtained from the California Department of Transportation's Performance Measurement System (PeMS), which is widely used in similar research <cit.>. The PeMS captures real-time traffic data from sensors along the freeway and ramps at 30 second intervals, which are aggregated every 5 minutes. The dataset used for model training and evaluation consists of flow, occupancy, and speed, collected from a series of vehicle detection sensors (VDS). For this work, we use two datasets - (1) Dataset 1: California SR04 Delta Highway, collected over a period of 56 consecutive days from July 1, 2021 to August 25, 2021 including weekdays and weekends, and (2) Dataset 2: California Interstate-05 NB in District 11, collected for one year in 2019. In Dataset 1, the distances between consecutive stations are 0.5 mi, 0.3 mi, and 0.5 mi, respectively. On the other hand, for Dataset 2, the distances between consecutive stations are 1.6 mi, 0.6 mi, and 0.9 mi. These datasets provide an understanding of how the model can scale spatially by considering the proximity and distances between monitoring stations along the freeways. Figures <ref> and <ref> depict the maps of the station configurations, illustrating the locations and distances between monitoring stations for both Dataset 1 (California SR04 Delta Highway) and Dataset 2 (California Interstate-05 NB in District 11). In addition, Figure <ref> displays the temporal patterns of flows at various stations in both Dataset 1 and Dataset 2. It is important to note that on- and off-ramp traffic volumes were not always available. Hence, the vehicle inflows and outflows associated with on-ramps and off-ramps could not be adjusted to maintain vehicle conservation along the freeway. As a result, the analysis and modeling in this study focus solely on the traffic data collected from the detection sensors along the main freeway lanes, without accounting for the ramp movements. § RESULTS AND DISCUSSIONS The goal is to predict traffic flow 5 minutes into the future at a station where data is not available by utilizing input data from neighboring detectors for the last 50 minutes for Dataset 1 and 100 minutes for Dataset 2. Sensitivity analysis on the performance for longer prediction horizons (from 5 minutes to 25 minutes) is discussed later. §.§ Prediction scenario To evaluate and compare the performance of the models, different scenarios are considered based on the relative positions of the target and transfer stations. This relative positioning determines which of Newell's transformations is applied to the target and transfer locations.
Table <ref> presents these scenarios, labeled A1, A2, B1, B2, C1, C2, D1, and D2, where the model's performance is assessed at both the target (where it has been trained) and the transfer locations without further retraining. It is assumed that up-to-date data is available at the two source locations, while no data is available at the target or transfer locations for prediction purposes. For each scenario, a regular model and three physics-modified models are considered. Different scenarios are considered to better understand when the proposed models can improve predictions. For instance, let us consider a scenario where the model is trained using traffic information from an upstream and a downstream station to predict flows at an intermediate target location (Scenarios A1 and A2). In this case, the physics-modified feature inputs consist of Newell's uncongested flow estimate (from the upstream station) and either an uncongested or congested flow estimate (from the downstream station). If the same model is applied to another intermediate location within the bounds of the upstream and downstream stations, the model will still receive similar flow estimates as inputs. However, in other scenarios (e.g., D1 or D2), the models are trained to predict flow at an upstream (or downstream) location relative to both stations but are transferred to predict flow at a downstream (or upstream) location. Since the input features in the target and transfer domains differ significantly, it is important to investigate the model's performance under these circumstances as well. It is worth noting that Case B always utilizes free-flow features at both the target and transfer locations, since both source stations are upstream of the target and transfer locations. Therefore, we have performance recorded only for the `Physics FF' method in this case. Additionally, a hybrid model could not be employed for Case D since the dimensions of the input features in the target and transfer locations would be different, making it challenging to maintain consistency in the model architecture. §.§ Evaluation metrics We evaluate the model performances for both single- and multi-step prediction horizons by comparing the prediction mean with the corresponding true flow values using three metrics: root mean squared error (RMSE), mean absolute percentage error (MAPE) and R^2 as defined below. RMSE=√((1/N)∑_i=1^N [y_i-ŷ_i]^2), MAPE=(100%/N)∑_i=1^N |(y_i-ŷ_i)/y_i|, and R^2=1-(∑_i=1^N [y_i-ŷ_i]^2)/(∑_i=1^N [y_i-y̅]^2), where y_i represents the true value of observation i, ŷ_i is the predicted value of y_i, and y̅ is the mean of the observed values, for i=1,2,…,N. Both RMSE and mean absolute error (MAE) measure the error of the predictions; however, RMSE is often preferred due to its higher penalization of outliers compared to MAE, where all errors are weighted equally. On the other hand, R^2 measures the performance of the model in terms of the proportion of the variance in the data that can be explained by the regression model. §.§ Model training In this study, two different model architectures are used due to variations in the volume of the two datasets. For Dataset 1, the model architecture comprises a 2-dimensional convolutional layer with 12 filters and a kernel size of (3, 2), activated by the rectified linear unit (ReLU) activation function. The outputs from the convolutional layer are then flattened and reshaped appropriately to be fed into two LSTM layers with 10 and 6 units, respectively. Finally, a single-unit dense layer is utilized to predict flows.
On the other hand, for Dataset 2, the model architecture consists of a 2-dimensional convolutional layer with 16 filters and a kernel size of (3, 2), activated by the rectified linear unit (ReLU) activation function. This is followed by two LSTM layers with 10 and 6 units, respectively. The outputs from the LSTM layers are then passed through a dense layer with 6 units, activated by the ReLU activation function. Finally, the prediction layer is added to generate the traffic flow predictions for the specified future time point. To ensure generalizability and prevent overfitting, the models are trained, validated, and tested using three distinct sets. The dataset is partitioned into three parts: 60% for model training, 15% for validation, and 25% for testing. The model parameters are tuned throughout the training process based on their performance on the validation set. The mean squared error (MSE) loss function is minimized during the training process, and the model with the lowest validation error is selected as the final model. For the models trained on Dataset 1, a batch size of 10 and a temporal lag of 10 are used. In contrast, the models trained on Dataset 2 use a batch size of 20 and a temporal lag of 20. The Adadelta <cit.> optimizer is employed with a learning rate of 0.10, a rho value of 0.95, and an epsilon value of 1e-7 to train the models. §.§ Dataset 1: California SR04 Prediction performances of the models trained on Dataset 1 are provided in Tables <ref>, <ref> and <ref> corresponding to different performance metrics – RMSE, R^2 and MAPE. The performance results for Cases A, C, and D demonstrate that Newell's assumption of congested states prevailing downstream does not hold true, or that the interference of ramp flows is too large for Newell's conservation assumptions to hold. As a result, the `Physics FC' method, which relies on this assumption, does not outperform the regular model either at the target or transfer locations in the cases where the source station is downstream of either the target or transfer location, or both. In contrast, we find that the assumption of free-flow states performs better in the majority of these scenarios. The hybrid approach demonstrates superior performance in both Cases A and C compared to other approaches. In Cases A1 and A2, the feature inputs consist of three dimensions: one from the upstream station and two from the downstream stations. This configuration allows the model to leverage the free-flow characteristics from both upstream and downstream stations, as well as the congested information from downstream, thereby enhancing its prediction capabilities. Similarly, in Cases C1 and C2, the input feature size is expanded to four dimensions, with two dimensions from both the upstream and downstream stations. This enables the model to incorporate the combined information from both directions, taking into account the variations in traffic conditions along the freeway section. By considering the inputs from both upstream and downstream stations, the model can better capture the complex interactions and correlations between different segments of the freeway, leading to improved prediction accuracy. §.§ Dataset 2: Interstate-05 NB Here, we present the modeling results on Dataset 2 in Table <ref>, which displays the RMSE values. Other evaluation measures exhibit similar trends and are not included in the table for brevity.
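For illustration, the following sketch assembles the Dataset 1 architecture and training configuration described above: a 2-D convolutional layer with 12 filters and a (3, 2) kernel with ReLU activation, two LSTM layers with 10 and 6 units, a single-unit dense output, MSE loss, and the Adadelta optimizer with learning rate 0.10, rho 0.95, and epsilon 1e-7. The use of TensorFlow/Keras, the number of input series, the exact reshaping of the convolutional output, and the number of epochs are assumptions made for the sketch rather than details taken from the study.

import tensorflow as tf
from tensorflow.keras import layers, models

N_LAGS, N_SERIES = 10, 3   # assumed: 10 past 5-minute steps and 3 input series

model = models.Sequential([
    layers.Input(shape=(N_LAGS, N_SERIES, 1)),
    layers.Conv2D(12, (3, 2), activation="relu"),           # 12 filters, kernel (3, 2)
    # Flatten the spatial axis while keeping a (shortened) temporal axis for the LSTMs.
    layers.Reshape((N_LAGS - 2, (N_SERIES - 1) * 12)),
    layers.LSTM(10, return_sequences=True),
    layers.LSTM(6),
    layers.Dense(1),                                         # single-step flow prediction
])

model.compile(
    optimizer=tf.keras.optimizers.Adadelta(learning_rate=0.10, rho=0.95, epsilon=1e-7),
    loss="mse",
)
# model.fit(X_train, y_train, batch_size=10, epochs=100,     # batch size 10 for Dataset 1
#           validation_data=(X_val, y_val))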
In this larger dataset, we also evaluate the accuracy of the model predictions separately during free flow and congestion to better understand the benefits of the proposed approaches. To distinguish between free-flow and congestion states, we divide the daily traffic into two categories based on a speed threshold: if the speed drops below 50 mph, that location is assumed to be in congestion (shown as blue dashed lines in Figure <ref>). The results for Cases A1 and A2, where both the target and transfer locations are in between the source detector stations, again indicate that the use of Newell's modification for congestion conditions does not improve predictions, likely due to the lack of congestion or to ramp flows interfering with the traffic dynamics. This finding aligns with the observations from Dataset 1. Interestingly, assuming free-flow conditions leads to only minor improvements in some cases. However, when we employ the hybrid approach that combines information from both upstream (free-flow) and downstream (congested and uncongested) stations, we observe significant improvements in model performance. This hybrid approach outperforms both the regular model and other versions of the physics-based approaches. It is important to note that during the model training, the optimization is still based on the combined mean squared error (MSE) loss, meaning that the model does not individually optimize for losses during free-flow and congestion. The superior performance of the hybrid approach highlights the effectiveness of incorporating both free-flow and congestion information to enhance the model's predictive capabilities. In Case C1, we observe that Newell's assumption of congestion prevailing downstream (`Physics FC') works well at the target location (S2), where only congestion characteristics from downstream locations are used during model training. However, when this assumption is transferred to S1, it does not perform well. In contrast, the assumption of free-flow conditions (`Physics FF') shows poor performance at the target location but demonstrates better performance at the transfer location in Case C1. Surprisingly, the hybrid approach also shows poor overall performance in both the target and transfer locations. However, it still provides better performance during congestion at the target location. The discrepancy in performance can be attributed to location-specific differences, such as variations in traffic patterns, congestion levels, or spatial configurations of the freeway section. In this case, the target station for Case C1 (S2) is located 0.6 mi and 1.5 mi away from the source stations, while the transfer location for Case C1 (S1) is located 2.2 mi and 3.1 mi away. These differences in spatial distances can significantly impact the traffic dynamics and patterns observed at each location. To further support this observation, we examine the flow vs. speed trends for both S2 (target in Case C1) and S1 (transfer in Case C1), as depicted in Figure <ref>. When comparing these trends, we find that S1 operates predominantly in free-flow conditions with only short-lived congested states. This aligns with the better transferability of the FF model, which leverages the free-flow characteristics, as opposed to the hybrid model that combines both free-flow and congested information. In Case C2, we observe a different pattern compared to Case C1. The hybrid approach performs better than both the regular model and the `Physics FF' approach in both the target and transfer locations.
The `Physics FF' model also demonstrates good performance, consistent with the assumption that the target station mostly operates in free-flow conditions. However, the hybrid approach excels by accurately accounting for the dependencies between the source and target station. In Cases B1 and B2, we observe that while the regular model performs better at the target location, its transferability is compromised, indicating that it does not generalize well to different locations. On the other hand, the physics-based FF model demonstrates better transferability in these cases. This suggests that the `Physics FF' model captures the underlying dynamics of traffic more effectively, allowing it to perform well even at locations it has not been explicitly trained on. In Cases D1 and D2, we consistently observe that the `Physics FF' model outperforms the `Physics FC' model, indicating that free-flow characteristics play a significant role in these scenarios. However, it is important to note that there are some instances where the regular model shows minor advantages. These inconsistencies in performance could be attributed to several factors. Firstly, the spatial distances between stations in this dataset are larger, which may violate the assumption of homogeneous traffic parameters over the entire section. Secondly, the presence of major interchanges and ramps between the mainline detectors can significantly impact traffic patterns. When ramps experience high volumes of vehicles trying to enter or exit the highway, they can disrupt the smooth flow of traffic and lead to localized congestion or bottlenecks on the mainline. For example, congestion on exit ramps can lead to queue spillbacks, causing disruptions in traffic flow on the mainline. These disruptions can result in variations in traffic conditions that may not be accurately captured by the models. Overall, while the `Physics FF' and hybrid models (where applicable) generally outperform the regular model and `Physics FC' in terms of transferability, the inconsistencies in trends compared to Dataset 1 highlight the influence of spatial distances, non-homogeneous traffic parameters, and the presence of interchanges and ramps on the accuracy of the models. These factors should be carefully considered when applying and interpreting the modeling results in such complex freeway environments. §.§ Time sensitivity Performing a sensitivity analysis along the time dimension is crucial to assess the robustness of the model when predicting traffic flows further into the future. In this case, both the regular model and the physics-based hybrid model were trained on Dataset 1 for Case A. The models were evaluated to predict 25 minutes into the future using a temporal history of 50 minutes, which consists of 10 time steps. The analysis reveals that the prediction errors for both models increase as the prediction horizon becomes longer. This is expected, as longer prediction horizons introduce more uncertainties and make it more challenging to accurately forecast traffic flows. However, it is observed that the physics-based hybrid model consistently outperforms the regular model at all prediction horizons. The performance gap between the two models tends to widen as the prediction horizon increases, indicating the superiority of the physics-based approach in capturing the dynamics of traffic and improving the accuracy of predictions. 
Furthermore, the physics-based hybrid model demonstrates relatively higher benefits at the transfer location compared to the target location. This suggests that incorporating physics-based principles and combining information from both upstream and downstream stations allows for a more comprehensive representation of the traffic conditions and enhances the model's predictive capabilities. § CONCLUSIONS Real-time traffic prediction on freeways plays a crucial role in the effective functioning of Intelligent Transportation Systems. Extensive research has been conducted to enhance the accuracy of traffic prediction models, including statistical parametric and non-parametric approaches that provide improved flexibility in capturing the spatial and temporal aspects of traffic patterns. However, despite these advancements, state-of-the-art spatio-temporal models such as CNN-LSTM often learn case-specific features that lack generalizability. These models uncover only limited information about the underlying traffic dynamics, thereby limiting their usability to specific training conditions. In this study, we propose a feature transformation approach for traffic flow prediction models based on traffic flow theory. The proposed model demonstrates the ability to transfer its learned features to different settings, thereby enhancing its generalizability. The results indicate that the physics-based model can learn more universal features by mapping true traffic flows using estimates of both congested and uncongested flows. When compared to the regular model, the physics-based models exhibit improved prediction performance at both the target and transfer locations across various prediction scenarios. This improvement can be attributed to the inherent ability of the proposed model's feature inputs to account for spatial shifts, making them more transferable. Our analysis reveals that physics-based FC models trained using congested state estimators from downstream locations do not perform well. This lack of performance could be attributed to the absence of congested states downstream or the significant interference of ramp flows, which violate Newell's conservation assumptions. On the other hand, FF and hybrid models generally outperform the regular model and FC model in terms of transferability. Minor discrepancies in trends for larger datasets are observed, likely due to location-specific attributes. Additionally, the physics-based hybrid model consistently outperforms the regular model across all prediction horizons in a multi-step prediction setting. However, due to limitations in the available data, we were unable to conduct spatial sensitivity analysis, highlighting the need for further research using simulated data.
http://arxiv.org/abs/2307.04400v1
20230710080159
ARK: Robust Knockoffs Inference with Coupling
[ "Yingying Fan", "Lan Gao", "Jinchi Lv" ]
stat.ME
[ "stat.ME", "math.ST", "stat.ML", "stat.TH" ]
ARK: Robust Knockoffs Inference with Coupling
Yingying Fan is Centennial Chair in Business Administration and Professor, Data Sciences and Operations Department, Marshall School of Business, University of Southern California, Los Angeles, CA 90089 (E-mail: [email protected]). Lan Gao is Assistant Professor, Business Analytics and Statistics Department, Haslam College of Business, University of Tennessee, Knoxville, TN 37996 (E-mail: [email protected]). Jinchi Lv is Kenneth King Stonier Chair in Business Administration and Professor, Data Sciences and Operations Department, Marshall School of Business, University of Southern California, Los Angeles, CA 90089 (E-mail: [email protected]). This work was partially supported by NIH R01 Grant 1R01GM131407-01 and NSF Grants DMS-1953356 and EF-2125142.
Yingying Fan^1, Lan Gao^2 and Jinchi Lv^1
University of Southern California^1 and University of Tennessee^2
July 8, 2023
We investigate the robustness of the model-X knockoffs framework with respect to the misspecified or estimated feature distribution. We achieve such a goal by theoretically studying the feature selection performance of a practically implemented knockoffs algorithm, which we name the approximate knockoffs (ARK) procedure, under the measures of the false discovery rate (FDR) and family-wise error rate (FWER). The approximate knockoffs procedure differs from the model-X knockoffs procedure only in that the former uses the misspecified or estimated feature distribution. A key technique in our theoretical analyses is to couple the approximate knockoffs procedure with the model-X knockoffs procedure so that random variables in these two procedures can be close in realizations. We prove that if such a coupled model-X knockoffs procedure exists, the approximate knockoffs procedure can achieve the asymptotic FDR or FWER control at the target level. We showcase three specific constructions of such coupled model-X knockoff variables, verifying their existence and justifying the robustness of the model-X knockoffs framework.
Running title: ARK
Key words: Knockoffs inference; High dimensionality; Feature selection; False discovery rate control; Family-wise error rate control; Coupling; Robustness
§ INTRODUCTION The knockoffs inference framework <cit.> is a powerful innovative tool for feature selection with controlled error rates. In particular, the model-X knockoffs <cit.> achieves the false discovery rate (FDR) control at a predetermined level in finite samples without requiring any specific model assumptions on how the response depends on the features, making it an attractive option for feature selection in a wide range of statistical applications.
The fundamental idea of the knockoffs procedure is to construct knockoff variables that are exchangeable in distribution with the original features but are independent of the response conditional on the original variables. These knockoff variables serve as a control group for the original features, allowing researchers to identify relevant original features for the response. The model-X knockoffs inference has gained increasing popularity since its inception and there have been flourishing developments and extensions of the knockoffs framework and its spirit, such as the k-familywise error rate (k-FWER) control with knockoffs <cit.>, power analysis for the knockoffs procedure <cit.>, derandomized knockoffs <cit.>, knockoffs inference for time series data <cit.>, the kernel knockoffs procedure <cit.>, and FDR control by data splitting or creating mirror variables <cit.>. A key assumption in the model-X knockoffs inference is that the joint distribution of features is known. However, such information is almost never available in practice. There has been overwhelming empirical evidence that the model-X knockoffs framework is robust to misspecified or estimated feature distributions <cit.>. Yet, the theoretical characterization of its robustness is still largely missing. A notable exception is the recent work of <cit.>, where it was formally and elegantly shown that the knockoffs data matrix collecting the knockoff variables can be generated from a distribution, which we call the working distribution for ease of presentation, that is different from the true underlying feature distribution, and that the resulting FDR inflation can be measured by the empirical Kullback–Leibler (KL) divergence between the true conditional distribution X_j | X_-j and the working conditional distribution. Here, X_j∈ℝ stands for the jth feature, X_-j∈ℝ^p-1 stands for the feature vector with the jth feature removed, and p is the feature dimensionality. Two important assumptions in their analyses for ensuring the asymptotic FDR control are that 1) the working distribution should be learned independently of the training data used for feature selection and that 2) the empirical KL divergence between the two knockoffs data matrices (of diverging dimensionalities) generated from the working and true distributions, respectively, needs to vanish as the sample size increases. Although their results are general and apply to arbitrary dependence structure of the response on features, these two assumptions do not always describe the practical implementation. Our results in the current paper are free of the two assumptions discussed above. To add more context to our statements above, especially the one about assumption 2), let us consider the scenario when the true feature matrix has independent and identically distributed (i.i.d.) entries from the t-distribution with ν degrees of freedom, but we misspecify it and use the Gaussian distribution as a working distribution to generate an n× p knockoff variable matrix, where n is the sample size. It can be calculated that the empirical KL divergence between this matrix and the model-X knockoff variable matrix (also of dimension n× p) defined in <cit.> has mean and variance both of order np/{ν (ν + p)}. Thus, only when ν^2 ≫ n min (n, p) (which is equivalent to np/{ν (ν + p)}→ 0) can the FDR inflation as derived in <cit.> vanish asymptotically.
In contrast, our theory shows that as long as ν^2≫ s^4 (log p)^4 + 4/γ for some γ∈ (0, 1) with s≪ n^1/2 a sparsity parameter, the knockoffs procedure based on the working distribution can achieve the asymptotic FDR control. More details for our results and model assumptions are summarized formally in Section <ref>. We provide additional comparisons of our results with those of <cit.> in various parts of the paper where more specifics can be discussed. We emphasize and acknowledge that <cit.> established general robustness results without specific model assumptions, while some of our results rely on certain specific model assumptions. The main point we advocate here is that a different notion of closeness than the KL divergence can be advantageous in studying the robustness of the model-X knockoffs. The major goal of our paper is to establish a general theory on the robustness of the model-X knockoffs framework for the FDR and FWER control. We approach the problem by studying the performance of the approximate knockoffs (ARK) procedure, an algorithm that is most popularly implemented in practice when applying the knockoffs framework. The approximate knockoffs procedure differs from the model-X knockoffs in that the former generates the knockoff feature matrix from a working distribution that can be misspecified or learned from the same training data for feature selection. By showing that the approximate knockoffs inference procedure achieves the asymptotic FDR and FWER control as sample size increases, we can verify the robustness of the model-X knockoffs. An important idea in our technical analyses is coupling, where we pair the approximate knockoffs procedure with the model-X knockoffs procedure in such a way that random variables in these two paired procedures are close in realizations with high probability. Hereafter, we will refer to the model-X knockoffs as the perfect knockoffs procedure to emphasize its difference from the approximate knockoffs procedure. It is important to emphasize that we require the realizations of random variables in the paired procedures to be close, instead of the corresponding distributions being close. This is a major distinction from the assumption in <cit.>. Our new notion of closeness allows us to justify the robustness of the model-X knockoffs in some broader contexts not covered by studies in the existing literature. We also emphasize that although our conditions are imposed on the perfect knockoff variables, we do not need to know or construct them in implementation; the existence of such variables is sufficient for our theoretical robustness analyses. We present our theory by first stating the general conditions on the existence of the coupled perfect knockoff statistics and their closeness to the approximate knockoff statistics in Section <ref>, and then provide examples justifying these general conditions in Sections <ref> and <ref>. More specifically, our theory has three layers, related to different stages in applying the knockoffs inference procedure. Our general theory presented in Section <ref> directly makes assumptions on the quality of the approximate knockoff statistics (cf. (<ref>)) by requiring the existence of the perfect knockoff statistics that are sufficiently close to the approximate knockoff statistics. Then under some regularity conditions imposed on the distribution of these perfect knockoff statistics, we prove that the FDR and FWER are controlled asymptotically using the approximate knockoff statistics. 
This lays the theoretical foundation for our subsequent analyses that are developed by verifying these general conditions in various more specific scenarios. The second layer of our theory, presented in Section <ref>, delves deeper and replaces the coupling condition imposed on the knockoff statistics in Section <ref> with a coupling condition on the approximate knockoff variables generated from some misspecified or estimated feature distribution. Similar in nature to the coupling condition in our general theory, this new condition assumes that there exist perfect knockoff variables that can be coupled with approximate knockoff variables so that their realizations are close to each other with high probability. Since knockoff statistics are known functions of knockoff variables, such an alternative condition intuitively and naturally leads to the verification of the coupling condition on knockoff statistics in our general theory. Indeed, we showcase using two commonly used knockoff statistics, namely the marginal correlation statistics and the regression coefficient difference statistics, that the coupling condition on knockoff variables can guarantee the coupling condition on knockoff statistics. We also verify that for each of these two constructions of knockoff statistics, the other regularity conditions in our general theory also hold, ensuring the asymptotic FDR and FWER control. The last layer of our theory is presented in Section <ref> and showcases three specific constructions of the coupled perfect knockoff variables. By imposing conditions on the misspecified or estimated feature distribution, we construct explicitly the coupled perfect knockoff variables and prove that the coupling conditions in the first and second layers of our general theory are satisfied. This gives us a complete theory with conditions imposed on the working distribution for generating knockoff variables and verifies the robustness of the model-X knockoffs inference procedure. Our theory allows high dimensionality of features and does not require an independent learning data set for estimating the feature distribution. There exist some other less related works in the literature that contribute to relaxing the assumption of a fully known feature distribution in the model-X knockoffs framework. For instance, <cit.> relaxed such an assumption by assuming the existence of a sufficient statistic for the model and proposing an alternative conditional exchangeability for knockoffs given the sufficient statistic. <cit.> investigated the robustness of knockoffs inference with estimated feature distribution in terms of the FDR control in the linear model setting where the features follow a latent factor model with parametric idiosyncratic noise. <cit.> provided a theoretical guarantee of the asymptotic FDR control for the approximate knockoffs procedure under an assumption that the FDR function is Lipschitz with respect to the feature covariance matrix when the features have a joint Gaussian distribution. The rest of the paper is organized as follows. Section <ref> first introduces the approximate knockoffs procedure and then presents the general conditions and theory for the asymptotic FDR and k-FWER control. We also introduce the coupling idea, a key technique in our theoretical analyses. We illustrate our general theory using two commonly used constructions of knockoff statistics in Section <ref>. Section <ref> further provides three specific constructions of the coupled perfect knockoff variables.
We conclude our paper by summarizing the key results and discussing some future research directions in Section <ref>. All the proofs and technical details are provided in the Supplementary Material. To facilitate the technical presentation, let us introduce some notation that will be used throughout the paper. We use a_n ≪ b_n or a_n = o(b_n) to represent a_n / b_n → 0, a_n ≫ b_n to represent a_n / b_n →∞, and a_n ≲ b_n or a_n = O(b_n) to represent a_n ≤ C b_n for an absolute constant C > 0. Let a ∧ b and a ∨ b be the minimal and maximal values of a and b, respectively. For a vector v ∈ℝ^p, denote by ‖v‖_1, ‖v‖_2, and ‖v‖_0 the ℓ_1-norm, ℓ_2-norm, and ℓ_0-norm, respectively. For 1 ≤ j ≤ p, v_j is the jth component of v and v_-j is the subvector of v with the jth component removed. For a matrix M ∈ℝ^n × p, denote by M_i, j the (i, j)th entry of M, M_j the jth column of M, and M_A_1, A_2 the submatrix of M consisting of (M_i, j)_i ∈ A_1, j ∈ A_2 for sets A_1 ⊂{1, …, n } and A_2 ⊂{1, …, p }. Let ‖M‖_max and ‖M‖_2 be the maximum norm and spectral norm of a matrix M, respectively. For 1 ≤ j ≤ p, -j represents the set {1, …, p }∖{j}, and denote by |𝒜| the cardinality of a set 𝒜. For a positive definite matrix M, let λ_min(M) and λ_max(M) be the smallest and largest eigenvalues of M, respectively. § GENERAL THEORY ON ROBUST KNOCKOFFS INFERENCE WITH COUPLING §.§ Model setup and model-X knockoffs framework Assume that we have n i.i.d. observations { (x_i, y_i)}_i = 1^n from the population (X, Y), where X = (X_1, …, X_p)^⊤ is the p-dimensional feature vector and Y ∈ℝ is a scalar response. Here, the feature dimensionality p can diverge with the sample size n. Adopting the matrix notation, the n i.i.d. observations can be written as the data matrix X = (x_i, j ) ∈ℝ^n × p collecting the values of all the features and the vector y = (y_1,⋯, y_n)^⊤∈ℝ^n collecting the values of the response. A feature X_j is defined as null (or irrelevant) if and only if it is independent of the response conditional on all the remaining features; that is, Y ⊥ X_j | X_-j, where X_-j is a subvector of X with the jth component removed. Denote by ℋ_0 = {1 ≤ j ≤ p: X_j is null } the set of null features and ℋ_1 = ℋ_0^c that of nonnull (or relevant) features. To ensure the model identifiability and interpretability, we follow <cit.> and assume that ℋ_1 exists and is unique. Further assume that the subset of relevant features is sparse such that p_1 = | ℋ_1 | = o(n ∧ p), where |𝒜| stands for the cardinality of a given set. The goal is to select as many relevant features as possible while controlling some error rate measure at the prespecified target level. Two commonly used measures for evaluating the feature selection performance are the FDR <cit.> and k-FWER <cit.>, where the FDR is defined as the expectation of the fraction of false discoveries among all the discoveries and the k-FWER is defined as the probability of making k or more false discoveries. Specifically, for an outcome Ŝ of some feature selection procedure, the FDR and k-FWER are defined as FDR = 𝔼[FDP] with FDP = | Ŝ∩ℋ_0 | / (| Ŝ | ∨ 1) and k-FWER = ℙ ( |Ŝ∩ℋ_0| ≥ k ), respectively. The model-X knockoffs framework provides a flexible way for controlling the FDR at some prespecified target level in finite samples <cit.>, allowing arbitrary dimensionality of X and arbitrary dependence between the response Y and feature vector X. The knockoffs method has also been explored in the context of the k-FWER control by <cit.>.
A key step of the model-X knockoffs inference <cit.> is to generate the model-X knockoff variables X̃ = (X̃_1, …, X̃_p)^⊤ such that X̃ ⊥ Y | X and (X, X̃)_(S) d= (X, X̃) for each S ⊂{1, …, p}, where (X, X̃)_(S) is obtained by swapping the components X_j and X̃_j in (X, X̃) for each j ∈ S. The construction of the model-X knockoff variables, which we will refer to as the perfect knockoff variables in the future presentation, requires the exact knowledge of the distribution of feature vector X. For example, Algorithm 1 in <cit.> provided a general approach to generating the perfect knockoff variables when such information is available. However, the exact knowledge of the feature distribution is usually unavailable in real applications. Thus, in practical implementation, the problem becomes identifying the relevant subset ℋ_1 with the approximate knockoff variables generated from a feature distribution that can be different from the true underlying one. As stated in the Introduction, we study the robustness of the model-X knockoffs procedure by investigating the feature selection performance of its practical implementation, which we name the approximate knockoffs procedure and formally present in the next section for completeness. §.§ Feature selection with approximate knockoffs In practice, the approximate knockoffs inference procedure below is popularly implemented for controlling the FDR or k-FWER. 1) Generating approximate knockoff variables. Since the true underlying feature distribution F(·) is generally unavailable, we generate the knockoff variables from some user-specified feature distribution F̂(·), which may depend on the sample (X, y), using the same algorithm proposed for generating the perfect knockoff variables (e.g., Algorithm 1 in <cit.>). Denote by X̂ = (x̂_i, j) ∈ℝ^n× p the resulting approximate knockoffs data matrix. 2) Constructing approximate knockoff statistics. Pretend that X̂ were a perfect knockoffs data matrix and follow the same procedure as in <cit.> to calculate the knockoff statistics Ŵ_j with j=1,⋯, p. Specifically, we first compute the feature importance statistics (Z_1, …, Z_p, Ẑ_1, …, Ẑ_p)^⊤ = t((X, X̂), y), where t(·) is a measurable function of the input ((X, X̂), y), and Z_j and Ẑ_j measure the importance of the jth feature and its approximate knockoff counterpart relative to the response, respectively. Then the approximate knockoff statistic Ŵ_j for the jth feature is defined as Ŵ_j = f_j(Z_j, Ẑ_j), where f_j (·, ·) is an antisymmetric function satisfying f_j(x, y) = - f_j (y, x). See <cit.> for examples and characterizations on the valid construction of knockoff statistics. 3) Selecting relevant features. Calculate a data-driven threshold T for the knockoff statistics {Ŵ_j}_j=1^p and select the set of important features as Ŝ = {1≤ j ≤ p: Ŵ_j ≥ T}. The thresholds for the FDR control and k-FWER control are different. Specifically, denoting 𝒲̂ = {|Ŵ_1|, …, |Ŵ_p|}, the thresholds for the FDR and k-FWER control are defined as T = min{t ∈𝒲̂: #{j: Ŵ_j ≤ -t}/(#{j: Ŵ_j ≥ t}∨ 1) ≤ q} and T_v = sup{t ∈𝒲̂: #{j: - Ŵ_j ≥ t} = v } with v the largest integer such that ∑_i = k^∞ 2^-(i+v)\binom{i+v-1}{i}≤ q, respectively, where q∈ (0,1) is the prespecified level for the FDR or k-FWER. It is seen that the only difference of the algorithm above from the perfect knockoffs procedure is how the knockoffs data matrix is generated. The perfect knockoffs procedure based on the true underlying feature distribution F(·) has been shown to control the FDR or k-FWER at the target level <cit.>.
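As a minimal illustration of the selection step above, the sketch below computes the FDR threshold T and the k-FWER threshold T_v from a vector of approximate knockoff statistics. The function names are illustrative, and the negative-binomial identity used to choose v (the sum over i ≥ k of 2^-(i+v)\binom{i+v-1}{i} equals the tail probability of a negative binomial distribution with v successes and success probability 1/2) is a computational convenience, not part of the procedure's description.

import numpy as np
from scipy import stats

def fdr_threshold(W, q):
    # T = min{ t in |W| : #{W_j <= -t} / max(#{W_j >= t}, 1) <= q }
    W = np.asarray(W, dtype=float)
    for t in np.sort(np.abs(W)):
        if np.sum(W <= -t) / max(np.sum(W >= t), 1) <= q:
            return t
    return np.inf                          # no feasible threshold: select nothing

def kfwer_threshold(W, k, q):
    # v is the largest integer with sum_{i>=k} 2^{-(i+v)} C(i+v-1, i) <= q,
    # i.e., P(NegBinomial(v, 1/2) >= k) <= q; then T_v is the largest t in |W|
    # with exactly v statistics satisfying -W_j >= t.
    W = np.asarray(W, dtype=float)
    v, best_v = 1, 0
    while stats.nbinom.sf(k - 1, v, 0.5) <= q:
        best_v, v = v, v + 1
    for t in np.sort(np.abs(W))[::-1]:
        if np.sum(-W >= t) == best_v:
            return t
    return np.inf

# Toy example with hypothetical knockoff statistics.
rng = np.random.default_rng(1)
W_hat = np.concatenate([rng.normal(3, 1, 20), rng.normal(0, 1, 200)])
T = fdr_threshold(W_hat, q=0.1)
print("number selected:", int(np.sum(W_hat >= T)))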
For the approximate knockoffs inference, however, it is reasonable to expect some inflation in the FDR and k-FWER control, and the inflation level depends on the qualities of the approximate knockoff variable matrix and the resulting knockoff statistics {Ŵ_j}_j=1^p. A desired property is that as the approximate knockoff statistics “approach" the perfect knockoff statistics, the level of inflation also vanishes. One contribution of our paper is to formally introduce a notion of closeness measuring the qualities of the approximate knockoff statistics {Ŵ_j}_j=1^p and knockoff variable matrix . As stated in the Introduction, our technical analyses have three layers, corresponding reversely to the different steps of the approximate knockoffs inference procedure described above. To put it into more content, note that the set of selected features Ŝ is defined directly as a function of the approximate knockoff statistics {Ŵ_j}_j=1^p. Hence, given {Ŵ_j}_j=1^p feature selection can be conducted without the knowledge of the approximate knockoff variable matrix or the feature distribution F(·). For this reason, the first layer of our analysis concerns the quality of {Ŵ_j}_j=1^p and characterizes what kind of approximate knockoff statistics can yield the asymptotic FDR or k-FWER control. The second layer of our analysis studies the quality of and is built on the first layer. We characterize what kind of approximate knockoff variable matrix can lead to valid knockoff statistics {Ŵ_j}_j=1^p in the sense of achieving the asymptotic FDR or k-FWER control. The third layer of our analysis goes all the way to the root of the knockoffs inference and provides specific examples and conditions on F̂(·) for ensuring the asymptotic FDR or k-FWER control. The key idea empowering our analyses is variable coupling behind the approximate knockoffs (ARK) procedure, which we formally introduce in the next section. §.§ Robust knockoffs inference with coupling An important observation is that the perfect knockoff variables in the model-X knockoffs framework <cit.> are not unique. Consequently, the knockoff statistics are not unique either. Indeed, even with the same algorithm (e.g., Algorithm 1 in <cit.>), the knockoff variables generated from different runs of the algorithm are only identically distributed. Our coupling idea is deeply rooted on such observation. Let us introduce some additional notation to facilitate our formal presentation of the general theory. Following the model-X knockoffs framework, for a realization of the perfect knockoff variable matrix generated from the true feature distribution F(·), we let (Z_1^*,…, Z_p^*, Z_1, …, Z_p)^⊤ = t((, ), ) and define the perfect knockoff statistics W_j = f_j (Z_j^*, Z_j) for 1 ≤ j≤ p, where functions t(·) and f_j(·) are chosen to be the same as in the approximate knockoffs inference procedure. We next establish a general theory on the asymptotic FDR control and k-FWER control for the approximate knockoffs inference procedure with regularity conditions imposed on Ŵ_j's. [Coupling accuracy] There exist perfect knockoff statistics {W_j}_j = 1^p such that for some sequence b_n → 0, ℙ( max_1 ≤ j ≤ p | Ŵ_j - W_j | ≥ b_n ) → 0. Conditions on the convergence rate b_n for ensuring the asymptotic FDR or k-FWER control will be specified in the subsequent assumptions. 
Condition <ref> above couples each realization of the approximate knockoff statistics {Ŵ_j}_j=1^p with a realization of the perfect knockoff statistics {W_j}_j=1^p, and they need to be sufficiently close to each other with high probability. Note that the existence of such {W_j}_j=1^p is required only for the theory, whereas the implementation uses only {Ŵ_j}_j=1^p. We will provide examples in later sections verifying the existence of such coupled {W_j}_j=1^p. The two conditions below are on the quality of the ideal knockoff statistics {W_j}_j=1^p and the signal strength in the data as measured by W_j's. [Average concentration of W_j] There exist deterministic quantities { w_j }_j=1^p such that p^-1∑_j = 1^pℙ ( | W_j - w_j | ≥δ_n ) = o(p^-1), where δ_n → 0 is a sequence satisfying δ_n ≥ b_n. [Signal strength] Let 𝒜_n = {j ∈ℋ_1: w_j ≥ 5 δ_n }. It holds that a_n = | 𝒜_n | →∞ and w_j > - δ_n for j ∈𝒜_n^c. As discussed in <cit.> and <cit.>, a desired property of the knockoff statistics is to have a large and positive value for W_j if j∈ℋ_1, and a small and symmetric around zero value for W_j if j∈ℋ_0. Conditions <ref> and <ref> above together formalize this property in the average probability sense. Note that there is no requirement that each individual w_j with j∈ℋ_1 is positive and large; we only need that there exist enough number (i.e., a_n) of w_j's with j∈ℋ_1 that are positive and large enough. Implicitly, a_n→∞ requires that the number of relevant features |ℋ_1| diverges with sample size as well. Condition <ref> requires that each perfect knockoff statistic W_j is concentrated around its corresponding w_j with rate δ_n in an average probability sense. Let us define p_0 = |ℋ_0| and G(t) = p_0^-1∑_j∈ℋ_0ℙ (W_j ≥ t). By <cit.>, the perfect knockoff statistics W_j with j∈ℋ_0 are symmetrically distributed around zero. It follows that G(t) = p_0^-1∑_j∈ℋ_0ℙ (W_j ≤ -t). We need to impose the technical conditions below on the distribution of the perfect knockoff statistics for our robustness analysis. [Weak dependence] For some constants 0 < γ < 1, 0< c_1 < 1, C_1 > 0, and a positive sequence m_n = o(a_n), it holds that ( ∑_j ∈ℋ_01 (W_j > t) ) ≤ C_1 m_n p_0 G(t) + o ( ( log p )^ - 1/γ [p_0 G (t)]^2 ) uniformly over t ∈ (0, G^-1 ( c_1 q a_n / p ) ]. [Distribution of W_j] Assume that G(t) is a continuous function. For the same constants γ and c_1 as in Condition <ref>, it holds that as n →∞, (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - b_n ) - G(t + b_n) / G(t) → 0 and a_n^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + b_n ) → 0. Condition <ref> above can be easily satisfied if W_j's with j∈ℋ_0 are independent of each other. At the presence of dependence, it imposes an assumption on the strength of correlation among the indicator functions 1(W_j>t) with j∈ℋ_0. The ratio G(t - b_n ) - G(t + b_n) / G(t) in Condition <ref> above is closely related to the hazard rate function in survival analysis if G(t) has a probability density function. Loosely speaking, assumption (<ref>) requires that the hazard rate function should have enough smoothness and be more or less bounded uniformly over the range t ∈ (0, G^-1 ( c_1 q a_n / p ) ]; it plays an essential role in determining the coupling accuracy b_n. Assumption (<ref>) allows a small fraction of W_j's for important features to take negative values with nonvanishing probabilities. We are now ready to present our first general theorem on the FDR control for the approximate knockoffs inference procedure. 
Under Conditions <ref>–<ref>, we have lim sup_n →∞≤ q. The main idea of the proof is to show that ∑_j ∈ℋ_01 (Ŵ_j ≥ t) ≈∑_j ∈ℋ_01 (W_j ≥ t) ≈∑_j∈ℋ_0ℙ (W_j ≥ t) and similarly ∑_j ∈ℋ_01 (Ŵ_j ≤ - t) ≈∑_j∈ℋ_0ℙ (W_j ≤ -t) uniformly over 0 < t ≤ G^-1 (c_1 q a_n/p) with asymptotic probability one, under Conditions <ref>, <ref>, and <ref>. In addition, we will show that threshold T falls into the range (0, G^-1 (c_1 q a_n/p)] with asymptotic probability one under Conditions <ref>–<ref>. Thus, ∑_j ∈ℋ_01 (Ŵ_j ≥ T) ≈∑_j ∈ℋ_01 (Ŵ_j ≤ - T) with asymptotic probability one by the symmetry of {W_j}_j ∈ℋ_0. Consequently, the FDR of the approximate knockoffs procedure is asymptotically the same as that of the perfect knockoffs procedure, where the latter has been proved to be controlled at the target level. This ensures that the FDR of the approximate knockoffs procedure can be controlled asymptotically. Next we will establish the companion result on the k-FWER control for the approximate knockoffs inference procedure. Recall that the selected subset is Ŝ_2 := {1 ≤ j ≤ p: Ŵ_j ≥ T_v} with threshold T_v given in (<ref>). Denote by V̂ = | Ŝ_2 ∩ℋ_0| the number of false discoveries. Similar to the FDR analysis, we assume that |ℋ_1|→∞ as n→∞. Further, we consider the scenario when k diverges very slowly with n as well. Our theorem will again be built on the key Condition <ref> that there exist coupled perfect knockoff statistics that are sufficiently close to the approximate knockoff statistics. However, different from the FDR study when Conditions <ref>–<ref> are needed, we assume instead the two technical conditions below for the distribution of the perfect knockoff statistics, which can be interpreted intuitively similar to Conditions <ref>–<ref>. [Weak dependence] For constants 0 < γ < 1 and C_2 > 0, and a positive sequence m_n = o(k), it holds that ( ∑_j ∈ℋ_01 (W_j > t) ) ≤ C_2 m_n p_0 G(t) + o( (log k )^ - 1/γ [p_0 G (t)]^2 ) uniformly over t ∈ (G^-1 ( 3 k / 2 p ), G^-1 ( k / 2 p ) ). Assume that G(t) is a continuous function. It holds that as n →∞, sup_t ∈( G^-1 (3k/2p), G^-1 (k/2p)) G (t - b_n) - G(t + b_n)/G(t)→ 0 and k^-1∑_j ∈ℋ_1ℙ(W_j < - G^-1 (3 k /2 p) ) → 0 Now we are ready to present our general theorem on the k-FWER control for the approximate knockoffs procedure. Assume that Conditions <ref>, <ref>, and <ref> are satisfied, k →∞, and m_n / k → 0 as n→∞. Then for each ε > 0, we have lim sup_n →∞ ℙ ( V̂≥ k(1 + ε) ) ≤ q. <cit.> showed that the perfect knockoffs inference procedure for the k-FWER control provides precise finite-sample control on the k-FWER. The main idea for proving Theorem <ref> is to compare the approximate knockoff statistics with their coupled perfect counterparts and show that the approximate threshold T_v satisfies |T_v - T_v| ≤ b_n as long as max_1 ≤ j ≤ p |Ŵ_j - W_j| ≤ b_n, where T_v is the corresponding threshold from the perfect knockoff statistics. Moreover, we can show that for each > 0, with high probability it holds that T_v + M_v + 1 < T_v - 2b_n ≤T_v + M_v for M_v ≤ v. Therefore, the probability of the approximate knockoffs inference procedure making at least k false discoveries can be related to that of the FWER control with the perfect knockoff statistics, which establishes the desired result in Theorem <ref>. § ILLUSTRATION OF THE GENERAL THEORY §.§ Characterization of approximate knockoff variables We have established in Section <ref> a general theory on the asymptotic control of the FDR and k-FWER for the approximate knockoffs inference. 
The key assumption for ensuring the asymptotic FDR and k-FWER control is Condition <ref>. Since the knockoff statistics are intermediate results calculated from the knockoff variables, it is important to provide a characterization on the quality of the approximate knockoff variable matrix that can guarantee Condition <ref>. The assumption below is imposed for such a purpose. For constructed from the approximate knockoffs inference procedure, there exists a perfect knockoff data matrix and an asymptotically vanishing sequence Δ_n such that ℙ(max_1 ≤ j ≤ p n^-1/2_j - _j _2 ≥Δ_n) → 0, where _j and _j are the jth columns of the approximate and perfect knockoff data matrices and , respectively. Condition <ref> above couples each approximate knockoff variable _j with a perfect knockoff variable _j. Similar to Condition <ref>, we need the realizations instead of the distributions of _j and _j to be close, which is a major distinction from the assumption in <cit.>. We next show that the closeness between and can lead to the closeness between Ŵ_j's and W_j's as required by Condition <ref>. Since different construction of the knockoff statistics depends on the feature matrix differently, we showcase the theory using two popularly used constructions of the knockoff statistics: the marginal correlation knockoff statistics and the regression coefficient difference (RCD) knockoff statistics. §.§ Marginal correlation knockoff statistics The marginal correlation is a commonly used measure on variable importance for feature screening due to its simplicity. Given an approximate knockoff variable matrix and its coupled perfect counterpart satisfying Condition <ref>, the approximate knockoff statistics based on the marginal correlation difference are defined as Ŵ_j = ( √(n)_2 )^-1 ( | _j^⊤ | - | _j^⊤ | ) for 1 ≤ j ≤ p, and the coupled perfect knockoff statistics are given by W_j = ( √(n)_2 )^-1 ( | _j^⊤ | - | _j^⊤ | ) for 1 ≤ j ≤ p. Observe that W_j - Ŵ_j = ( √(n)_2 )^-1 ( | _j^⊤ | - | _j^⊤ | ) and thus under Condition <ref>, we have that with asymptotic probability one, max_1≤ j≤ p|W_j-W_j|≤Δ_n. This result is summarized formally in Lemma <ref> in Section <ref> of the Supplementary Material. We consider the flexible nonparametric regression model Y = f(X_ℋ_1) + ε, where f is some unknown regression function, X_ℋ_1 = (X_j)_j ∈ℋ_1 contains all the relevant features for response Y, and ε is the model error satisfying that ε X and (ε) = 0. Assume that feature vector X = (X_1, …, X_p)^⊤d∼ N(0, ) with the positive definite covariance matrix. Moreover, let the distribution of the perfect knockoff variables X = (X_1, ⋯, X_p)^⊤ satisfy that (X, X) = (X_1, ⋯, X_p, X_1, ⋯, X_p) d∼ N (0, [ - r I_p; - r I_p ; ]), where r > 0 is a constant such that the above covariance matrix is positive definite. Here, we consider the equicorrelated construction for simpler presentation and the diagonal matrix r I_p can be replaced with a general version (r_1, …, r_p) with possibly distinct diagonal entries { r_j}_j = 1^p. The above construction of the perfect knockoff variables has been discussed in <cit.>. Note that the Gaussian distribution assumption is imposed mainly to verify the general Conditions <ref> and <ref>. If one assumes directly these two conditions, the Gaussian distribution assumption can be removed. Furthermore, we make the additional technical assumptions below on the generative model (<ref>) to verify the conditions in our general theory presented in Section <ref>. 
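A minimal numpy sketch of the marginal-correlation construction in this section is given below: each statistic contrasts |X_j^⊤ y| with the corresponding quantity for the knockoff column, scaled by √n times the Euclidean norm of y. The toy data, the equicorrelated Gaussian knockoffs with identity covariance, and the value of r are illustrative choices made for the sketch.

import numpy as np

def marginal_corr_stats(X, X_knock, y):
    # W_j = (sqrt(n) * ||y||_2)^(-1) * ( |X_j' y| - |X_knock_j' y| )
    X, X_knock, y = np.asarray(X, float), np.asarray(X_knock, float), np.asarray(y, float)
    scale = np.sqrt(X.shape[0]) * np.linalg.norm(y)
    return (np.abs(X.T @ y) - np.abs(X_knock.T @ y)) / scale

# Toy example: i.i.d. Gaussian features and equicorrelated Gaussian knockoffs
# (identity covariance, so the construction reduces to scalar operations).
rng = np.random.default_rng(2)
n, p, r = 500, 50, 0.9
X = rng.standard_normal((n, p))
X_knock = (1 - r) * X + np.sqrt(2 * r - r**2) * rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 1.0
y = X @ beta + rng.standard_normal(n)
W = marginal_corr_stats(X, X_knock, y)
print(np.round(W[:5], 3), np.round(W[5:10], 3))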
Y is a sub-Gaussian random variable with sub-Gaussian norm Y _ψ_2. Define 𝒜_n = { j ∈ℋ_1:( Y^2)^ - 1/2 (| (X_j Y) | - | ( X_j Y) |)| ≥ 5 δ_n } with δ_n = C_X,Y√(log p/n), where C_X,Y:=max_1 ≤ j ≤ p{ 16 √(2) X_j _ψ_2 Y _ψ_2/ ( Y^2)^1/2 8√(2) |w_j| Y _ψ_2^2 / Y^2 }. It holds that a_n:= |𝒜_n| →∞ and C_X,Y is a positive constant that is independent of p and n. Denote by (^-1)_j the jth column of matrix ^-1, _i, j the (i, j)th entry of matrix , and _ℋ_1, j a vector given by (_i, j)_i ∈ℋ_1. Recall the definition G(t) = p_0^-1∑_j∈ℋ_0ℙ (W_j ≥ t). Matrices ^-1 and are sparse in the sense that max_1≤ j ≤ p (^-1)_j_0 ≤ m_n and ∑_j ∈ℋ_01 ( _ℋ_1, j≠ 0 ) ≤ m_n. In addition, C_1 < r < min_1 ≤ j ≤ p_j, j≤max_1 ≤ j ≤ p_j, j < C_2 for some constants C_1> 0 and C_2 > 0. It holds that |ℋ_1|^-1∑_j ∈ℋ_1ℙ (W_j < - t ) ≤ G(t) for all t ∈ (0, C_3√(n^-1log p)) with C_3>0 some large constant. Under Conditions <ref>–<ref>, we can verify that Conditions <ref>–<ref> are satisfied. This together with Condition <ref> and our general theorem on the FDR control (cf. Theorem <ref>) leads to the theorem below. Assume that Conditions <ref>–<ref> are satisfied. In addition, assume that for some constant 0 < γ < 1, (log p)^1/γ m_n / a_n → 0 and the coupling accuracy Δ_n in Condition <ref> satisfies √(n)Δ_n (log p)^1/2 + 1/γ→ 0. Then for the approximate knockoffs inference based on the marginal correlation, we have lim sup_n →∞≤ q. Let us make a few remarks on the conditions and result presented in Theorem <ref> above. Condition <ref> verifies the signal strength assumption in Condition <ref> in the specific context of model (<ref>) and marginal correlation knockoff statistics. We show in Lemma <ref> in Section <ref> of the Supplementary Material that Condition <ref> holds with δ_n = O(√(n^-1log p )). Since we assume Gaussian feature distribution in this section, the dependence among the indicator functions as required by Condition <ref> is determined by covariance matrix . Hence, Condition <ref> is imposed to justify the validity of Condition <ref>. It is worth mentioning that the sparse dependence structure assumed in Condition <ref> can be replaced with a general assumption that the conditional distribution X_ℋ_0 | X_ℋ_1 has sparse pairwise dependency and the sequence { h_j(t; X_ℋ_1): = ( 1 (W_j ≥ t) | X_ℋ_1 ) }_j ∈ℋ_0 has sparse pairwise correlation for each given t > 0. Condition <ref> is a technical assumption that is intuitive and requires that on average, the probability of a relevant feature having a negative valued W_j is smaller than the corresponding probability of an irrelevant feature. Such condition is compatible with our requirement that relevant features should have positive and larger magnitude for W_j. The convergence rate assumption √(n)Δ_n (log p)^1/2 + 1/γ→ 0 in Theorem <ref> indicates that Δ_n ≪δ_n ∼√(n^-1log p), where δ_n is the concentration rate of individual W_j to w_j. In view of (<ref>), the requirement of Δ_n≪δ_n indeed constrains that the quality of the approximate knockoff statistics, as measured by Δ_n, is of an order smaller than the concentration rate δ_n; this is a general condition we need and not unique to the marginal correlation knockoff statistics. It is worth mentioning that the bound obtained in (<ref>) may be improved under more specific model assumptions. For instance, if (X̂_j, X_j) Y for j ∈ℋ_0 and Y is a sub-Gaussian random variable with (Y) = 0, then under Condition <ref> we can show that max_1 ≤ j ≤ p | Ŵ_j - W_j| ≤ C Δ_n √(n^-1log p). 
Later in Section <ref>, we will provide extensive analysis on the coupling order Δ_n using some specific examples of feature distributions. Next we will present the parallel result on the k-FWER control. Assume that Conditions <ref>, <ref>, and <ref> are satisfied, k →∞, m_n / k → 0, and Δ_n √(n log p)→ 0. Then for each ε > 0, we have lim sup_n →∞ ℙ ( V̂≥ k(1 + ε) ) ≤ q. The interpretations of the assumptions in the context of the k-FWER control are similar and thus omitted here for simplicity. §.§ Regression coefficient difference with debiased Lasso Another popularly used construction of the knockoff statistics is the regression coefficient difference (RCD). Let us consider the linear regression model = + , where = (β_j)_1 ≤ j ≤ p∈ℝ^p is the true regression coefficient vector, d∼ N(0, σ^2 I_n) is the model error vector, and . Assume that feature vector X = (X_1, …, X_p)^⊤ has mean 0_p ∈ℝ^p and covariance matrix ∈ℝ^p × p. Denote by ^ = (^⊤ , 0_p^⊤)^⊤∈ℝ^2p the augmented true parameter vector. Let =(β̂_j)_1 ≤ j ≤ 2p∈ℝ^2p be the debiased Lasso estimator (<cit.>) based on the augmented design matrix ^ := [, ], where is the approximate knockoff variable matrix. Assume that Condition <ref> is satisfied and is the coupled perfect knockoffs variable matrix. Similarly, define ^ := [, ]. Then can be coupled with the debiased Lasso estimator denoted as =(β_j)_1 ≤ j ≤ 2p∈ℝ^2p based on ^. Then the regression coefficient difference knockoff statistics can be defined as Ŵ_j = |β̂_j| - |β̂_j+p| and W_j = |β_j| - |β_j+p| for the approximate and perfect knockoffs procedures, respectively, for 1 ≤ j ≤ p. We provide the explicit definition of the debiased Lasso estimator to assist future presentation. For 1≤ j ≤ 2p, the debiased Lasso estimator is a one-step bias correction from some initial estimator ^=(β̂_j^)_1 ≤ j ≤ 2p∈ℝ^2p and is defined as β̂_j = β̂^_j + _j^⊤( - ^^) /_j^⊤^_j, where _j is the score vector defined as _j = ^_j - _-j_j with _j := _b{^_j - ^_-jb_2^2 /2n + λ_j b_1 } and {λ_j}_j = 1^2p the nonnegative regularization parameters. We construct the initial estimator as ^ := _b{ - ^b_2^2 /2n + λb_1 } with λ = C √(n^-1log (2p)) the regularization parameter and C>0 some constant. Analogously, the coupled debiased Lasso estimator can be defined componentwisely as β_j = β^_j + _j^⊤( - ^^) /_j^⊤^_j 1 ≤ j ≤ 2p, where ^ = (β_j^)_1 ≤ j ≤ 2p := _b{ - ^b_2^2 /2n + λb_1 } and _j = ^_j - ^_-j_j _j := _b{^_j - ^_-jb_2^2 /2n + λ_j b_1 }. It is important to emphasize that the same regularization parameters λ and λ_j's in defining should be used as in defining in (<ref>) so that their constructions differ only by the used feature matrix; this plays a key role in applying our coupling technique. Indeed, we prove in Lemma <ref> in Section <ref> of the Supplementary Material that the coupling technique together with Condition <ref> and some other regularity conditions ensures that with asymptotic probability one, max_1≤ j≤ 2p|β_j-β̂_j|≲Δ_ns√((log p)/n). The above result guarantees that Ŵ_j's and W_j's are also uniformly close over 1≤ j≤ p with max_1≤ j≤ p|Ŵ_j - W_j|≲Δ_ns√((log p)/n). As long as sΔ_n→ 0, this upper bound has a smaller order than the concentration rate δ_n of W_j (cf. Condition <ref>), because here δ_n ∼√(n^-1log p) as shown in our Lemma <ref> in Section <ref>. As commented after Theorem <ref>, the assumption that the coupling rate of max_1≤ j≤ p|W_j-Ŵ_j| is of a smaller order than the concentration rate δ_n plays a key role in establishing our theory on the asymptotic FDR control. 
We next introduce some additional notation and formally present the regularity conditions specific to this section. Observe that by symmetry, the augmented feature vector with the perfect knockoff variables has covariance matrix ^A = [ - D; - D ; ], where D is a diagonal matrix such that matrix ^A is positive definite. Let ^A = (^A)^-1 and _j = (_j, l)_l ≠ j with _j, l = - ^A_j, l /^A_j, j. It has been shown in <cit.> that the residuals e_j = X_j^ - X^_-j_j satisfy that ( e_j, X_-j^ ) = 0, ( e_j ) = 1/ ^A_j, j, and (e_j, e_l ) = ^A_j, l/^A_j, j^A_l, l. For 1 ≤ j ≤ 2p, denote by 𝒮_j = (_j) ∪(_j) ∪(_j). Let J = (^) ∪() ∪() and s := ^_0 = _0 = o(n). We make the technical assumptions below. a) For some constant C_4 > 0, ℙ (|J| ≤ C_4 s) → 1. b) For some sequence m_n ≲ s, it holds that max_1 ≤ j ≤ 2 p^A_j _0 ≤ m_n and ℙ ( max_1 ≤ j ≤ 2p |𝒮_j| ≤ C_5 m_n ) → 1 with some constant C_5>0. c) max_1 ≤ j ≤ 2p_j _2 ≤ C_6 and C_7 < λ_min (^A ) ≤λ_max (^A) < C_8 with some positive constants C_6, C_7, and C_8. [Restrictive eigenvalues] Assume that with probability 1 - o(1), min_δ_0 ≤ C_9 s δ^⊤[^ ] ^⊤^δ/ nδ_2^2 ≥ c_1 for some large enough constant C_9>0 and a small constant c_1 > 0. The features X_j's and errors e_j's are sub-Gaussian with sub-Gaussian norms X_j _ψ_2≤ϕ and e_j _ψ_2≤ϕ for some constant ϕ > 0. Let 𝒜_n = {j ∈ℋ_1: |β_j| ≫√(n^-1log p)} and it holds that a_n := |𝒜_n| →∞. The features X_j's and the errors e_j's are sub-Gaussian satisfying X_j _ψ_2≤ϕ and e_j _ψ_2≤ϕ for some constant ϕ > 0 and with probability 1 - O( p^-c), _j - _j _1 ≤ C n^-1/2 a_n,1,  [, ]_-j (_j - _j ) _2^2 ≤ C a_n,2 n^-1 [, ]^⊤_j _max≤ C,  ^init - _1 ≤ C s √(log p/n), where a_n,1 and a_n,2 are two possibly diverging sequences. Moreover, |σ^jk|/√(σ^jjσ^kk) < c for some constant 0< c < 1. We are now ready to state our results on the FDR control for the approximate knockoffs inference based on the debiased Lasso coefficients. Assume that Conditions <ref> and <ref>–<ref> hold, m_n / a_n → 0, and m_n^1/2 s (log p)^3/2 + 1/γ/√(n) + Δ_n s (log p)^1 + 1 /γ→ 0 for some constant 0 < γ < 1. Then we have lim sup_n →∞≤ q. Similarly as discussed in the last section, Condition <ref> is used to verify the weak dependence assumption in Condition <ref>. Conditions <ref> and <ref> are two regularity assumptions imposed for proving (<ref>). Condition <ref> contributes to verifying the general signal strength requirement in Condition <ref>. We have the parallel theorem for the k-FWER control below. Assume that Conditions <ref> and <ref>–<ref> are satisfied, k →∞, m_n / k → 0, and m_n^1/2 s (log p)^3/2 (log k)^1/γ/√(n) + Δ_n s log p → 0 for some constant 0 < γ < 1. Then for each ε > 0, we have lim sup_n →∞ ℙ ( V̂≥ k(1 + ε) ) ≤ q. Assume Conditions <ref> and <ref>-<ref> are satisfied. When (log p)^1/γ + 1/2 ( n^1/2 b_n + n^-1/2 s log p) → 0, we have lim sup_(n, p) →∞≤ q. We start with verification of Condition <ref>. Let J = (_0) ∪() ∪(). Assume Condition <ref> is satisfied. Moreover, with asymptotic probability one, it holds that |J| ≤ m = o(n) and the restricted eigenvalue of n^-1 [,]^⊤_J' [,]_J' is lower bounded by κ_c for any J' with |J'| ≤ m. Then we have ℙ( max_1 ≤ j ≤ p |Ŵ_j - W_j |≲κ_c^-1 m^3/2Δ_n (n^-1log p)^1/2 + m^1/2Δ_n n^-1/2) ≥ 1 - o(1). This result can be improved if we apply another version of condition (take advantage of the specific structure in the distance between approximate and perfect knockoff variables). Cite Lv and Fan's JASA paper on asymptotic equivalence.... We continue to verify Condition <ref>. 
let w_j = |β_j^0| for 1 ≤ j ≤ p. Then we can obtain the following result. Assume that ^init - ^0 _1 = o_p(1). Then we have ∑_j = 1^p ℙ( |W_j - w_j| > C √(log p/n)) → 0. Next we turn to verification of Condition <ref>. Suppose that the features X_j's and the errors e_j's are sub-Gaussian satisfying X_j _ψ_2≤ϕ and e_j _ψ_2≤ϕ for some constant ϕ > 0 and with probability 1 - O( p^-c), _j - _j _1 ≤ C n^-1/2 a_n,1,  [, ]_-j (_j - _j ) _2^2 ≤ C a_n,2 n^-1 [, ]^⊤_j _max≤ C,  ^init - _1 ≤ C s √(log p/n), where a_n,1 and a_n,2 are two possibly diverging sequences. Moreover, |σ^jk|/√(σ^jjσ^kk) < c for some constant 0< c < 1. If (log p)^1/γ + 1/2 [ n^1/2 b_n + n^-1/2 s log p ] → 0, then we have (<ref>) in Condition <ref> is satisfied. Here we assume both the features X_j and the errors e_j are sub-Gaussian for simpler presentation. In fact, if we assume X_j_ψ_2≤ϕ and sparsity of correlations between features, that is, _j_0 ≤ s = o(n). In addition, suppose the coefficients γ_j_max≤ c are bounded. Then the errors e_j is also sub-Gaussian with e_j _ψ_2≤ c s ϕ. (This results follows from the fact: If X_1, X_2 are random variables such that X_i is b_i-sub-Gaussian, then X_1 + X_2 is (b_1 + b_2)-sub-Gaussian). § KNOCKOFF VARIABLE COUPLING In this section, we present three specific constructions for the coupled perfect knockoff variables and verify that they satisfy Condition <ref> with the desired convergence rate. §.§ Knockoffs for multivariate t-distribution In this example, we will construct knockoffs for multivariate t-distributed features by leveraging only information of the first two moments; the knowledge of the t-distribution will not be utilized in the approximate knockoffs construction. Assume that the underlying true feature distribution for X = (X_1, …, X_p)^⊤ is the multivariate centered t-distribution t_ν (0, ^-1) with unknown parameters ν and ^-1. We construct the approximate knockoff variables from the Gaussian distribution with the attempt to match the first two moments of feature vector X. It is seen that the working distribution F̂ is misspecified. It has been a common practice to use the multivariate Gaussian distribution to construct knockoff variables in practice; see, e.g., <cit.>. Assume that there is an effective estimator constructed using data matrix for the precision matrix := [(X)]^-1 = ν - 2/ν. We construct the approximate knockoffs data matrix from the Gaussian distribution as = (I_p - r ) + (2 r I_p - r^2 )^1/2, where r is a constant such that 2 r I_p - r^2 is positive definite, and ∈ℝ^n × p is independent of (, ) and consists of i.i.d. standard normal entries. Before suggesting our coupled perfect knockoff variables, it is necessary to review some properties of the multivariate t-distribution. Note that an alternative representation of X is given by X = η/√(Q / ν), where ν > 0 is the degrees of freedom, ηd∼ N( 0, ^-1), Q d∼χ_ν^2, and η Q. Here, χ_ν^2 is the chi-square distribution with ν degrees of freedom. When ν is large, the distribution of X is close to the Gaussian distribution N( 0, ^-1). We are ready to introduce our construction of the coupled perfect knockoff variable matrix = (I_p - r ) + (1/√( /ν) ) (2 r I_p - r^2 )^1/2, where (1/√( /ν)) = ( 1/√(Q_1 / ν),1/√(Q_2 / ν), …, 1/√(Q_n / ν)) with {Q_i}_i = 1^n i.i.d. random variables sampled from the conditional distribution Q|X. Let η = (η_1, …, η_n) be sampled from the conditional distribution η | X, and r and the identical realizations to those used in (<ref>). 
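The following Python sketch illustrates this coupling for t-distributed features and is purely schematic. The AR(1) scale matrix, the sample sizes, the value of r, and the use of the inverse sample covariance as the working precision estimate are assumptions chosen only for illustration (the theory works with a sparse precision estimator); the true scale precision and the latent chi-square variables Q_i are available here only because the data are simulated, which makes drawing from the conditional distribution Q | X trivial.

import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)
n, p, nu, r = 400, 30, 12, 0.5

# Hypothetical scale matrix Sigma = Omega^{-1} with AR(1) structure.
Sigma = 0.3 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
Omega = np.linalg.inv(Sigma)

# Multivariate t_nu(0, Sigma) features via the representation X = eta / sqrt(Q / nu).
eta = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Q = rng.chisquare(nu, size=n)
X = eta / np.sqrt(Q / nu)[:, None]

# Approximate knockoffs from the misspecified Gaussian working distribution, built
# from an estimate of [Cov(X)]^{-1}; r must keep 2rI - r^2 Omega_hat positive definite.
Omega_hat = np.linalg.inv(np.cov(X, rowvar=False))
Z = rng.standard_normal((n, p))
C_hat = np.real(sqrtm(2 * r * np.eye(p) - r ** 2 * Omega_hat))
Xk_approx = X @ (np.eye(p) - r * Omega_hat) + Z @ C_hat

# Coupled perfect knockoffs: identical Z and r, but the true scale precision Omega
# and the multiplier 1 / sqrt(Q / nu) drawn from Q | X (here the simulated Q itself).
C_true = np.real(sqrtm(2 * r * np.eye(p) - r ** 2 * Omega))
Xk_perfect = X @ (np.eye(p) - r * Omega) + (1 / np.sqrt(Q / nu))[:, None] * (Z @ C_true)

# Column-wise coupling accuracy max_j n^{-1/2} || Xk_approx_j - Xk_perfect_j ||_2.
coupling_err = np.max(np.linalg.norm(Xk_approx - Xk_perfect, axis=0)) / np.sqrt(n)

Increasing the degrees of freedom (and improving the precision estimate) shrinks coupling_err, in line with the rate established in the proposition below.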
By construction, we can see that (, ) = (1/√( /ν)) (η, η(I_p - r ) + (2 r I_p - r^2 )^1/2) d=(1/√( /ν)) ( η, η ), where (η, η) have i.i.d. rows that follow a common Gaussian distribution N( 0, ^) with ^ = [ ^-1 ^-1 - r I_p; ^-1 - r I_p ^-1 ]. Thus, this verifies that forms a perfect knockoff data matrix for . The proposition below verifies that the coupling assumption in Condition <ref> holds. Assume that C_l≤^-1_2 ≤ C_u and (2 r I_p - r^2 )^-1_2≤ C_u for some constants C_u> 0 and C_l > 0. Assume further that and are both sparse in the sense that max_1 ≤ j ≤ p (_j _0 + _j _0 ) ≤ρ_n almost surely with ρ_n √(log p/n)→ 0 and ρ_n ν^-1/2→ 0, and that there exists a constant C > 0 such that ℙ ( - _2 ≥ C ρ_n (n^-1log p)^1/2) → 0. Then as ν≥ 9 and log p = o(n^1 - 4/ν), we have that for some constant C > 0, ℙ(max_1≤ j ≤ p n^-1_j - _j _2 ≤ C ( ρ_n (n^-1log p)^1/2 + ν^-1/2) ) → 1. The assumed convergence rate of ρ_n (n^-1log p)^1/2 for precision matrix estimation in (<ref>) has been verified in many existing works (e.g., <cit.>, <cit.>, and <cit.>) under the sparsity assumption. Proposition <ref> above indicates that the knockoffs procedure can potentially achieve the asymptotic FDR control even when the working distribution is misspecified but with the first two moments matched. We next compare our results to those in <cit.>. For simplicity, let us further assume that = I_p and is known. Then X d∼ t_ν ( 0, I_p) and the constructed approximate knockoff variables X̂d∼ N( 0, I_p). We set r = 1 in (<ref>) and (<ref>) when constructing the approximate and perfect knockoff matrices, hence the augmented covariance matrix in (<ref>) is given by ^ = I_2p. In such case, Proposition <ref> guarantees that ℙ( max_1 ≤ j ≤ p n^-1_j - _j _2 ≤ C ν^-1/2) → 1. This implies that Condition <ref> is satisfied with Δ_n = C ν^-1/2. Observe that X_j = Z_j/√(𝒳_ν^2 / ν) with Z_j d∼ N(0, 1) and the denominator satisfies that for an absolute constant C > 0 and ν≫log (np), ℙ( |𝒳_ν^2 / ν - 1 | ≥ C √(log (np )/ν)) = O( (np)^- C^2 / 8 ). These indicate that the multivariate t-distribution is asymptotically close to the standard Gaussian distribution when ν≫log (np). Thus, under Conditions <ref>–<ref> and <ref> for the setting of the linear model, if we construct the knockoff statistics as the regression coefficient difference from the debiased Lasso, we can prove using similar technical analysis as for Theorem <ref> that lim sup_n →∞≤ q, when ν^1/2≫ s (log p)^1 + 1/γ and s (log p)^3/2 + 1/γ/√(n)→ 0 for some 0 < γ < 1. <cit.> also derived an upper bound on the FDR inflation. Directly applying their result and calculating the KL divergence in their upper bound under the specific model setting stated above, we can obtain the lemma below. By applying Theorem 1 in <cit.>, it requires at least ν^2 ≫ n min(n, p) for lim sup_n →∞≤ q. The intuition behind Lemma <ref> above is that Theorem 1 in <cit.> requires the empirical KL divergence max_j∈ℋ_0K̂L̂_j converging to zero in probability, where K̂L̂_j = ∑_i = 1^n [ _i, j ^2/2 - ν + p/2log(1 + _i, j ^2/ν + _i, -j_2^2) - (_i, j^2/2 - ν + p/2log(1 + _i, j^2/ν + _i, -j_2^2) )]. Here, = (_i, j ) ∈ℝ^n × p consists of i.i.d. rows sampled from t_ν ( 0, I_p), while = (_i, j)∈ℝ^n × p consists of i.i.d. rows sampled from N( 0, I_p). As shown in the proof of Lemma <ref> in Section <ref> of the Supplementary Material, K̂L̂_j is a sum of i.i.d. random variables with positive mean of order O(p/ν (ν + p)). 
Hence, K̂L̂_j is concentrated at O(n p/ν (ν + p)) and to ensure that K̂L̂_j d→ 0, we need at least np/ν (ν + p)→ 0, or equivalently, ν^2 ≫ n min(n, p). Such condition is stronger than our requirement ν^1/2≫ s (log p)^1 + 1/γ derived from the coupling technique when s = o(√(n)) and p ≥ n. §.§ Gaussian knockoffs We now study the commonly used example of Gaussian knockoffs with the correctly specified distribution family. Assume that feature vector X = (X_1, …, X_p)^⊤d∼ N(0, ^-1) with unknown precision matrix , and we have an effective estimator for the precision matrix . Then the approximate knockoff variable matrix can be constructed as = (I_p - r ) + (2 r I_p - r^2 )^1/2, where r > 0 is some constant such that 2 r I_p - r^2 is positive definite, and = (_i, j) ∈ℝ^n × p is independent of (, ) with independent entries Z_i,jd∼ N(0, 1). Note that the approximate knockoff variable matrix above uses the correctly specified distribution family for (i.e., the Gaussian distribution). We couple the approximate knockoff variable matrix with the perfect knockoff variable matrix = (I_p - r ) + (2 r I_p - r^2 )^1/2, where importantly, and r are exactly the same as those used in constructing . We present the result below regarding the accuracy of the approximate knockoff variables. Assume that C_l≤^-1_2 ≤ C_u and (2 r I_p - r^2 )^-1_2 ≤ C_u for some constants C_u> 0 and C_l > 0. Assume further that precision matrix and its estimator are both sparse in the sense that max_1 ≤ j ≤ p (_j _0 + _j _0) ≤ρ_n almost surely with ρ_n √(log p/n)→ 0, and that there exists a constant C > 0 such that ℙ( - _2 ≥ C ρ_n √(log p/n)) → 0. Then we have that for some constant C > 0, ℙ(max_1≤ j ≤ p n^-1/2_j - _j _2 ≤ C ρ_n √(log p/n))→ 1. Proposition <ref> above implies that Condition <ref> is satisfied with coupling accuracy Δ_n = C ρ_n √(log p/n), where ρ_n represents the sparsity level of ^-1 and its estimator. We again consider the linear model setting and construct the knockoff statistics as the regression coefficient difference from the debiased Lasso. Then it follows from Theorem <ref> that under Conditions <ref>–<ref> and <ref>, we have lim sup_n →∞≤ q provided that s ρ_n (log p )^3/2 + 1/γ = o(√(n)) for some 0 < γ < 1. Our technical analyses do not require data splitting or an independent pretraining sample. The results in <cit.> require an independent unlabeled pretraining data set with sample size N to estimate the unknown precision matrix. Specific to the model setting considered in this section, their results indicate that lim sup_n →∞≤ q when N≫ n ρ_n (log p )^2. This again shows some potential advantage of our coupling technique in the robustness analyses. §.§ Nonparanormal knockoffs We further investigate a much more general distribution family, that is, the Gaussian copula distributions. Assume that X = (X_1, …, X_p)^⊤ has marginal distributions X_j d∼ F_j(·) and satisfies that (Φ^-1 (F_1(X_1)), …, Φ^-1(F_p(X_p)))^⊤d∼ N( 0, ^-1), where the diagonal entries of ^-1 are all one. Further assume that we have effective estimators F̂_j for F_j and for . Define = (_i, j) ∈ℝ^n× p with _i,j = Φ^-1 (F̂_j (_i, j) ) and = (_i, j) ∈ℝ^n× p with _i,j = Φ^-1 (F_j (_i, j) ). Let = (_i, j) ∈ℝ^n × p be given by = (I_p - r ) + (2 r I_p - r^2 )^1/2, where r > 0 is some constant such that 2 r I_p - r^2 is positive definite, and = (_i, j) ∈ℝ^n × p is independent of (, ) with i.i.d. entries Z_i,jd∼ N(0, 1). We construct the approximate knockoff variable matrix as = (_i, j) with _i, j = F̂_j^-1 (Φ(_i, j)). 
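As an illustration of this construction, a minimal Python sketch of the approximate nonparanormal knockoffs is given below. The clipped empirical distribution functions, the empirical quantiles playing the role of the inverse marginals, the inverse sample covariance of the normal scores used as the estimated precision, and the default value r = 0.5 (which must keep 2rI_p minus r^2 times the estimated precision positive definite) are all illustrative assumptions rather than the estimators analyzed in the theory.

import numpy as np
from scipy.stats import norm
from scipy.linalg import sqrtm

def ecdf_clipped(x):
    # Empirical CDF values kept inside [1/(2n), 1 - 1/(2n)], as assumed above.
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1
    return np.clip(ranks / n, 1 / (2 * n), 1 - 1 / (2 * n))

def nonparanormal_knockoffs(X, r=0.5, rng=None):
    # Sketch: normal scores -> Gaussian-style knockoffs -> marginal back-transform.
    rng = rng or np.random.default_rng()
    n, p = X.shape
    V = norm.ppf(np.column_stack([ecdf_clipped(X[:, j]) for j in range(p)]))
    Omega_hat = np.linalg.inv(np.cov(V, rowvar=False))        # requires n considerably larger than p
    Z = rng.standard_normal((n, p))
    Vk = (V @ (np.eye(p) - r * Omega_hat)
          + Z @ np.real(sqrtm(2 * r * np.eye(p) - r ** 2 * Omega_hat)))
    Xk = np.empty_like(X, dtype=float)
    for j in range(p):
        # Estimated inverse marginal applied to Phi(Vk_{ij}), via the empirical quantile function.
        Xk[:, j] = np.quantile(X[:, j], norm.cdf(Vk[:, j]))
    return Xk

The coupled perfect knockoffs defined next differ only in that the true marginals and the true precision matrix replace their estimates, with the same Z and r reused.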
It is seen that this example also uses the correctly specified distribution family for X, i.e., the Gaussian copula. We suggest to construct the coupled perfect knockoff variable matrix as = (_ij) with _i,j = F_j^-1 (Φ(_i,j)), where _i,j represents the (i,j)th entry of matrix = (I_p - r ) + (2 r I_p - r^2 )^1/2 with and r identical in values to the ones used in (<ref>). The proposition below characterizes the coupling rate between and . Assume that (<ref>) is satisfied and both and are sparse in the sense that max_1≤ j ≤ p (_j _0 +_j _0) ≤ρ_n with p ρ_n = o(n /(log n)^3) almost surely. Assume further that for 1 ≤ j ≤ p, the distribution estimators satisfy 1/2n≤F̂_j (x) ≤ 1 - 1/2n for each x ∈(X_j), (X_j) ⊂ [-b, b] for some constant b >0, and there exists a constant M > 0 such that ℙ( max_1 ≤ j ≤ psup_ x ∈ [ 2M n^-1log n, 1 - 2M n^-1log n] | F̂_j^-1 (x) - F_j^-1 (x) | ≥ (M n^-1 log n )^1/2) → 0, ℙ( max_1 ≤ j ≤ psup_x ∈ (F_j^-1(2M n^-1log n), F_j^-1 (1 - 2M n^-1log n) ) | F̂_j(x) - F_j(x) | / F_j(x) [1 - F_j(x)] ≥ (M n^-1 log n )^1/2) → 0, ℙ( max_1 ≤ j ≤ psup_ x, y ∈ (0, 1) | F̂_j^-1 (x) - F̂_j^-1 (y) | / |x - y| + ( n^-1 (log n) |x - y| )^1/2 + n^-1log n ≥ M ) → 0. Then we have ℙ(max_1≤ j ≤ p n^-1_j - _j _2 ≤ C ( ρ_n √(log p/n) + √( p ρ_n (log n)^3/n)) ) → 1. When estimators {F̂_j}_j = 1^p are the empirical distribution functions and p = o(n), it can be shown that (<ref>), (<ref>), and (<ref>) can be satisfied when the density function f_X_j is uniformly bounded on the support. See, e.g., <cit.> for the estimation of nonparanormal distributions, and we opt not to discuss it here due to the space constraint. We also remark that the bounded support assumption of (X_j) ⊂ [-b, b] is to simplify the technical proofs and may be removed by applying the truncation technique and letting b slowly diverge with n. Since such technical relaxation is not the main focus of the current paper, we choose not to explore it here. § DISCUSSIONS We have investigated in this paper the robustness of the model-X knockoffs framework introduced in <cit.> by characterizing the feature selection performance of the approximate knockoffs (ARK) procedure, a popularly implemented version of the model-X knockoffs framework in practice. The approximate knockoffs procedure differs from the model-X knockoffs procedure in that it uses the misspecified or estimated feature distribution to generate the knockoff variables without the use of sample splitting. We have proved formally that the approximate knockoffs procedure can achieve the asymptotic FDR and FWER control as the sample size diverges in the high-dimensional setting. A key idea empowering our technical analysis is coupling, where we pair statistics in the approximate knockoffs procedure with those in the model-X knockoffs procedure so that they are close in realizations with high probability. The knockoff variable coupling has been investigated under some specific distribution assumptions in the current work. An interesting future study is to investigate the coupling idea under broader class of or even general feature distributions. chicago Supplementary Material to “ARK: Robust Knockoffs Inference with Coupling” Yingying Fan, Lan Gao and Jinchi Lv This Supplementary Material contains the proofs of Theorems <ref>–<ref>, Propositions <ref>–<ref>, and some key technical lemmas. All the notation is the same as defined in the main body of the paper. 
§ PROOFS OF THEOREMS <REF>–<REF> AND PROPOSITIONS <REF>–<REF> §.§ Proof of Theorem <ref> It has been shown in <cit.> that the model-X knockoffs inference procedure achieves the exact FDR control when the perfect knockoff statistics are employed. Note that the approximate knockoff statistics {Ŵ_j } are expected to provide a reliable approximation to the perfect knockoff statistics {W_j }, as assumed in Condition <ref>. The main idea of the proof is to establish the FDR control for the approximate knockoffs inference procedure through a comparison of the approximate knockoff statistics and a certain realization of the perfect knockoff statistics. The two lemmas below provide a sketch of the proof and can be established under Conditions <ref>–<ref>. Assume that Conditions <ref>, <ref>, and <ref> are satisfied. When a_n →∞ and m_n / a_n → 0, we have that sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ] | ∑_j ∈ℋ_01 (Ŵ_j ≥ t) /∑_j ∈ℋ_0ℙ ( W_j ≥ t ) - 1 | = o_p(1), sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ] | ∑_j ∈ℋ_01 (Ŵ_j ≤ -t) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) - 1 | = o_p(1). sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ) | ∑_j ∈ℋ_01 (Ŵ_j ≥ t) /∑_j ∈ℋ_0ℙ ( W_j ≥ t ) - 1 | = o_p(1), sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ) | ∑_j ∈ℋ_01 (Ŵ_j ≤ - t) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) - 1 | = o_p(1) . Under Conditions <ref>–<ref>, we have that for some constant 0 < c_1 < 1, ℙ ( T ≤ G^-1 ( c_1 q a_n / p ) ) → 1. Under Condition <ref>, we have sup_t ∈ (0, M_n, p) ∑_j ∈ H_01 (t - b_n ≤W_j ≤ t + b_n) /∑_j ∈ H_0ℙ ( W_j ≥ t ) = o_p(1) We present the proofs of Lemmas <ref> and <ref> in Sections <ref> and <ref>, respectively. Now we are ready to prove Theorem <ref>. Let us define two events ℬ_1 = {T ≤ G^-1 ( c_1 q a_n/p ) } and ℬ_2, ϵ = {sup_t ∈ (0, G^-1 ( c_1 q a_n/p )](| ∑_j ∈ℋ_01 (Ŵ_j ≥ t )/∑_j ∈ℋ_0ℙ (W_j ≥ t ) - 1 | | ∑_j ∈ℋ_01 (Ŵ_j ≤ - t )/∑_j ∈ℋ_0ℙ (W_j ≤ - t ) - 1 | )≤ϵ} for ϵ > 0. Lemmas <ref> and <ref> above have shown that ℙ(ℬ_1^c ) → 0 and ℙ(ℬ_2, ϵ^c ) → 0 for each ϵ > 0. In addition, it holds naturally that 0 ≤≤ 1. Then it follows that ≤( ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) ) + ℙ(ℬ_1^c ) + ℙ(ℬ_2, ϵ^c ) = ( ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) ) + o(1) . In view of the definition of threshold T in (<ref>), we can deduce that ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) = ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ -T) ·∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1(ℬ_2, ϵ) ≤ q ·∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T ) ·1 (ℬ_1)1(ℬ_2, ϵ). Furthermore, it is easy to see that on event ℬ_1 ∩ℬ_2, ϵ, we have ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T ) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n/p )]∑_j ∈ℋ_01 ( Ŵ_j ≥ t ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - t ) ≤1 + ϵ/1 - ϵsup_t ∈ (0, G^-1 ( c_1 q a_n/p )]∑_j ∈ℋ_0ℙ ( W_j ≥ t ) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) = 1 + ϵ/1 - ϵ, where the last equation above is obtained by the symmetry of the perfect knockoff statistics {W_j }_j ∈ℋ_0 that ℙ ( W_j ≥ t ) = ℙ ( W_j ≤ - t ). Therefore, we can obtain that for each ϵ > 0, ≤ q ·1 + ϵ/1 - ϵ + o(1), which yields the desired result (<ref>). This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> We first define the corresponding threshold T_v for the perfect knockoff statistics {W_j}_j=1^p in the model-X knockoffs inference for the k-FWER control as T_v = sup{ t ∈𝒲: #{j: - W_j ≥ t} = v }, where v is defined as in (<ref>) and 𝒲 = { | W_1 |, …, | W_p | }. 
As sketched in Lemmas <ref>–<ref> below, the main idea of the proof is to show that the threshold T_v based on the approximate knockoff statistics and the threshold T_v based on the perfect knockoff statistics are sufficiently close under Condition <ref> such that for any > 0, the number of W_j's that lie between T_v and T_v is at most v with asymptotic probability one, where v satisfies v / k → 1 as k →∞. Specifically, let M_v be the integer such that T_v + M_v≥T_v - 2 b_n > T_v + M_v +1. Then we can establish a bound for M_v as shown in Lemma <ref> below. We first present the three lemmas below that provide an outline of the proof. The proofs of Lemmas <ref>–<ref> are provided in Sections <ref>–<ref>. Under Condition <ref>, we have that ℙ ( | T_v - T_v | ≥ b_n) → 0. Assume that k →∞. Then we have that v / k = 1 + O ( k^ - 1 /2 ). Under all the conditions of Theorem <ref>, we have that sup_t ∈ (0, G^-1 (v/2p) )| ∑_j ∈ℋ_01 ( W_j ≥ t ) /∑_j ∈ℋ_0 ℙ ( W_j ≥ t ) - 1 | = o_p (ϵ_n) Under all the conditions of Theorem <ref>, we have that for each > 0, ℙ ( M_v ≤ v ) → 1. We are now ready to prove Theorem <ref>. It follows straightforwardly from Lemma <ref> that ℙ ( V̂≥ k(1 + 2 ) ) = ℙ( ∑_j ∈ℋ_01 ( Ŵ_j ≥T_v ) ≥ k(1 + 2) ) ≤ℙ( ∑_j ∈ℋ_01 ( W_j ≥T_v - 2 b_n ) ≥ k(1 + 2) ) ≤ℙ( ∑_ j ∈ℋ_01 ( W_j ≥T_v + M_v ) ≥ k (1 + 2) ) = ℙ( ∑_ j ∈ℋ_01 ( - W_j ≥T_v + M_v ) ≥ k (1 + 2) ) ≤ℙ( ∑_j ∈ℋ_0 1 ( - W_j ≥T_v) ≥ k(1 + 2 ) - M_v ), where the second last step above is because of the symmetry of W_j's with j∈ℋ_0 and the last step above is due to ∑_ j ∈ℋ_01 ( - W_j ≥T_v + M_v )-∑_ j ∈ℋ_01 ( - W_j ≥T_v )≤ M_v by the definitions of T_v and M_v. Moreover, Lemma <ref> above shows that M_v ≤ v with asymptotic probability one and Lemma <ref> above proves that v / k = 1 + o(1). Then it holds that 2 k > M_v with asymptotic probability one. Hence, combining the above results and by the union bound, we can deduce that ℙ ( V̂≥ k(1 + 2 ) ) ≤ℙ( ∑_j ∈ℋ_01( - W_j ≥T_v ) ≥ k ) + o(1) = q + o(1). Consequently, it follows that for each > 0, lim sup_n →∞ℙ ( V̂≥ k(1 + 2 ) ) ≤ q . This concludes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The main idea of the proof is to directly apply Theorem <ref> by verifying Conditions <ref>–<ref> involved. We will show in the lemmas below that Conditions <ref>–<ref> are satisfied for the marginal correlation knockoff statistics under Conditions <ref>–<ref> and the setting of nonparametric regression model (<ref>) with normal features. Proofs of Lemmas <ref>–<ref> are presented in Sections <ref>–<ref>. Assume that Condition <ref> is satisfied. Then we have that ℙ( max_1 ≤ j≤ p | Ŵ_j - W_j | ≥Δ_n ) → 0. Lemma <ref> above shows that Condition <ref> is satisfied with sequences b_n := Δ_n. Define w_j = ( Y^2)^-1/2 ( | (X_j Y)| - |(X_j Y)| ) for 1 ≤ j ≤ p. Note that w_j = 0 for j ∈ℋ_0 since (X_j, X_ℋ_1) d= (X_j, X_ℋ_1) for j ∈ℋ_0 by the exchangeability between X_j and X_j. Recall from the definition in (<ref>) that δ_n = √(log p/n)max_1 ≤ j ≤ p{ 16 √(2) X_j _ψ_2 Y _ψ_2/ ( Y^2)^1/2 8√(2) |w_j| Y _ψ_2^2 / Y^2 }. We have the concentration inequality below for W_j under the sub-Gaussian assumption in Condition <ref>. Assume that Condition <ref> is satisfied. When log p = o(n), we have that ∑_j = 1^p ℙ ( |W_j - w_j | ≥δ_n ) ≤ 6 p^-1 + p exp{ - n ( Y^2)^2 /8 Y^4 }. 
Lemma <ref> above indicates that Condition <ref> related to the concentration rate of W_j is satisfied with δ_n defined in (<ref>) and that Δ_n ≤δ_n, where Δ_n is the approximation accuracy of the approximate knockoff statistics obtained in Lemma <ref>. In addition, from the definition of w_j, under Condition <ref> we have that the general Condition <ref> on the signal strength is also satisfied. Next we will turn to the verification of Conditions <ref>–<ref>. Assume that Condition <ref> is satisfied. Then we have that for each t ≥ 0, ( ∑_j ∈ℋ_01 (W_j ≥ t) ) / p_0 G (t) ≤ 2 m_n. Assume that Conditions <ref> and <ref> are satisfied. Then when (log p)^1/γ m_n / a_n → 0 and √(n)Δ_n (log p)^1/2 + 1/γ→ 0 for some constant 0< γ < 1, we have that (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - Δ_n ) - G(t + Δ_n) / G(t) → 0 and a_n^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + Δ_n ) → 0 as n →∞. Lemma <ref> above shows that Condition <ref> is satisfied, while Lemma <ref> above implies that Condition <ref> is satisfied. Finally, the conclusion of Theorem <ref> can be obtained by directly applying the general Theorem <ref>. This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The proof of Theorem <ref> is analogous to that of Theorem <ref> in Section <ref>. We omit the detailed proof here to avoid redundancy. §.§ Proof of Theorem <ref> The main idea of the proof is to directly apply Theorem <ref> by verifying Conditions <ref>–<ref> for the knockoff statistics constructed from the debiased Lasso coefficients. A key observation is that the debiased Lasso coefficients are asymptotically normal. Denote by τ_j = _j _2 / |_j^⊤^_j|. The debiased Lasso coefficient can be written as √(n) ( β_j - β_j^ ) = _j^⊤/_j _2 ·√(n)τ_j + ∑_k ≠ j√(n)_j^⊤^_k (β_k^ - β_k^)/_j^⊤^_j . Observe that _j^⊤/_j _2 ∼ N(0, σ^2), √(n)τ_j =O_p(1), and the remainder term in (<ref>) above is of order o_p(1). Thus, the debiased Lasso estimator is asymptotically normal in the sense that τ_j^-1 (β_j - β_j^) d→ N(0, σ^2). Our proof will build mainly on such intuition. Throughout the proof below, constant C may take different values from line to line. We first present two lemmas below about the consistency of Lasso estimators ^ and _j. We omit the proofs of Lemmas <ref> and <ref> here to avoid redundancy since they are well-known results for the consistency of Lasso estimators in the literature. Under Conditions <ref>–<ref>, we have that with probability 1 - O(p^-3), ^ - ^_1 ≤ C s √(log p/n), ^ - ^_2 ≤ C √( slog p/n) ^ (^ - ^ ) _2 ≤ C √( slog p) . Under Conditions <ref>–<ref>, we have that with probability 1 - O(p^-3), max_1 ≤ j ≤ 2p _j - _j _1 ≤ C m_n √(log p/n) , max_1 ≤ j ≤ 2p _j - _j _2 ≤ C √( m_nlog p/n), max_1 ≤ j ≤ 2p _-j^ (_j - _j) _2 ≤ C √( m_n log p). In addition, when m_n log p/n→ 0 we have that with probability 1 - O(p^-3), |√(n)τ_j - ( e_j^2)^-1/2 | ≤ C √(m_n log p/n), |_j^⊤_l - (e_j, e_l) | ≤ C √(m_n log p/n). The four lemmas below outline the proof for verifying the general Conditions <ref>–<ref>. Proofs of Lemma <ref>–<ref> are provided in Sections <ref>–<ref>. Assume that Conditions <ref> and <ref>–<ref> are satisfied. Then as Δ_n s^1/2→ 0 and √(s log p/n)→ 0, we have that ℙ( max_1 ≤ j ≤ 2p | β_j - β̂_j | ≥ C Δ_n s √(log p/n)) → 0. Lemma <ref> above indicates that Condition <ref> is satisfied with sequences b_n := C Δ_n s √(log p/n). Let us define w_j = |β_j|. Assume that Conditions <ref>–<ref> are satisfied. 
Then as s √(m_n log p/n)→ 0, we have that for some C > 0, ∑_j= 1^p ℙ (|W_j - w_j | ≥ C √(n^-1log p) ) → 0. Lemma <ref> above shows that Condition <ref> related to the concentration rate of W_j is satisfied with δ_n = C √(n^-1log p). In addition, it holds that b_n ≪ C √(n^-1log p) due to the assumption Δ_n s → 0 in Theorem <ref>. In addition, in light of the definition of w_j, under Condition <ref> we have that the general Condition <ref> on the signal strength is also satisfied. We next turn to the verification of Conditions <ref>–<ref>. Assume that Conditions <ref>–<ref> are satisfied. Then as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, we have that ( ∑_j ∈ℋ_01 (W_j > t) ) ≤ V_1 (t) + V_2 (t), where for some 0 < γ < 1 and 0< c_1 < 1, (log p )^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_1 (t) / [p_0 G (t)]^2 → 0 and sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_2(t) / p_0 G (t) ≲ m_n. Assume that Conditions <ref> and <ref>–<ref> are satisfied. Then when m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0 and Δ_n s (log p)^1 + 1 /γ→ 0, we have that (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - b_n ) - G(t + b_n) / G(t) → 0 and a_n^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + b_n ) → 0 as n →∞. Lemma <ref> above shows that Condition <ref> is satisfied, whereas Lemma <ref> implies that Condition <ref> is satisfied. Finally, the conclusion of Theorem <ref> can be derived by directly applying the general Theorem <ref>. This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The proof of Theorem <ref> is similar to that of Theorem <ref> in Section <ref>. Hence we omit the detailed proof here to avoid redundancy. §.§ Proof of Proposition <ref> From the definitions in (<ref>) and (<ref>), we see that - = r + + ( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) , where = -, =(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2, and = (2 r I_p - r^2 )^1/2. In view of assumption (<ref>) and the fact that := [(X)]^-1 = ν - 2/ν, it follows from the triangle inequality that with probability 1 - o(1), - _2 ≤ - _2 + - _2 = - _2 + 2 ν^-1_2 ≤ C ρ_n √(log p/n) + 2 ν^-1 C_l^-1. Now we deal with the three terms on the right-hand side of (<ref>) above separately. First, for the second term above, an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≤ 3 _2^2 /2 ≤ C - _2^2 ≤ C ( ρ_n^2 log p/n + ν^-2). Regarding the first term on the right-hand side of (<ref>) above, observe that (_i, j , _i, l ) d= (η_i, j/√(Q_i / ν), η_i, l/√(Q_i / ν)), where (η_i, 1, …, η_i, p) d∼ N( 0, ^-1) and {Q_i}_i = 1^n are independent and identically distributed (i.i.d.) chi-square random variables with ν degrees of freedom. It holds that for some large constant C_1 > 0, ℙ( n^-1 ^⊤ - ^-1_max≥ C_1 √(log p/n) + ν^-1/2) = ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n η_i, jη_i, l/Q_i / ν - (η_i, jη_i, l) ( ν/Q_i ) | ≥ C_1 √(log p/n)+ ν^-1/2) ≤ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n)) +ℙ(max_1 ≤ j, l ≤ p|n^-1∑_i = 1^n (η_i, jη_i, l) ( ν/Q_i - ( ν/Q_i ) )| ≥ν^-1/2). Before showing the bounds for the two probabilities on the right-hand side of the expression above, we first present some basic results for chi-square random variables. Note that from the property of the chi-square distribution, we have through some immediate calculations that ( ν^2/Q_i^2) = ν^2/(ν - 2) (ν - 4), ( ν/Q_i ) = ν^2 /(ν - 2)(ν - 4) - (ν/ν - 2)^2 = O(ν^-1), ( ν^2/Q_i^2 ) = ν^4/(ν - 2)(ν - 4)(ν - 6)(ν - 8) - (ν^2 /(ν - 2)(ν - 4))^2 = O (ν^-1). 
Thus, noting that ( ν^2/Q_i^2) + ν^-1/2 = ν^2/(ν - 2) (ν - 4) + ν^-1/2≤ 3 and ( ν^2/Q_i^2) - ν^-1/2≥ 2/3 when ν≥ 9, an application of the Markov inequality leads to ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≥ 3 ) + ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≤ 2/3 ) ≤ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≥( ν^2/Q_i^2) + ν^-1/2) + ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≤( ν^2/Q_i^2) - ν^-1/2) ≤ν n^-1( ν^2/Q_i^2 ) = O(n^-1 ) → 0. In addition, noting that e^-x/2 ≤ 1 and Stirling's formula for the gamma function Γ(x) = √(2 π / x) (x/ e)^x (1 + O(x^-1)) for x ≥ 0, we have through applying the density function of the chi-square distribution that for each constant C > 0, ℙ( max_1 ≤ i ≤ n ν/Q_i≥ C √(n/log p)) ≤ n ∫_0^C^-1ν√(log p/n)x^ν / 2 - 1 e^- x / 2/ 2^ν / 2Γ(ν / 2) dx ≤ 2 n (C^-1ν√(log p/n))^ν / 2/ν 2^ν / 2Γ(ν / 2) ≲ n ( C^-2log p/n)^ν / 4ν ^ν /2 /ν 2^ν /2√(4 π / ν) (ν / 2 e)^ν / 2 = ( C^-2 e^2 log p/n^1 - 4/ ν)^ν / 4 1/√(4 πν)→ 0 when log p = o(n^1 - 4/ν). Now we are ready to deal with the two probabilities on the right-hand side of (<ref>) above. Let us define two events 𝒟_1 = {max_1 ≤ i ≤ nν/Q_i≤ C_2 √(n/log p)} for a small constant C_2 > 0 and 𝒟_2 = {2/3 ≤ n^-1∑_i = 1^n ν^2/Q_i^2≤ 3 }. It follows from (<ref>) and (<ref>) that ℙ (𝒟_1^c) → 0 and ℙ (𝒟_2^c) → 0. For the first probability in (<ref>) above, since η_i, jη_i, l is a sub-exponential random variable and Q_i η_i, jη_i, l, we can obtain by applying the concentration inequality for the weighted sum of sub-exponential random variables (cf. Corollary 4.2 in <cit.>) that when C_1 is large enough and C_2 is small enough, ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n)) ≤ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n) , 𝒟_1 ∩𝒟_2 ) + ℙ (𝒟_1^c) + ℙ (𝒟_2^c) ≤ 2 p^2 exp{ - 3 log p } + o(1) → 0. Regarding the second probability in (<ref>), since max_1 ≤ j, l ≤ p | (η_i, jη_i, l) | ≤max_1 ≤ j ≤ p(η_i, j^2) ≤max_1 ≤ j ≤ p (^-1)_j, j≤ C_u, an application of the Markov inequality and (<ref>) yields that ℙ(max_1 ≤ j, l ≤ p|n^-1∑_i = 1^n (η_i, jη_i, l) ( ν/Q_i - ( ν/Q_i ) )| ≥ν^-1/2) ≤ℙ( |n^-1∑_i ( ν/Q_i - ( ν/Q_i ) )| ≥ C_u^-1ν^-1/2) ≤ C_u^-2ν n^-1 (ν/Q_i) = O(n^-1) → 0. By plugging (<ref>) and (<ref>) into (<ref>), we can show that with probability 1 - o(1), max_δ: δ_0 ≤ρ_n |δ^⊤ ( n^-1^⊤ - ^-1 ) δ |/δ_2^2 ≤ C ρ_n ( √(log p/n) + ν^-1/2) , which along with the fact ^-1_2 = ν/ν - 2^-1_2 ≤ν/ν - 2 C_u entails that as ρ_n = o(√(n / (log p))) and ρ_n = o(√(ν)), max_δ: δ_0 ≤ρ_nδ^⊤^⊤δ/ n δ_2^2 ≤C for some constant C > 0. Using (<ref>) and the sparsity assumption that max_1 ≤ j ≤ p_j_0 + _n_0 ≤ρ_n, an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1_j _2^2 = n^-1_j^⊤ ^⊤_j ≤ C max_1 ≤ j ≤ p_j ^2_2 = C - _2^2 ≤ C( ρ_n^2 log p/n + ν^-2). We now proceed with examining the third term on the right-hand side of (<ref>) above. Observe that _j d∼ N( 0, _j _2^2 I_n) and max_1 ≤ j ≤ p_j _2 ≤_2 ≤ 2r. Hence, it holds for some large constant C_3 > 0 that ℙ( max_1 ≤ j ≤ p n^-1( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) _j _2^2 ≥ C_3 ν^-1) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 _j ^2 Z_i^2 ≥ C_3 ν^-1) ≤ℙ( n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 Z_i^2 ≥ C_3 ν^-1 / 4r^2 ) , where {Z_i }_i = 1^n are i.i.d. standard normal random variables that are independent of and {Q_i }_i = 1^n. 
Similar to the calculations in (<ref>) and (<ref>), we can deduce that [ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] = (Z_i^2) [ (1 - 1/√(Q_i / ν))^2 ] = 1 - ( 2/√(Q_i / ν)) + ( 1/ Q_i / ν) = 1 - √(2 ν)Γ(ν - 1/2)/Γ(ν/2) + ν/ν - 2 and [ (1 - 1/√(Q_i / ν))^4 Z_i^4 ] = 3 ( 1 - 2 √(2)νΓ(ν - 1/2)/Γ(ν/2) + 6 (ν - 2)/ν - √(2)ν^3/2Γ(ν - 3/2)/Γ(ν/2) + ν^2/(ν - 2) (ν - 4)) By applying the asymptotic series of the gamma function Γ(x + 1/2 )/Γ(x) = √(x)(1 - 1/8 x + O(x^-2) ), we can obtain through some direct calculations that [ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] = O(ν^-1) and [ (1 - 1/√(Q_i / ν))^4 Z_i^4 ] = O(ν^-2). Combining (<ref>) and (<ref>) and applying the Markov inequality, we have that for some large enough constant C_3 > 0, ℙ( max_1 ≤ j ≤ p n^-1( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) _j _2^2 ≥ C_3 ν^-1) ≤ℙ( n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 Z_i^2 -[ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] ≥ C_3 (ν^-1 ) / 4r^2 - O(ν^-1) ) ≤ C ν^-2 n^-1( (1 - 1/√(Q_i / ν))^2 Z_i^2 ) ≤ C ν^-2 n^-1( ( (1 - 1/√(Q_i / ν))^4 Z_i^4 ) ) = O (n^-1) → 0. Therefore, a combination of (<ref>), (<ref>), (<ref>), and (<ref>) yields the desired conclusion in (<ref>). This concludes the proof of Proposition <ref>. §.§ Proof of Proposition <ref> It follows from (<ref>) and (<ref>) that - = r + , where = - and =(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2. By the Gaussianity of X, we see that X_j X_l is a sub-exponential random variable and thus for 0< u < C, ℙ ( | n^-1_j^⊤_l - (X_j X_l) | ≥ u ) ≤ 2 exp{ - C n u^2 }. Then we can obtain that ℙ( max_1 ≤ j ≤ p, 1 ≤ l ≤ p | n^-1_j_l - (X_j X_l) | ≥ C √(log p/n)) = o(1). Consequently, with probability 1 - o(1) it holds that max_δ: δ_0 ≤ρ_n |δ^⊤ ( n^-1^⊤ - ^-1 ) δ |/δ_2^2 ≤ C ρ_n √(log p/n), which combined with the assumption that ^-1_2 ≤ C_u leads to max_δ: δ_0 ≤ρ_nδ^⊤^⊤δ/ n δ_2^2 ≤ C_u + C ρ_n √(log p/n)≤C for some constant C > 0. Since _j _0 = ( - )_j_0 ≤ C ρ_n because of the sparsity of and , it follows from (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j_2^2 = max_1 ≤ j ≤ p n^-1_j_2^2 ≤max_1 ≤ j ≤ pC_j _2^2 = max_1 ≤ j ≤ pC (- )_j _2^2 ≤max_1 ≤ j ≤ pC- _2^2 ≤Cρ_n^2 log p/n, where we have used the accuracy assumption in (<ref>). Next we proceed with analyzing term . Observe that given , has i.i.d. standard normal components and is independent of , and hence _j|_j d∼ N( 0, _j_2^2 I_n). It holds that _j|_j d= (Z_1 _j_2, …, Z_n _j _2) with { Z_i }_i = 1^n i.i.d. standard normal random variables. Then we can deduce that ℙ ( max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≥ 3_2^2 /2 | ) = ℙ ( max_1 ≤ j ≤ n^-1_j _2^2 ≥ 3_2^2 /2 |) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n Z_i^2 _j _2^2 ≥ 3 _2^2 /2 |) ≤ℙ( n^-1∑_i = 1^n Z_i^2 _2^2 ≥ 2 _2^2 |) = ℙ( n^-1∑_i = 1^n Z_i^2 ≥ 3/2 ) ≤ e^- n / 32→ 0 as n→∞, where we have used the fact that max_1 ≤ j ≤ p_j _2 ≤_2 and the concentration inequality for chi-square random variables that for 0 < t < 1, ℙ( | n^-1∑_i = 1^n Z_i^2 - 1 | ≥ t ) ≤ 2 e^- n t^2 / 8. Now we aim to bound _2. For two square matrices A and B, it holds that A^1/2 - B^1/2_2 = A^1/2 (B - A) B^-1 + (A^3/2 - B^3/2) B^-1_2 ≤A^1/2 (B - A) B^-1_2 +3 max{ A _2^1/2, B _2^1/2}A - B _2B^-1_2. Applying the above inequality to leads to _2 ≤ 2 r I_p - r^2 _2^1/2· r^2 - _2 · 2 r I_p - r^2 ^-1 + 3 max{ 2 r I_p - r^2 _2^1/2, 2 r I_p - r^2 _2^1/2}· r^2 - _2· 2 r I_p - r^2 ^-1 ≤ C - _2. Thus, from (<ref>) and assumption (<ref>), we can obtain that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≤ 3 _2^2 /2 ≤ C - _2^2 ≤ C ρ_n^2 log p/n. Note that _j - _j _2 ≤ r _j _2 + _j _2. 
Therefore, in view of (<ref>) and (<ref>) we can show that for some constant C > 0, ℙ(n^-1/2_j - _j _2 ≤ C ρ_n √(log p/n)) → 1. This completes the proof of Proposition <ref>. §.§ Proof of Proposition <ref> In light of the definitions of and , we can obtain through the triangle inequality that n^-1/2max_1 ≤ j ≤ p_j - _j _2 ≤max_1 ≤ j ≤ p n^-1/2( ∑_i = 1^n [F̂_j^-1(Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 )^1/2 + max_1 ≤ j ≤ p n^-1/2( ∑_i = 1^n [F̂_j^-1(Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 )^1/2. We claim that ℙ(max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 ≥C( ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) → 0, ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 ≥2 M p (log n)^2 /n) → 0, which together with (<ref>) yields the desired conclusion of Proposition <ref>. It remains to establish (<ref>) and (<ref>). We will begin with the proof of (<ref>). Proof of (<ref>). From assumption (<ref>) and the observation that log n/n^2≪p ρ_n (log n)^3/n, it holds that for some large constant C > 0, ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 ≥ C (ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) ≤ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [ | Φ(_i, j) - Φ(_i, j)|^2 + (log n)^2 n^-2 + n^-1 (log n )|Φ(_i, j) - Φ(_i, j)| ] ≥ C (ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) + ℙ( max_1 ≤ j ≤ psup_ x, y ∈ (0, 1) | F̂_j^-1 (x) - F̂_j^-1 (y) | / |x - y| + ( n^-1 (log n) |x - y| )^1/2 + n^-1log n ≥ M ) ≤ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [ | Φ(_i, j) - Φ(_i, j)|^2 + n^-1 (log n) |Φ(_i, j) - Φ(_i, j)| ] ≥C(ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) + o(1) : = P_1 + o(1). We next bound term P_1 above. Using the fact that |Φ(x) - Φ(y)| ≤1/√(2 π) | x - y | and the basic inequality ∑_i = 1^n |a_n| ≤√(n) (∑_i = 1^n a_n^2)^1/2, we have that P_1 ≤ℙ( max_1 ≤ j ≤ p( n^-1_j - _ j_2^2 + (log n) n^-3/2_j - _ j_2 ) ≥C(ρ_n^2 log p/n + p ρ_n (log n)^3/n) ). It suffices to consider the bound of max_1 ≤ j ≤ p n^-1_j - _ j_2^2. With the aid of the triangle inequality and the definitions of and , it follows that max_1 ≤ j ≤ p n^-1_j - _ j_2^2 ≤ 3 max_1 ≤ j ≤ p n^-1 ( - ) (I_p - r )_j _2^2 + 3 r^2 max_1 ≤ j ≤ p n^-1 ( _j - _j ) _2^2 + 3 max_1 ≤ j ≤ p n^-1 [(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2 ] _2^2. We will investigate the three terms in the upper bound above separately. Regarding the third term above, under the assumption in (<ref>) it has been shown in (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 [(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2 ] _2^2 ≤ C ρ_n^2 log p/n . As for the second term in the upper bound in (<ref>), noting that the rows of are i.i.d. and follow the Gaussian distribution N( 0, ^-1), an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ( _j - _j ) _2^2 ≤ Cρ_n^2 log p/n . For the first term in the upper bound in (<ref>) above, noting that I_p - r)_j ≤ρ_n + 1 by the sparsity assumption that _j≤ρ_n, we have that max_1 ≤ j ≤ p n^-1 ( - ) (I_p - r )_j _2^2 ≤max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ×max_1 ≤ j ≤ p (I_p - r )_j _2^2. For the second term in the bound above, from the triangle inequality and inequality _j_2 ≤_2 for each matrix , it is easy to see that max_1 ≤ j ≤ p (I_p - r )_j _2 ≤ I_p - r _2 ≤ I_p - r _2 + r - _2. Thus it follows from assumption (<ref>) that for a constant C > 0, with probability 1 - o(1) we have max_1 ≤ j ≤ p (I_p - r )_j _2 ≤ C. 
Regarding the first term on the right-hand side of (<ref>) above, using the definitions of and , and inequality _2 ≤ d _max for each square matrix ∈ℝ^d × d, we can deduce that max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ≤ (ρ_n + 1) n^-1 ( - )^⊤ ( - ) _max ≤ (ρ_n + 1) max_1 ≤ j ≤ p n^-1∑_i = 1^n |_i, j - _i, j|^2 = (ρ_n + 1) max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 . Denote by H_j, n =[F_j^-1 (2M n^-1 log n), F_j^-1 (1 - 2M n^-1log n )] with constant M as given in assumption (<ref>). We can write that max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 = max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 1 (_i, j∈ H_j, n) + max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) := E_1 + E_2. Let us first consider term E_2 above. Observe that E_2 ≤max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) |^2 1 (_i, j∉ H_j, n) + max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n). For the first term in the bound above, notice that |Φ^-1 (F̂_j (_i, j)) |= O(√(log n )) due to the assumption that 1/2n≤ F_j(x) ≤ 1 - 1/2n for each x∈(X_j). Then it follows from the union bound, the Markov inequality, and the definition of H_j, n that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤∑_j = 1^p ℙ( n^-1log n ∑_i = 1^n 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤ n /p (log n)^2∑_j = 1^p ℙ (_i, j∉ H_j, n) = p n / p (log n )^2 · 4 M n^-1log n = 4 M (log n )^-1→ 0. As for the second term in the upper bound in (<ref>) above, an application of the Markov inequality and the fact that F_j(_i, j ) follows the standard uniform distribution gives that ℙ(max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤n/p (log n)^3∑_j = 1^p (| Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ) = 2 n/ (log n)^3∫_-∞^Φ^-1 (2Mlog n/n ) 1/√(2π) u^2 e^-u^2/2 du ≤2n/(log n)^3 |Φ^-1 (2Mlog n/n )| ∫_-∞^Φ^-1 (2Mlog n/n ) 1/√(2π) |u|^3 e^-u^2/2 du ≤ C n/(log n)^3 |Φ^-1 (2Mlog n/n ) | ·| Φ^-1 (2Mlog n/n)|^3 ·Φ(Φ^-1 (2Mlog n/n) ) ≤ C (log n)^-1→ 0, where in the last step above, we have used the facts that |Φ^-1 (M log n/n) | ≤ C √(log n), ∫ u^3 e^-u^2 / 2 du = - (u^2 + 2) e^-u^2/2, and e^-x^2/2/Φ(x) = O(|x|) for x < -2. Combining (<ref>), (<ref>), and (<ref>) yields that with probability 1 - o(1), E_2≤p (log n )^3/n. Next we proceed with studying term E_1. First, note that when |Φ^-1 (y )| > 2, it holds that [ Φ^-1 (y) ]' = 1/Φ'( Φ^-1 (y) )≤ C 1/(y (1 - y)) |Φ^-1 (y)| due to the fact that Φ'(x) /(1 - Φ(x) ) ≥ C x for x > 2 and Φ'(x) / Φ(x) ≥ C |x| for x < -2. When |Φ^-1(y)| ≤ 2, it is easy to see that [ Φ^-1 (y) ]' = 1/Φ'( Φ^-1 (y) )≤ C. Thus, combining the previous two results shows that for y ∈ℝ, [ Φ^-1 (y) ]' ≤C/(y (1 - y)) |Φ^-1 (y)| ≤C/(y (1 - y)) . Let us define an interval δ_j(x) = [F_j(x) - √(M [F_j(x) (1 - F_j(x)) ] log n/n), F_j(x) + √(M [F_j(x) (1 - F_j(x)) ]log n/n)]. Observe that under assumption (<ref>), we have that ℙ ( E_1≥ x) ≤ℙ( max_1 ≤ j ≤ p n^-1 (M log n/n) ∑_i = 1^n (sup_y ∈δ_j(_i, j ) [Φ^-1 (y)]' )^2 F_j (_i, j ) (1 - F_j (_i, j ) ·1 (_i, j∈ H_j, n) ≥ x) + o(1). When _i, j∈ H_j, n, it holds that F_j(_i, j ) ∈ [2 M n^-1log n, 1 - 2 M n^-1log n] and hence sup_y ∈δ(_i, j ) | y/F (_i, j) - 1 | ≤√(M log n/n F_j(_i, j ))≤ 1/√(2). Similarly, we have that sup_y ∈δ(_i, j ) | 1 - y/1 - F (_i, j) - 1 | ≤ 1/√(2). 
The above two bounds combined with (<ref>) yields that for _i, j∈ H_j, n, sup_y ∈δ_j(_i, j ) [Φ^-1 (y)]' ≤sup_y ∈δ_j(_i, j )C / y (1 - y)≤C /F_j(_i, j ) (1 - F_j(_i, j )). In view of the above bound, (<ref>), and the fact that F_j(_i, j ) follows the standard uniform distribution, we can deduce that ℙ ( E_1≥p (log n)^3/n) ≤ℙ(max_1 ≤ j ≤ p n^-1 (M log n/n) ∑_i = 1^n C / F_j(_i, j ) (1 - F_j(_i, j )) 1 (_i, j∈ H_j, n) ≥p (log n)^3/n) + o(1) ≤C M /p (log n)^2 ∑_j = 1^p ( 1 / F_j(_i, j ) (1 - F_j(_i, j )) 1 (_i, j∈ H_j, n) ) = C M / (log n)^2 ∫_2 M n^-1log n^1 - 2 M n^-1log n1/u(1 - u) du ≤C M / (log n)^2 · C log n ≤C M /log n→ 0. A combination of (<ref>), (<ref>), (<ref>), and (<ref>) shows that with probability 1 - o(1), max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ≤C p ρ_n (log n )^3 /n, which together with (<ref>)–(<ref>) entails that with probability 1 - o(1), n^-1max_1 ≤ j ≤ p_j - _ j_2^2 ≤ C ( ρ_n^2 log p/n + p ρ_n (log n )^3 /n) and (log n) n^-3/2max_1 ≤ j ≤ p_j - _ j_2 ≤ C (log n) n^-1( ρ_n log p/n + √( p ρ_n (log n )^3 /n)). Plugging (<ref>) into (<ref>), it follows that P_1 → 0. Therefore, substituting (<ref>) into (<ref>) derives the desired result (<ref>). It remains to establish (<ref>). Proof of (<ref>). Let us define I_n = [2M n^-1log n, 1 - 2M n^-1log n]. It holds that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 ≥2 M p (log n)^2 /n) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∈ I_ n) ≥ M p (log n)^2 /n) + ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∉ I_ n) ≥ M p (log n)^2 /n). For the first term on the right-hand side of (<ref>) above, under assumption (<ref>) we have that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∈ I_ n) ≥ M p (log n)^2 /n) ≤ℙ( M log n /n≥ M p (log n)^2 /n) + o(1) = 0 + o(1) → 0. Regarding the second term on the right-hand side of (<ref>) above, observe that | F_j^-1 (Φ(_i, j)) | ≤ b and |F̂_j^-1 (Φ(_i, j)) | ≤ b by the assumption (X_j) ∈ [-b, b]. In addition, Φ(_i, j) follows the standard uniform distribution and thus ℙ (Φ(_i, j) ∉ I_n) = 4 M n^-1log n. Then we can deduce that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∉ I_1, n) ≥ M p (log n)^2 /n) ≤ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n 1 ( Φ(_i, j) ∉ I_ n) ≥ M p (log n)^2 /4 n b^2 ) ≤4 n b^2 / M p (log n)^2 · p ℙ ( Φ(_i, j∉ I_n) ) = 16 b^2 /log n→ 0. Finally, combining (<ref>)–(<ref>) leads to the desired result (<ref>). This concludes the proof of Proposition <ref>. § PROOFS OF SOME KEY LEMMAS §.§ Proof of Lemma <ref> Let g_j (· | _-j) be the conditional density function of X_j | X_-j = _-j for X = (X_1, …, X_p )^⊤d∼ t_ν ( 0, I_p) and h_j(· | _ -j) the conditional density function of X̂_j | X̂_-j = _-j for X̂ = (X̂_1, …, X̂_p)^⊤d∼ N( 0, I_p). Following the definition in <cit.>, we define K̂L̂_j : = ∑_i = 1^n log( g_j (_i, j | _i, -j) · h_j(_i, j | _i, j ) / h_j (_i, j | _i, - j) g_j (_i, j | _i, -j) ), where = (_i, j ) ∈ℝ^n × p consists of i.i.d. rows sampled from t_ν ( 0, I_p) and = (_i, j)∈ℝ^n × p consists of i.i.d. rows sampled from N( 0, I_p). Note that Theorem 1 in <cit.> states that ≤min_≥ 0{q e^ + ℙ(max_j ∈ℋ_0K̂L̂_j > ) }. We claim that if n p /ν (ν + p)≥ C for some constant C> 0, there exists some positive constant α such that ℙ( K̂L̂_j≥ C/4 ) ≥α. 
Then it holds that for 0 < < C/4, ℙ(max_1 ≤ j ≤ pK̂L̂_j≥) ≥α, and thus we cannot obtain the desired asymptotic FDR control lim sup_(n, p)≤ q via applying Theorem 1 in <cit.>. By contradiction, to allow ℙ(max_1 ≤ j ≤ pK̂L̂_j≥) → 0, we must have that np/ν (ν + p)→ 0 , which is equivalent to ν^2 ≫ n min (n, p). Hence, Lemma <ref> is proved. Now it remains to establish (<ref>). Proof of (<ref>). Note that <cit.> showed that the conditional density g_j(_j | _-j) of the multivariate t-distribution satisfies that g_j(_i, j | _i, -j) ∝( 1 + _i, j ^2/ν + _i, -j_2^2 )^- (ν + p) / 2. It is easy to see that the conditional density h_j (_i, j | _i, -j) of the standard normal distribution satisfies that h_j(_i, j | _i, -j) ∝exp{ - _i, j ^2 / 2 }. Plugging the two expressions above into (<ref>) yields that K̂L̂_j = ∑_i = 1^n [ _i, j ^2/2 - ν + p/2log(1 + _i, j ^2/ν + _i, -j_2^2) - (_i, j^2/2 - ν + p/2log(1 + _i, j^2/ν + _i, -j_2^2) )]. Applying the basic inequality that |log (1 + x) - (x - x^2/2)| ≤ x^3 for each x > 0, we can obtain that K̂L̂_j = R_1, j + R_2, j + R_3, j, where R_1, j = ∑_i = 1^n [ _i, j ^2 (ν + p)/2 (ν + _i, -j_2^2 )( ν + _i, -j_2^2/ν + p - 1 ) - _i, j^2 /2 ( 1 - ν + p/ν + _i, -j_2^2) ], R_2, j = ∑_i = 1^n ν + p/4(_i, j^4/(ν + _i, -j_2^2)^2 - _i, j ^4/(ν + _i, -j_2^2)^2), R_3, j = ∑_i = 1^n ν + p/2(_i, j^6/(ν + _i, -j_2^2)^3 + _i, j ^6/(ν + _i, -j_2^2)^3). We now calculate the mean and variance of K̂L̂_j separately. Observe that _i, jd∼ N(0, 1), (p-1)^-1_i, -j_2^2 d∼ F_p-1, ν, _i, -j√(ν + p/ν + _i, -j_2^2)_i, j, and √(ν + p-1/ν + _i, -j_2^2)_i, jd∼ t_ν + p- 1 as shown in <cit.>. Using the properties of the multivariate t-distribution and F-distribution, some straightforward calculations show that (R_1, j) = n/2[ ν + p/ν + p - 3( ν (ν + p - 3)/(ν - 2)(ν + p) - 1 ) - ( 1 - (ν + 2) (ν + p)/ν (ν + p - 1)) ] = n ( p/ν (ν + p) + O(ν^-2) ), (R_2, j) = 3 n (ν + p)/4 [ 1/(ν + p - 3) (ν + p - 5) - ν + 2/ν (ν + p - 1)(ν + p + 1)] = O (n /ν (ν + p)), and (R_3, j) ≤ C n (ν + p)^-2. Combining (<ref>)–(<ref>) yields that when ν and p are large, (K̂L̂_j) = n p/ν (ν + p) + O(n ν^-2)≥n p/2 ν (ν + p). Next we analyze the variance of K̂L̂_j. Notice that (K̂L̂_j) = ( ( K̂L̂_j - K̂L̂_j )^2 ) ≤ C ∑_i = 1^n {[ _i, j ^2 (ν + p)/2 (ν + _i, -j_2^2 )( ν + _i, -j_2^2/ν + p - 1 ) - _i, j^2 /2 ( 1 - ν + p/ν + _i, -j_2^2) ]^2 } + C ∑_i = 1^n [ (ν + p)^2/16(_i, j^4/(ν + _i, -j_2^2)^2 - _i, j ^4/(ν + _i, -j_2^2)^2)^2 ] ≤Cn p/ν (ν + p), where in the last step above, we have used the facts that ( _i, j ^4 (ν + p)/ (ν + _i, -j_2^2 )^2) ≤ C, [ ( ν + _i, -j_2^2/ν + p - 1 )^2 ] = 2 p /ν (ν + p) + O(ν^-2), [ ( 1 - ν + p /ν + _i, -j_2^2)^2 ] = 2 p /ν (ν + p) + O(ν^-2). In view of the results on the mean and variance of K̂L̂_j shown in (<ref>) and (<ref>) above, we see that if np/ν (ν + p)≥ C for some constant C > 0, (K̂L̂_j ) ≥np/2ν (ν + p)≥ C /2 . Therefore, we can obtain through the one-sided Markov inequality that for a small constant α > 0 (noting that (K̂L̂_j) > 2 α√( (K̂L̂_j)) if α is small), ℙ (K̂L̂_j ≥ C/4) ≥ℙ (K̂L̂_j ≥ (K̂L̂_j )/2 ) ≥ℙ( K̂L̂_j≥ (K̂L̂_j) - α√((K̂L̂_j))) ≥ 1 - (K̂L̂_j )/(K̂L̂_j ) + α^2 (K̂L̂_j ) = α^2/1 + α^2, which establishes (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Recall that G(t) = p_0^-1∑_j ∈ℋ_0ℙ (W_j ≥ t) and G(t) is a decreasing, continuous function. 
The main idea of the proof is to divide the continuous interval (0, G^-1 (c_1 q a_n/p)] into a diverging number of smaller intervals with end points { t_i }_i = 0^l_n such that t_0≥ t_1≥⋯≥ t_l_n and |G(t_i)/ G(t_i+1) - 1 | → 0 uniformly for 0 ≤ i ≤ l_n as l_n→∞. Then the supreme over the continuous interval (0, G^-1 (c_1 q a_n/p)] can be reduced to the supreme over the set of discrete points {t_i}_i= 0^l_n and hence, we can apply the union bound to establish the desired result. Similar arguments have also been used in <cit.>, <cit.>, and <cit.>. We detail only the proof of (<ref>) here since (<ref>) can be shown in a similar fashion. We start with defining a sequence 0 ≤ z_0 < z_1 < ⋯ < z_l_n = 1 and t_i = G^-1 (z_i), where z_0 = c_1 q a_n/p, z_i = c_1 q a_n/p + h_n e^i ^γ/p, and l_n = [log ((p - c_1 q a_n)/h_n)]^1/γ with 0 < γ < 1 and sequence h_n →∞ satisfying that h_n /a_n → 0. As long as m_n /a_n = o(1), we can choose h_n = a_n/(a_n / m_n)^η for some η∈ (0, 1). Then an application of similar technical analysis as in <cit.> shows that as a_n →∞, sup_0 ≤ i ≤ l_n|G(t_i)/G(t_i+1) - 1 | → 0. For t ∈ (0, G( c_1 q a_n/p)], there exists some 0 ≤ i ≤ l_n - 1 such that t ∈ [t_i+1, t_i]. It follows from the monotonicity of ℙ (W_j ≥ t) and 1 ( Ŵ_j ≥ t) that | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t) / p_0 G(t) - 1 | ≤max{| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i) - 1 |, | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i)/p_0 G(t_i+1) - 1 | } . The two terms within the brackets on the right-hand side of the expression above can be bounded similarly and we will provide only the details on how to bound the first term for simplicity. With the aid of the fact that | x y - 1 | ≤ | x -1| |y - 1| + |x - 1| + |y -1| for all x, y ∈ℝ, we can deduce that | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i) - 1 | ≤| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 | ·sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 | + | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 | + sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 | ≤| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 |·(1+o(1)) + sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 |, where the last step above is because of (<ref>) and the o(1) term is uniformly over all i. Combining the above two results and applying (<ref>) again lead to | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t) / p_0 G(t) - 1 | ≤max{| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 |, | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i)/p_0 G(t_i) - 1 | } ×(1 + o(1) ) + o(1). Thus, to prove the desired result, it is sufficient to show that D_n := sup_0 ≤ i ≤ l_n| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i) / p_0 G(t_i) - 1 |=o_p(1). We now proceed with establishing (<ref>). Let us define an event ℬ_3 = {max_1 ≤ j ≤ p | Ŵ_j - W_j | ≤ b_n }. From Condition <ref>, it holds that ℙ (ℬ_3^c) → 0. Note that for any two events A and B, we have that ℙ(A) ≤ℙ(A∩ B) + P(B^c). Repeatedly using such inequality, the union bound, and the property that ℙ (ℬ_3^c) → 0, we can deduce that for each ϵ > 0, ℙ ( D_n ≥ϵ ) ≤∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 {1 (Ŵ_j ≥ t_i) - ℙ ( W_i ≥ t_i ) }/ p_0 G(t_i) | ≥ϵ, ℬ_3 ) + ℙ (ℬ_3^c) ≤∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 {1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) }/ p_0 G(t_i) | ≥ϵ /2 ) + ∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 [ 1 (Ŵ_j ≥ t_i) - 1 ( W_i ≥ t_i ) ] / p_0 G(t_i) | ≥ϵ /2, ℬ_3) + o(1) ≤∑_i = 0^l_n 4 [ {∑_j ∈ℋ_0 [ 1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) ] }^2 ] /ϵ^2 p_0^2 G^2 (t_i) + ∑_i = 0^l_n 2 ∑_j ∈ℋ_0ℙ( t_i - b_n ≤W_j ≤ t_i + b_n ) /ϵ p_0 G(t_i) + o(1), where the last step above is due to the Markov inequality and the fact that |1 (Ŵ_j ≥ t_i) - 1 ( W_i ≥ t_i )|≤1 (t_i-b_n≤W_j ≤ t_i+b_n) on event ℬ_3. We next bound the first two terms on the very right-hand side of (<ref>) above. 
For the first term, under Condition <ref> for the weak dependence between {W_j}, we have that ∑_i = 0^l_n 4 [ {∑_j ∈ℋ_0 [ 1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) ] }^2 ] /ϵ^2 p_0^2 G^2 (t_i) ≤ C ∑_i = 0^l_n m_n p_0 G(t_i) + o( (log p)^-1/γ [p_0 G(t_i)]^2 ) /ϵ^2 p_0^2 G^2 (t_i) = C ϵ^-2 m_n ∑_i = 0^l_n1/p_0 G(t_i) + C ϵ^-2 o (l_n (log p)^- 1/γ). Moreover, it holds that ∑_i = 0^l_n 1 / p_0 G (t_i) = p_0^-1∑_i = 0^l_n 1 / z_i = p/p_0∑_i = 0^l_n1/ c_1 q a_n + h_n e^i ^γ ≤ C h_n^-1, where the last inequality above is related to the proof of Theorem 3 in <cit.>. In light of the definition of h_n and the assumption of m_n / a_n → 0, we have that m_n / h_n = (m_n / a_n)^1 - η→ 0. Therefore, combining (<ref>)–(<ref>) and the fact that l_n = [log ((p - c_1 q a_n)/h_n)]^1/γ≤ (log p)^1/γ shows that the first term for the bound in (<ref>) tends to zero as n →∞. Moreover, since l_n ≤ (log p)^1/γ, the second term on the very right-hand side of (<ref>) above is bounded by 2/ϵ (log p)^1/γsup_t ∈ (0, G^-1 (c_1 q a_n/p) ] G(t - b_n ) - G(t + b_n) / G(t) , which converges to zero as n →∞ under Condition <ref>. Finally, we can obtain that for each ϵ > 0, ℙ ( D_n > ϵ ) → 0, which establishes the desired result in (<ref>). This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> We will show that with asymptotic probability one, it holds that for some 0 < c_1 < 1, 1 + ∑_j = 1^p 1( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤ q a_n ≤ q ∑_j = 1^p 1( Ŵ_j ≥ G^-1 ( c_1 q a_n/ p ) ). Then from the definition of T, we can obtain the desired result of the lemma. We aim to establish (<ref>). The main idea of the proof is to prove that the population counterpart of (<ref>) holds. Then with an application of Lemma <ref> to both left- and right-hand sides of (<ref>), we can connect it to the population counterpart and thus prove that (<ref>) holds with asymptotic probability one. First, it follows from the union bound and the fact that ℙ(A) ≤ℙ(A∩ B) + ℙ(B^c) for any two events A and B that under Conditions <ref>–<ref>, ℙ ( Ŵ_j < 3 δ_n   j ∈𝒜_n) ≤ℙ ( Ŵ_j < 3 δ_n   j ∈𝒜_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤ℙ ( W_j < 3 δ_n + b_n    j ∈𝒜_n) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤∑_j ∈𝒜_n ℙ ( W_j - w_j < 3 δ_n + b_n - w_j ) + o(1) ≤∑_j ∈𝒜_n ℙ ( | W_j - w_j | > δ_n )+ o(1) ≤∑_j = 1 ^p ℙ ( | W_j - w_j | > δ_n )+ o(1) → 0 . Then we have ℙ (∩_j ∈𝒜_n{Ŵ_j ≥ 3 δ_n}) → 1 and thus with asymptotic probability one, ∑_j = 1^p 1 ( Ŵ_j ≥ 3 δ_ n ) ≥ a_n , where a_n = |𝒜_n|. In addition, since w_j > - δ_n for 1 ≤ j ≤ p by assumption, we can deduce that ∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n) ≤∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n ) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤∑_j = 1^p ℙ ( W_j < - 3 δ_n + b_n ) + o(1) ≤∑_j = 1^p ℙ (W_j - w_j ≤ - 3 δ_n + b_n - w_j ) + o(1) ≤∑_j = 1^p ℙ ( | W_j - w_j | > δ_n ) + o(1) → 0 , which yields ∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n)→ 0. Using similar arguments as for (<ref>), it holds that ∑_j = 1^p ℙ (W_j ≤ - 3 δ_n ) → 0. Then we can obtain that G( 3 δ_n ) = p_0^-1∑_j ∈ℋ_0ℙ ( W_j ≤ -3 δ_n ) ≤ p_0^-1∑_j = 1^p ℙ ( W_j ≤ -3 δ_n ) = o(p_0^-1) . Since a_n →∞, p_0 / p → 1, and G(t) is a nonincreasing, continuous function, it follows that G(3 δ_n) ≤ c_1 q a_n / p and thus G^-1 ( c_1 q a_n/ p ) ≤ 3 δ_n for some constant 0 < c_1 < 1 when n is sufficiently large. This together with (<ref>) entails that with asymptotic probability one, ∑_j = 1^p 1 ( Ŵ_j ≥ G^-1 ( c_1 q a_n/ p ) ) ≥ a_n. This completes the proof of the second inequality in (<ref>). 
It remains to establish the first inequality in (<ref>). From the definition of G(t) and Lemma <ref>, it holds that c_1 q a_n / p = p_0^-1∑_j ∈ℋ_0ℙ ( W_j ≤ - G^-1 ( c_1 q a_n/ p ) ) = (1 + o_p(1)) · p_0^-1∑_j ∈ℋ_01( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ). Then for some constant c_2 satisfying 0 < c_1 < c_2 < 1, we can obtain that with asymptotic probability one, 1 + ∑_j ∈ℋ_01( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤c_1 q a_n p_0/p (1 + o_p(1)) ≤ c_2 q a_n , where we have used the assumption that p_0/p → 1. Further, under (<ref>) in Condition <ref>, an application of the union bound yields that ℙ( ∑_j ∈ℋ_11( Ŵ_j < - G^-1 (c_1 q a_n/p) ) ≥ (1 - c_2) q a_n ) ≤ℙ( ∑_j ∈ℋ_11( W_j < - G^-1 (c_1 q a_n/p ) + b_n ) ≥ (1 - c_2) q a_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n ) + o(1) ≤1/ (1 - c_2 ) q a_n∑_j ∈ℋ_1ℙ( W_j < - G^-1 (c_1 q a_n/p) + b_n ) + o(1) → 0, which together with (<ref>) implies that 1 + ∑_j = 1^p 1( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤ q a_n with asymptotic probability one. This proves the first inequality in (<ref>), which completes the proof of Lemma <ref>. We next establish a lower bound for T. We have now proved that with asymptotic probability one, T∈ (0, G^-1 ( c_1 q a_n/ p )); denote this event by ℬ_1. By definition, we have T∈𝒮 with asymptotic probability one, where 𝒮 := {t∈ (0, G^-1 ( c_1 q a_n/ p )) : ∑_j = 1^p 1( Ŵ_j ≤ -t )/1⋁∑_j = 1^p 1( Ŵ_j ≥ t )≤ q}. Note that for any t∈𝒮, we have ∑_j ∈ℋ_0 1 ( Ŵ_j ≤ -t ) + ∑_j∈ℋ_1 1 ( Ŵ_j ≤ -t )≤ q+ q ∑_j ∈ℋ_0 1 ( Ŵ_j ≥ t ) + q∑_j∈ℋ_1 1 ( Ŵ_j ≥ t ). Denote by ℬ_2ϵ the event on which the inequalities in Lemma 2 hold. Then on ℬ_2ϵ, (1-ϵ)∑_j ∈ℋ_0ℙ( W_j ≤ -t ) ≤ q+qs + (1+ϵ)q∑_j ∈ℋ_0ℙ( W_j ≥ t ). That is, ∑_j ∈ℋ_0ℙ( W_j ≤ -t ) ≤q(1+s)/1-q-ϵ-qϵ, which yields t≥ G^-1(q(1+s)/(1-q-ϵ-qϵ)p). Consequently, on event ℬ_2ϵ∩ℬ_1, it holds that G^-1(q(1+s)/(1-q-ϵ-qϵ)p)≤ T≤ G^-1 ( c_1 q a_n/ p ). §.§ Proof of Lemma <ref> The proof of this lemma relies on the definitions of T_v and T_v, with the intuition that T_v resembles the vth order statistic of - W_j, while T_v resembles the vth order statistic of Ŵ_j. Intuitively, this means that if the distance between W_j and Ŵ_j is bounded by b_n, the distance between the corresponding order statistics should also be bounded by b_n. We will formalize this argument next. Let us define an event 𝒞 := {max_1 ≤ j ≤ p | Ŵ_j - W_j| ≤ b_n} . Condition <ref> assumes that ℙ (𝒞) → 1. Define Ŝ_v = { 1 ≤ j ≤ p : - Ŵ_j ≥T_v } and S_v = { 1 ≤ j ≤ p: - W_j ≥T_v } . Observe that | Ŝ_v | = v and | S_v | = v by the definitions of T_v and T_v. If j_0 ∈Ŝ_v, on event 𝒞 we have that - W_j_0 = - Ŵ_j_0 + ( Ŵ_j_0 - W_j_0 ) ≥T_v - b_n, which entails that ∑_j = 1^p 1 ( - W_j ≥T_v - b_n ) ≥ v. Moreover, since T_v satisfies ∑_j = 1^p 1( - W_j ≥T_v ) = v, it follows that T_v ≥T_v - b_n by the monotonicity of the indicator function. Similarly, we can also show that T_v ≥T_v - b_n on event 𝒞. Thus, (<ref>) is derived. This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Note that k is the number of failures before v successes in a binomial process with success probability 1/2. The major intuition behind the desired result (<ref>) is that, by the law of large numbers, the numbers of failures and successes should become asymptotically comparable as the number of trials tends to infinity. Let D_k + v - 1 be a binomial random variable with distribution B ( k + v - 1, 1/2 ) and L_v the negative binomial random variable with distribution NB(v, 1/2 ). Observe that (<ref>) is equivalent to ℙ ( L_v ≥ k ) ≤ q.
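The equivalence between this negative binomial tail probability and a binomial tail probability, which is used in the next step, can be verified numerically; below is a minimal sketch assuming scipy, with arbitrary illustrative values of v and k.

```python
from scipy.stats import binom, nbinom

# L_v ~ NB(v, 1/2) counts the failures before the v-th success;
# the identity states P(L_v >= k) = P(Binomial(k + v - 1, 1/2) <= v - 1).
for v, k in [(5, 3), (20, 25), (100, 130)]:
    lhs = nbinom.sf(k - 1, v, 0.5)           # P(L_v >= k)
    rhs = binom.cdf(v - 1, k + v - 1, 0.5)   # P(D_{k+v-1} <= v - 1)
    print(v, k, round(lhs, 10), round(rhs, 10))
```

The two printed columns agree to numerical precision, since the identity is exact rather than asymptotic.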
According to the relationship between the negative binomial distribution and binomial distribution, we have that ℙ ( L_v ≥ k ) = 1 - ℙ ( L_v ≤ k - 1 ) = 1 - ℙ ( D_k + v - 1 ≥ v ) = ℙ ( D_k + v - 1 ≤ v - 1 ). By the central limit theorem, it holds that when k + v →∞, ℙ ( D_k + v - 1 ≤ v - 1 ) = Φ( v - 1 - k /√( k + v - 1 )) + o(1). Therefore, (<ref>) implies that v - 1 - k /√( k + v - 1 )≤Φ ^-1 (q - o(1) ). In addition, since v is the largest integer such that (<ref>) holds, we have that ℙ ( L_v +1≥ k) > q . Using similar arguments as for (<ref>), it follows that as k + v →∞, ℙ ( L_v + 1 ≥ k ) = ℙ ( D_k + v≤ v ) = Φ( v - k /√( k + v )) + o(1) and hence v - k /√(k + v )≥Φ^-1 ( q - o(1) ), which along with (<ref>) leads to (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The proof of this lemma consists of two steps. We will first establish the tight bounds below for T_v. In the second step, noting that T_v + M_v + 1 < T_v - 2 b_n ≤T_v + M_v by the definition of M_v in (<ref>), we will show that M_v is bounded as long as b_n is sufficiently small. For 0< < 1/8, under Conditions <ref>, <ref>, and <ref> we have that ℙ( G^-1( v(1+)/p_0) < T_v < G^-1(v(1 - )/p_0) ) → 1. Under Condition <ref>, we have that 2 b_n < G^-1( v(1 + )/p_0) - G^-1( v (1 + 3 ) (1 - ) /p_0). Using similar arguments as in the proof of Lemma <ref> below, we can show that under Conditions <ref>, <ref>, and <ref>, ℙ( G^-1( v (1 + 3 )(1 + )/p_0) < T_v (1 + 3 ) < G^-1((v (1 + 3 ))(1 - )/p_0) ) → 1. Then it follows that T_v (1 + 3 ) < G^-1 ( v (1 + 3 ) (1 - ) /p_0) < G^-1 ( v (1 + )/p_0 ) < T_v . Additionally, applying Lemmas <ref> and <ref> together with the definition of T_v gives that with asymptotic probability one, T_v + M_v ≥T_v - 2 b_n ≥ G^-1 ( v(1 + )/p_0 ) - [ G^-1 ( v(1 + )/p_0 ) - G^-1 ( v (1 + 3 ) (1 - )/p_0 ) ] = G^-1 ( v (1 + 3 ) (1 - )/p_0 ) > T_v (1 + 3 ). Therefore, we can obtain that ℙ (M_v < 3 v ) → 1 since T_v is decreasing with respect to v. This will conclude the proof of Lemma <ref>. We will present the formal proofs of Lemmas <ref> and <ref> below. Proof of Lemma <ref>. The main idea of the proof is to establish the convergence of the empirical distribution of {W_j} that ∑_j ∈ℋ_01(W_j ≥ t) is close to ∑_j ∈ℋ_0ℙ(W_j ≥ t). Using similar arguments as in the proof of Lemma <ref> in Section <ref>, we can obtain that when m_n/k → 0 (which combined with Lemma <ref> implies that m_n / v → 0), sup_t ∈ (G^-1(3k/2p), G^-1(k/2p) ) | ∑_j ∈ℋ_01(W_j ≤ - t) /∑_j ∈ℋ_0ℙ(W_j ≤ - t) - 1 | = o_p(1). Since ∑_j ∈ℋ_0ℙ(W_j ≤ - G^-1(v (1 + ) /p_0) = v (1 + ), we see from (<ref>) that ∑_j=1^p 1(-W_j ≥ G^-1(v (1 + ) /p_0)) ≥∑_j ∈ℋ_01(W_j ≤ - G^-1(v (1 + ) /p_0)) = v (1 + ) (1 + o_p(1)) > v holds with asymptotic probability one. Hence, from the definition of T_v, we have that ℙ(T_v > G^-1 ( v (1 + )/p_0 ) ) → 1. We next prove the upper bound for T_v. Note that ∑_j =1^p 1 (W_j ≤ - T_v) = v. We will aim to show that with asymptotic probability one, ∑_j ∈ℋ_11 ( W_j ≤ - T_v ) < v / 2. Then with asymptotic probability one, it holds that ∑_j ∈ℋ_01 (W_j ≤ - T_v) ≥ v (1 - /2). On the other hand, applying (<ref>) and similar argument as for (<ref>), we can obtain that with asymptotic probability one, ∑_j ∈ℋ_01(W_j ≤ - G^-1(v (1 - ϵ_n) /p_0) < v (1 - /2) . Combining the above two results shows that with asymptotic probability one, T_v≤ G^-1(v (1 - ) /p_0), which completes the proof for the upper bound. It remains to establish (<ref>). Since p_0/ p → 1 and v/k → 1 (cf. 
Lemma <ref>), we have that G^-1 (3 k /2 p) < G^-1 ( v (1 + )/p_0 ) when n and p are sufficiently large and 0 < < 1/8. Then from (<ref>), it holds that G^-1 (3 k /2 p) ≤T_v and hence with asymptotic probability one, ∑_j ∈ℋ_11 (W_j ≤ - T_v) ≤∑_j ∈ℋ_11 (W_j < - G^-1 (3 k /2 p)) . Moreover, an application of the Markov inequality, Lemma <ref>, and (<ref>) in Condition <ref> yields that as n →∞, ℙ(∑_j ∈ℋ_11 (W_j < - G^-1 (3 k /2 p)) > v /2 ) ≤2/ v ∑_j ∈ℋ_1ℙ(W_j < - G^-1 (3 k /2 p) ) → 0. Therefore, (<ref>) is derived in view of (<ref>). This completes the proof of Lemma <ref>. Proof of Lemma <ref>. Let us observe that v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0 = v/p_0 ( - 3 ^2). By the assumptions that p_0/p→ 1 and m_n/k→ 0, and applying Lemma <ref> and the observation above, it follows that when k and p are sufficiently large, v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0≥ k / 2 p . Note that assumption (<ref>) in Condition <ref> entails that sup_t ∈ ( G^-1 (3k/2p), G^-1(k/2p)) [ G(t - b_n ) - G(t + b_n) ] = o( k /p ). Combining the above two results and Lemma <ref>, we can obtain that v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0≫sup_t ∈ ( G^-1 (3k/2p), G^-1(k/2p)) [ G(t - b_n ) - G(t + b_n) ]. Notice that G^-1(v (1 + 3 ) (1 - )/p_0 ) ∈ ( G^-1 (3k/2p), G^-1(k/2p)) and G^-1 (v(1 + )/p_0 ) ∈ ( G^-1 (3k/2p), G^-1(k/2p)) when k and p are sufficiently large. Therefore, using proof by contradiction and the monotonicity of function G(·), we can establish the desired result of Lemma <ref>. This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Recall that the perfect and approximate knockoff statistics based on the marginal correlation are defined as W_j = (√(n)_2)^-1 ( | _j^⊤ | - |_j^⊤| ) and Ŵ_j = (√(n)_2)^-1 ( | _j^⊤ | - |_j^⊤| ), respectively. By the triangle inequality, it is easy to see that max_1 ≤ j ≤ p | Ŵ_ j - W_ j | ≤max_1 ≤ j ≤ p (√(n)_2)^-1 | (_j - _j)^⊤ |. Then an application of the Cauchy–Schwarz inequality gives that max_1 ≤ j ≤ p | Ŵ_ j - W_ j | ≤ (√(n))^-1max_1 ≤ j ≤ p _j - _j _2 . Thus, the conclusion of Lemma <ref> can be derived under Condition <ref>. This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> From the definitions of W_j and w_j and the triangle inequality, it holds that ℙ (| W_j - w_j | ≥δ_n ) ≤ℙ( ( n^-1^2_2 )^-1/2| n^-1 ( | _j^⊤ | - | _j^⊤ | ) - ( | (X_j Y)| - |(X_j Y)| ) | ≥δ_n / 2 ) + ℙ( | ( n^-1^2_2 )^-1/2 - ( Y^2)^-1/2| ·| | (X_j Y)| - |(X_j Y)| | ≥δ_n / 2 ) := P_1 + P_2. We will aim to show that for δ_n → 0, P_1 ≤ 4 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 } + exp{ - n ( Y^2)^2 /8 Y^4 } and P_2 ≤ 2 exp{ - n δ_n^2 ( Y^2 )^2 / 64 | w_j |^2 Y _ψ_2^4 } + exp{ - n ( Y^2 )^2 /8 Y^4}. Then setting δ_n = √(log p/n)max_1 ≤ j ≤ p{ 16 √(2) X_j _ψ_2 Y _ψ_2/ ( Y^2)^1/2 8√(2) |w_j| Y _ψ_2^2 / Y^2 }, a combination of the above results leads to the desired conclusion of this lemma. We proceed with proving (<ref>). Since _2^2 = ∑_i = 1^n y_i^2 is the sum of i.i.d. random variables, an application of Bernstein’s inequality yields that ℙ ( n^-1_2^2 ≤[Y^2] /2 ) ≤exp{ - n ( Y^2)^2 /8 Y^4 }. It follows from the triangle inequality and (<ref>) that P_1 ≤ℙ( | n^-1 ( | _j^⊤ | - | _j^⊤ | ) - ( | (X_j Y)| - |(X_j Y)| ) | ≥δ_n ( Y^2)^1/2/2 √(2)) + ℙ ( n^1/2(_2)^-1≥√(2) ([Y^2])^-1/2 ) ≤ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) + ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) + exp{ - n ( Y^2)^2 /8 Y^4 }. We next bound the first two terms on the right-hand side of the expression above. 
Under Condition <ref>, we see that _i, j y_i and _i, j y_i are both sub-exponential random variables, with sub-exponential norms X_j _ψ_2 Y _ψ_2 and X_j _ψ_2 Y _ψ_2, respectively. Then we can obtain through applying Bernstein's inequality for sub-exponential random variables (see, e.g., Corollary 2.8.3 in <cit.>) that when δ_n = o(1), ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) ≤ 2 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 } and ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) ≤ 2 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 }. Thus, combining the above three inequalities establishes (<ref>). As for term P_2, noting that w_j = ( Y^2)^-1/2 ( | (X_j Y)| - |(X_j Y)| ) and | ( n^-1^2_2 )^-1/2 - ( Y^2)^-1/2| = | n^-1_2^2 - Y^2 | / n^-1/2_2 ( Y^2)^1/2 (( Y^2)^1/2 + n^-1/2_2 ) , we can deduce that P_2 = ℙ( |w_j| | n^-1_2^2 - Y^2 | / n^-1/2_2 (( Y^2)^1/2 + n^-1/2_2 ) ≥δ_n / 2 ) ≤ℙ( |w_j| | n^-1_2^2 - Y^2 |/ n^-1/2_2 ( Y^2)^1/2≥δ_n / 2 ) = ℙ( | n^-1_2^2 - Y^2 | ≥δ_n Y^2 /2 √(2) |w_j| ) + ℙ ( n^-1_2^2 ≤ Y^2 /2 ) . The very last term above can be bounded by applying (<ref>). Again we can see that under Condition <ref>, y_i^2 is a sub-exponential random variable with sub-exponential norm Y _ψ_2^2. With the aid of Bernstein's inequality for sub-exponential random variables (Corollary 2.8.3 in <cit.>), we can obtain that for δ_n = o(1), ℙ( 1/n| ∑_i = 1^n [ y_i^2 - ( Y^2 )] | ≥δ_n Y^2 /2 √(2) |w_j| ) ≤ 2 exp{ - n δ_n^2 ( Y^2 )^2 / 64 | w_j |^2 Y _ψ_2^4 }. Therefore, the bound for term P_2 in (<ref>) can be shown. This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The main idea of the proof is to apply the law of total variance and decompose the total into two terms by conditioning on (_ℋ_1, ), where _ℋ_1= (_j)_j ∈ℋ_1 and = (ε_1, …, ε_n)^⊤. Specifically, it holds that ( ∑_j ∈ℋ_01 (W_j ≥ t) ) = {[ ( ∑_j ∈ℋ_01 (W_j ≥ t) - ∑_j ∈ℋ_0ℙ( W_j ≥ t | _ℋ_1, ) )^2 | _ℋ_1, ] } + {(∑_j ∈ℋ_0ℙ( W_j ≥ t | _ℋ_1, ) - ∑_j ∈ℋ_0ℙ( W_j ≥ t) )^2 } := V_1 + V_2. We will bound terms V_1 and V_2 above separately. Let us begin with the first term V_1. We can expand the square and obtain that V_1 = ∑_j ∈ℋ_0∑_ℓ∈ℋ_0{[ ( 1 (W_j ≥ t) - ℙ( W_j ≥ t | _ℋ_1, ) ) ×( 1 (W_ℓ≥ t) - ℙ( W_ℓ≥ t | _ℋ_1, ) ) | _ℋ_1, ] }. Observe that conditional on (_ℋ_1, ), it follows from model (<ref>) that is deterministic. In addition, W_j depends only on _j and _j besides . Thus, we need only to consider the conditional distribution of (_j, _j, _k, _k) | (_ℋ_1, ). We will aim to show that each W_j depends on at most m_n random variables in {W_k: k∈ℋ_0}. Indeed, it suffices to show that conditional on (_ℋ_1, ), the number of (_k, _k)'s that are dependent on (_j, _j) is at most m_n. Since the rows of (, ) are i.i.d. and are independent of , we need only to consider the distribution of a single row; that is, (X_j, X_j, X_k, X_k) | (X_ℋ_1, ε) d= (X_j, X_j, X_k, X_k) | X_ℋ_1. In view of the multinormal distribution in (<ref>), it follows that the conditional distribution (X_j, X_j, X_k, X_k) | X_ℋ_1 is still normal. We can obtain from the conditional distribution that {( [ X_j; X_j; ], [ X_k; X_k; ]) | X_ℋ_1} = [ _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k; _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k; ]. In particular, (X_j, X_j) and (X_k, X_k) are independent conditional on X_ℋ_1 if and only if _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k=0. 
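The Schur-complement form of the conditional covariance, and the resulting conditional-independence criterion, can be illustrated numerically. The minimal sketch below (assuming numpy and a small toy covariance matrix that is not from the paper) compares the closed-form conditional covariance with the empirical covariance of residuals after projecting out the conditioning block.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)      # toy positive-definite covariance matrix
j, k = 0, 1                          # two coordinates playing the role of nulls
H1 = [3, 4, 5]                       # conditioning block, playing the role of X_{H_1}

# Closed form: Sigma_{j,k} - Sigma_{j,H1} Sigma_{H1,H1}^{-1} Sigma_{H1,k}
schur = Sigma[j, k] - Sigma[j, H1] @ np.linalg.solve(Sigma[np.ix_(H1, H1)], Sigma[H1, k])

# Monte Carlo check: conditional covariance equals the covariance of projection residuals
n = 200_000
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
coef_j, *_ = np.linalg.lstsq(X[:, H1], X[:, j], rcond=None)
coef_k, *_ = np.linalg.lstsq(X[:, H1], X[:, k], rcond=None)
res_j = X[:, j] - X[:, H1] @ coef_j
res_k = X[:, k] - X[:, H1] @ coef_k
print(schur, np.mean(res_j * res_k))  # the two values should agree up to Monte Carlo error
```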
Thus, to count the number of dependent pairs of (X_j, X_j) and (X_k, X_k) for j, k ∈ℋ_0, we need only to count the number of nonzero (_j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k)'s. Without loss of generality, let us assume that X = (X_ℋ_1, X_ℋ_0) and = [ _ℋ_1, ℋ_1 _ℋ_1, ℋ_0; _ℋ_0, ℋ_1 _ℋ_0, ℋ_0; ]. Using the formula for block matrix inverse, it holds that ^-1 = [ (^-1)_11 (^-1)_12; (^-1)_21 _ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0; ], where (^-1)_11 = _ℋ_1, ℋ_1^-1 + _ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 (_ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0)^-1_ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1 , (^-1)_12 =- _ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 (_ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0)^-1 , and (^-1)_21 = (^-1)_12^⊤. In addition, Condition <ref> assumes that max_1 ≤ j ≤ p (^-1)_j_0 ≤ m_n, which indicates that max_j ∈ℋ_0 ( _ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 )_j _0≤ m_n since it is a submatrix of ^-1. Hence, we can obtain that for a given j ∈ℋ_0, ∑_k ∈ℋ_01( _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k = 0 ) ≤ m_n. Consequently, we see that conditional on (_ℋ_1, ), the number of k ∈ℋ_0 such that (_k, _k) is dependent on (_j, _j) is at most m_n. For j ∈ℋ_0, let us define N(j) := { k∈ℋ_0: W_k W_j | (_ℋ_1 , ) }. Then it holds that | N(j) | ≤ m_n. From (<ref>) and the fact that the indicator function takes values between 0 and 1, we can deduce that V_1 = ∑_j ∈ℋ_0∑_ℓ∈ N(j){[ 1 (W_j ≥ t) ·1 (W_ℓ≥ t) | _ℋ_1, ] } - ∑_j ∈ℋ_0∑_ℓ∈ N(j){[ ℙ( W_j ≥ t | _ℋ_1, ) ℙ( W_ℓ≥ t | _ℋ_1, ) ] } ≤∑_j ∈ℋ_0∑_ℓ∈ N(j){[ 1 (W_j ≥ t) ·1 (W_ℓ≥ t) | _ℋ_1, ] } ≤ m_n ∑_j ∈ℋ_0{[ 1 (W_j ≥ t) | _ℋ_1, ] } = m_n ∑_j ∈ℋ_0ℙ (W_j ≥ t ) = m_n p_0 G(t). We next proceed with showing the bound for term V_2. We can expand V_2 as V_2 = ∑_j ∈ℋ_0∑_ℓ∈ℋ_0{( ℙ ( W_j ≥ t | _ℋ_1, ) - ℙ ( W_j ≥ t ) ) ×( ℙ ( W_ℓ≥ t | _ℋ_1, ) - ℙ ( W_ℓ≥ t ) ) }. The key idea of the proof is to examine the conditional distribution ℙ (W_j ≥ t | _ℋ_1, ) and show that given j ∈ℋ_0, the number of dependent ℙ ( W_ℓ≥ t | _ℋ_1, ) is at most m_n. Since (X, X ) is multinormal, it holds that (X_j, X_j) | ( X_ℋ_1, )  d∼ N ( [ _j, ℋ_1_ℋ_1, ℋ_1 ^-1 X_ℋ_1; _j, ℋ_1_ℋ_1, ℋ_1 ^-1 X_ℋ_1; ], _cond), where _cond = [ _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j _j, j - r - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j; _j, j - r - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j; ]. Since the rows of the augmented data matrix (, ) are i.i.d. and is deterministic given ( _ℋ_1, ), we can obtain that ( _j^⊤/√(n)_2, _j^⊤/√(n)_2) | ( _ℋ_1, ) d∼ N ((√(n)_2)^-1[ _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1^⊤; _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1 ^⊤; ], n^-1_cond). Note that when _ℋ_1, j = 0, the conditional distribution above does not depend on (_ℋ_1, ) and hence any term involving such j ∈ℋ_0 in the expansion of V_2 will disappear. Denote by N_dep = {j ∈ℋ_0: _ℋ_1, j≠ 0}. It follows from Condition <ref> that | N_dep | ≤ m_n. Then we have that V_2 = ∑_j ∈ℋ_0∑_ℓ∈ N_dep{( ℙ ( W_j ≥ t | _ℋ_1, ) - ℙ ( W_j ≥ t ) ) ×( ℙ ( W_ℓ≥ t | _ℋ_1, ) - ℙ ( W_ℓ≥ t ) ) } ≤∑_j ∈ℋ_0∑_ℓ∈ N_dep{ℙ ( W_j ≥ t | _ℋ_1, ) ℙ ( W_ℓ≥ t | _ℋ_1, ) } ≤∑_j ∈ℋ_0∑_ℓ∈ N_dep{ℙ ( W_j ≥ t | _ℋ_1, ) }≤ m_n p_0 G(t). Therefore, substituting (<ref>) and (<ref>) into (<ref>) yields (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Proof of (<ref>). In the proof of Lemma <ref> in Section <ref> (cf. (<ref>)), we have shown that ( _j^⊤/_2, _j^⊤/_2) | ( _ℋ_1, )  d∼ N ( [ μ_j; μ_j; ], σ_j^2 [ 1 ρ_j; ρ_j 1; ]), where μ_j = _2^-1_j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1^⊤, σ_j^2 = _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j, ρ_j = 1 - r / σ_j^2, and r is as given in (<ref>). 
Recall the definition N_dep = {j ∈ℋ_0: _ℋ_1, j≠ 0} in the proof of Lemma <ref>. It holds that |N_dep| ≤ m_n in view of Condition <ref>. Furthermore, note that G(t) ≥ c_1 q a_n / p for t ∈ (0, G^-1 ( c_1 q a_n / p ) ]. Let us define R_n := sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 ∩ N_dep^cℙ (t - Δ_n ≤W_j < t + Δ_n) /∑_j ∈ℋ_0 ∩ N_dep^cℙ (W_j ≥ t ) . Then we can write sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - Δ_n) - G(t + Δ_n) / G(t) = sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 ∩ N_depℙ (t - Δ_n ≤W_j < t + Δ_n) / p_0 G(t) + R_n ≤m_n p / c_1 q a_n p_0 + R_n. From the assumptions that (log p)^1/γ m_n / a_n → 0 and p_0 / p → 1, we have that (log p )^1/γm_n p / c_1 q a_n p_0 → 0. It remains to establish (log p)^1/γ R_n → 0. A key observation is that when j ∈ℋ_0 ∩ N_dep^c, it follows that the conditional distribution ( _j^⊤/_2, _j^⊤/_2) | ( _ℋ_1, )  d∼ N ( [ 0; 0; ], [ _j, j^2 _j, j^2 - r; _j, j^2 - r _j, j^2; ]), which does not depend on ( _ℋ_1, ). Then we see that the distribution of W_j does not depend on ( _ℋ_1, ) and satisfies that ℙ ( √(n)W_j ≥ t ) = ℙ ( |Z_1| - |Z_2| ≥ t ), where (Z_1, Z_2)^⊤ is a two-dimensional multinormal random variable with mean (0, 0)^⊤ and covariance matrix [ _j, j^2 _j, j^2 - r; _j, j^2 - r _j, j^2; ] . For j ∈ℋ_0 ∩ N_dep^c and t > 0, the density function of √(n)W_j is given by f_√(n)W_j(t) = √(2)/√(π) c_2, j[ 1 - Φ( t /c_1, j) ] exp{- t^2 /2 c_2, j^2 } + √(2)/√(π) c_1, j[ 1 - Φ( t /c_2, j) ] exp{- t ^2 /2 c_1, j^2 } , where c_1, j = √(4 _j,j^2 - 2 r ) and c_2, j = √(2r). Based on the density function of √(n)W_t above and the basic inequality that 1 - Φ(x) ≤ e^-x^2/2 for x ≥ 0, it is easy to see that ℙ ( W_j ≥ t ) = ℙ (√(n)W_j ≥√(n) t ) ≤∫_√(n) t^∞√(2)/√(π) c_2, jexp{- x^2 /2 c_2, j^2 } dx +∫_√(n) t^∞√(2)/√(π) c_1, jΦ( -x /c_2, j) dx ≤(2 + 2 c_2, j/ c_1, j) [1 - Φ(√(n) t/c_2, j) ]. Then we can obtain that G(t ) ≤max_j ∈ℋ_0(2 + 2 c_2, j/ c_1, j) [1 - Φ(√(n) t/ c_2, j) ] . Setting t = G^-1 ( c_1 q a_n/ p ) in the inequality above yields that G^-1 ( c_1 q a_n/ p ) = O( √(log p/n) ) when C_1 < r < _j, j^2 < C_2 with some absolute constants C_1> 0 and C_2> 0 for each j ∈ℋ_0. We will bound the ratio in R_n by considering two ranges of t∈ (0, 4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j) and t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)] separately. When t falls into the first range, in view of (<ref>) the denominator G(t) in the ratio in R_n is of a constant order, while the numerator is uniformly bounded from above by O(√(n)Δ_n) over all t in this range because the density f_√(n)W_j(t) is bounded from above by a constant. We now consider the ratio in R_n in the second range of t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)]. We will bound the numerator and denominator in (<ref>) separately in this range. It follows from (<ref>) and the mean value theorem that there exists some ξ∈ (√(n) t - √(n)Δ_n, √(n) t + √(n)Δ_n) such that ℙ (√(n) t - √(n)Δ_n ≤√(n)W_j ≤√(n) t + √(n)Δ_n) = 2 √(n)Δ_n {√(2)/√(π) c_2, j[ 1 - Φ( ξ/c_1, j) ] exp{ - ξ^2 / 2 c_2, j^2 } + √(2)/√(π) c_1, jexp{- ξ^2 /2 c_1, j^2}[1 - Φ( ξ/c_2, j) ] }. Moreover, since √(n) t ≤√(n) G^-1 (c_1 q a_n/p) = O( √(log p) ) and Δ_n √(n log p)→ 0, we can obtain through some direct calculations that | 1 - Φ( ξ/c_1, j) / 1 - Φ( √(n) t /c_1, j) - 1 | ≤ C √(n) t ·√(n)Δ_n = O ( Δ_n √(n log p)). Similarly, it holds that | exp{- ξ^2 /2 c_1, j^2}/exp{- (√(n) t)^2 /2 c_1, j^2} - 1 | ≤ C √(n) t ·√(n)Δ_n = O ( Δ_n √(n log p)). 
Combining the above three inequalities yields that when Δ_n √(n log p)→ 0, ℙ (t - Δ_n ≤W_j < t + Δ_n) = ℙ ( √(n) t - √(n)Δ_n ≤√(n)W_j ≤√(n) t + √(n)Δ_n) ≤ C √(n)Δ_n [1 + O (√(n)Δ_n log p)] {√(2)/√(π) c_2, j[ 1 - Φ( √(n) t /c_1, j) ] exp{- (√(n) t )^2 /2 c_2, j^2 } + √(2)/√(π) c_1, j[ 1 - Φ( √(n) t /c_2, j) ] exp{- (√(n) t ) ^2 /2 c_1, j^2 }}. Next we need to deal with the denominator ℙ (√(n)W_j ≥ t). Via integration by parts, we can deduce that for t ∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)], ℙ (√(n)W_j ≥√(n) t) = 2 [ 1 - Φ( √(n) t / c_1, j) ] [ 1 - Φ( √(n) t/c_2, j) ] ≥C{ (√(n) t)^-1[ 1 - Φ( √(n) t / c_1, j) ] exp{ - (√(n) t )^2 /2 c_2, j^2 } + (√(n) t)^-1[ 1 - Φ( √(n) t / c_2, j) ] exp{ - (√(n) t )^2 /2 c_1, j^2 }} ≥C (√(n) t)^-1 f_√(n)W_j (√(n) t) , where we have used the definition of the density in (<ref>) and the fact that 1 - Φ(x) ≥ 0.75 x^-1 e^- x^2/ 2 for x ≥ 4, and C is some constant depending on c_1, j and c_2, j. Combining (<ref>) and (<ref>) and using some direct calculations, we can obtain the bound for the ratio in R_n in the second range sup_t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p))∑_j ∈ℋ_0 ∩ N_dep^cℙ (t - Δ_n ≤W_j < t + Δ_n) /∑_j ∈ℋ_0 ∩ N_dep^cℙ (W_j ≥ t ) ≤C√(n)Δ_n·√(n) G^-1 ( c_1 q a_n / p ) = O (√(n)Δ_n √(log p)). This together with the result for the first range proved previously leads to R_n = O (√(n)Δ_n √(log p)). Finally, plugging (<ref>) into (<ref>) yields (<ref>) because (log p)^1/γ m_n / a_n → 0 and √(n)Δ_n (log p)^1/2 + 1/γ→ 0. Proof of (<ref>). Recall from Condition <ref> that p_1^-1∑_j ∈ℋ_1ℙ ( W_j < - t ) ≤ G(t) for t ∈ (0, C √(n^-1log p)) with C some large constant. Also, note that Δ_n = o(G^-1 (c_1 q a_n /p)) since √(n)Δ_n → 0 by assumption and G^-1 (c_1 q a_n/p) = O(√(n^-1log p)) as shown in the proof of (<ref>). It follows from some direct calculations that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + Δ_n ) ≤ a_n^-1 (p - p_0) G( G^-1 ( c_1 q a_n / p ) - Δ_n ) = c_1 q (p - p_0) / p + a_n^-1(p - p_0) | G' ( ξ ) |Δ_n, where ξ is some number lying between G^-1 ( c_1 q a_n / p ) and G^-1 ( c_1 q a_n / p ) - Δ_n. From (<ref>) and f_√(n)W_j (√(n)ξ )≤ C with C>0 some constant, we can deduce that | G' (ξ)| = ∑_j ∈ℋ_0 p_0^-1√(n) f_√(n)W_j (√(n)ξ ) ≤C √(n) m_n /p_0 + p_0^-1∑_j ∈ℋ_0 ∩ N_dep^c √(n) f_√(n)W_j (√(n)ξ ) ≤C √(n) m_n /p_0 + C p_0^-1√(n)·√(n) G( c_1 q a_n /p) ∑_H_0 ∩ N_dep^cℙ( W_j ≥ G( c_1 q a_n /p) ) ≤C √(n) m_n /p_0 + C p_0^-1√(n log p ) p_0 c_1 q a_n /p, where the second last step above is due to (<ref>). Therefore, substituting the bound above into (<ref>) gives that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p + Δ_n ) ≤c_1 q (p - p_0)/p + C Δ_n √(n) m_n (p - p_0 )/a_n p_0 + C Δ_n √(n log p ) q (p - p_0)/p → 0, where we have used the assumption that p_0 / p → 1, Δ_n √(n log p )→ 0, and m_n / a_n → 0. This derives (<ref>), which concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The main intuition of the proof is that when the approximate augmented data matrix ^ is close to its perfect counterpart ^, the corresponding Lasso estimators would be close as well. From the definitions of _j in (<ref>) and _j in (<ref>), it holds that max_1 ≤ j ≤ 2p | β_j - β̂_j | ≤max_1 ≤ j ≤ 2p | β_j^ - β̂_j^ | + max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j|. We will aim to prove that for some large enough constant C, ℙ( ^ - ^_2 ≤ C Δ_n s √(log p/n)) → 1, ℙ( max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j|≤ C Δ_n s√(log p/n))→ 1. 
Then combining the two results above can establish the desired conclusion of Lemma <ref>. We next proceed with proving (<ref>) and (<ref>). Proof of (<ref>). It follows from the Karush–Kuhn–Tucker (KKT) condition that n^-1 [^]^⊤^ (^ - ^ ) = n^-1 [^]^⊤ - λζ, n^-1 [^]^⊤^ (^ - ^ ) = n^-1 [^]^⊤ - λζ̂, where ζ = (ζ_1, …, ζ_2p) and ζ̂ = (ζ̂_1, …, ζ̂_2p) with ζ_j = {[ (β^_j) β_j^≠ 0,; ∈ [-1, 1] β_j^ = 0, ]. ζ̂_j = {[ (β̂_j^) β̂_j^≠ 0,; ∈ [-1, 1] β̂_j^ = 0. ]. Taking the difference between (<ref>) and (<ref>) above leads to n^-1 [^]^⊤^ (^ - ^ ) + n^-1([^]^⊤^ - [^]^⊤^) (^ - ^) = - n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) + n^-1( ^ - ^)^⊤ - λ (ζ - ζ̂). Furthermore, multiplying both sides of the equation above by (^ - ^ )^⊤ yields that n^-1^ (^ - ^ )_2^2 = n^-1( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^) - n^-1 ( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) + n^-1 ( ^ - ^)^⊤( ^ - ^)^⊤ - λ ( ^ - ^)^⊤ (ζ - ζ̂). We claim that the last term on the right-hand side of the expression above satisfies that ( ^ - ^)^⊤ (ζ - ζ̂) ≥ 0 . To understand this, observe that when both β_j^ and β̂_j^ are nonzero or zero, it is easy to see that (β_j^ - β̂_j^) (ζ_j - ζ̂_j ) ≥ 0. When either of β_j^ and β̂_j^ is zero, without loss of generality let us assume that β_j^ = 0 and β̂_j^≠ 0. When β_j^ = 0 and β̂_j^ > 0, it follows that ζ_j ≤ 1 = ζ̂_j and hence (β_j^ - β̂_j^) (ζ_j - ζ̂_j) = - β̂_j^ ((ζ_j - ζ̂_j)) ≥ 0. Similarly, we can show that (β_j^ - β̂_j^) (ζ_j - ζ̂_j) ≥ 0 when β_j^=0 and β̂_j^ < 0. Thus, the last term on the right-hand side of (<ref>) above satisfies that -( ^ - ^)^⊤ (ζ - ζ̂) ≤ 0 . We next examine the three terms on the right-hand side of the earlier expression above separately. First, let us observe that n^-1 [^]^⊤^ - [^]^⊤^_max ≤ n^-1 [^]^⊤(^ - ^) _max+ n^-1 ( ^ - ^)]^⊤^_max ≤max_jn^-1/2_j^_2max_jn^-1/2(_j^ - _j^)_2 + max_jn^-1/2_j^_2max_jn^-1/2(_j^ - _j^)_2. Under Condition <ref> and the sub-Gaussian assumption for , it can be shown that ℙ( n^-1 [^]^⊤^ - [^]^⊤^_max≥ C Δ_n ) → 0 for some constant C > 0. From the sparsity of and in Condition <ref>, we have that with probability 1 - o(1), the first term on the right-hand side of (<ref>) can be bounded as n^-1|( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^)| ≤ C Δ_n s ^ - ^_2^2. By the Cauchy–Schwarz inequality, we can bound the second term on the right-hand side of (<ref>) as |n^-1 ( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^ )| ≤^ - ^_2 n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) _2. Finally, with the aid of Condition <ref> on sparsity and Condition <ref> on the restrictive eigenvalues, the left-hand side of (<ref>) can be lower bounded by c_1^ - ^_2^2. Combining all the results above and applying the Cauchy–Schwarz inequality to the second and third terms on the right-hand side of (<ref>), we can deduce that as Δ_n s → 0, the representation in (<ref>) entails that with probability 1 - o(1), ^ - ^_2 ≲ n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^) _2 + max_J: |J| ≤ C s n^-1( ^_J - ^_J )^⊤_2 := I_1 + I_2. We will bound the two terms I_1 and I_2 above separately. It follows from (<ref>), the sparsity of and ^, and Lemma <ref> that with probability 1 - o(1), I_1 ≤ C Δ_n s^1/2^ - ^_2 ≤ C Δ_n s √(log p/n). As for term I_2, conditional on (^, ^ ) we have that for each 1 ≤ j ≤ 2p, n^-1/2( ^_j - ^_j )^⊤ d∼ N (0, n^-1^_j - ^_j _2^2 ). Thus, it holds that ℙ( I_2≥ C σΔ_n √(s log n /n )) ≤ℙ( s max_1 ≤ j ≤ 2p ( n^-1/2( ^_j - ^_j )^⊤)^2 ≥ C^2 σ^2 Δ_n^2 s log n ) = ℙ( max_1 ≤ j ≤ 2p n^-1/2^_j - ^_j _2 |Z| ≥ C σΔ_n √(log n)), where Z d∼ N(0, σ^2) is independent of ^ and ^. 
Moreover, Condition <ref> implies that max_1 ≤ j ≤ 2p^_j - ^_j _2 ≤Δ_n with probability 1 - o(1). Then using the union bound, we can obtain that for some constant C > √(2), ℙ( I_2 ≥ C σΔ_n √(s log n /n )) ≤ℙ( |Z| > C σ√(log n)) + ℙ(max_1 ≤ j ≤ 2p^_j - ^_j _2 ≥Δ_n) → 0. Consequently, substituting (<ref>) and (<ref>) into (<ref>) leads to (<ref>). Further, applying (<ref>) again with the bounds in (<ref>), (<ref>), (<ref>), and (<ref>) yields that ℙ(n^-1/2^ (^ - ^ ) _2 ≤ C Δ_n s √(log p/n)) → 1. Proof of (<ref>). Let us first state three results (<ref>), (<ref>), and (<ref>) below that will be used repeatedly in our proof. With similar arguments as for (<ref>) and (<ref>) and the union bound, we can deduce that under Conditions <ref>–<ref>, ℙ( max_1 ≤ j ≤ 2p _j - _j _2 ≤ C ( m_n^1/2Δ_n + Δ_n m_n √(log p/n)) ≤ C m_n^1/2Δ_n )→ 1, ℙ(n^-1/2max_j _-j^ ( _j - _j )_2 ≤ C Δ_n m_n √(log p/n)) → 1, where we have used √(m_n log p/n)→ 0 for showing (<ref>). Observe that for 1 ≤ j ≤ 2p, n^-1/2 ( _j - _j ) _2 ≤ n^-1/2 ( _j^ - _j^) _2 + n^-1/2_-j^ ( _j - _j ) _2 + n^-1/2 (_-j^ - _-j^) _j _2 + n^-1/2 (_-j^ - _-j^) (_j - _j) _2. Then it follows from the sparsity of 𝒮_j = (_j) ∪(_j) ∪(_j), the sub-Gaussianity of X_j, and the bound in (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ 2p n^-1/2 ( _j - _j ) _2 ≤ C (Δ_n + Δ_n m_n √(log p/n) + Δ_n m_n^1/2max_1 ≤ j ≤ 2p_j_2 + m_n Δ_n √(log p/n)) ≤ C Δ_n m_n^1/2max_1 ≤ j ≤ 2p_j_2 ≤ C Δ_n m_n^1/2. We are now ready to establish (<ref>). In particular, we have the decomposition for the main term in (<ref>) max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j| ≤max_1 ≤ j ≤ 2p | (_j - _j )^⊤( - ^^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤(^^ - ^^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤( - ^^) ( 1/_j^⊤^_j - 1/_j^⊤^_j )| := P_1 + P_2 + P_3. We will investigate the three terms P_1, P_2, and P_3 above separately. Let us first deal with term P_1. Note that P_1 ≤max_1 ≤ j ≤ 2p | (_j - _j )^⊤^( ^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤/_j^⊤^_j|. Since d∼ N( 0, I_n) and is independent of design matrix , it holds that conditional on design matrix , (_j - _j )^⊤/_j^⊤^_jd∼ N ( 0, _j - _j _2^2 / [_j^⊤^_j ]^2 ). This together with the bounds in (<ref>) and (<ref>) leads to ℙ( max_1 ≤ j ≤ 2p | (_j - _j )^⊤/_j^⊤^_j| > C m_n^1/2Δ_n √(log p/n)) = ∑_j = 1^2pℙ( _j - _j _2 / |_j^⊤^_j | · | Z | > C m_n^1/2Δ_n √(log p/n)) ≤∑_j = 1^2pℙ( _j - _j _2 / n · | Z | > C m_n^1/2Δ_n √(log p/n)) + o(1) ≤∑_j = 1^2pℙ (|Z| > C √(log p)) + o(1) = o(1), where Z d∼ N(0, σ^2) is independent of ^ and ^, and C is some large constant that may take different value at each appearance. In addition, from (<ref>), the Cauchy–Schwarz inequality, Lemma <ref>, and (<ref>), we can deduce that with probability 1 - o(1), max_1 ≤ j ≤ 2p | (_j - _j )^⊤^( ^ - ^) /_j^⊤^_j| ≤max_1 ≤ j ≤ 2p _j - _j _2 ^ (^ - ^) _2 / | _j^⊤^_j | ≤ C Δ_n m_n^1/2√(s log p/n). Substituting (<ref>) and (<ref>) into (<ref>) yields that with probability 1 - o(1), P_1 ≤ C Δ_n m_n^1/2√(s log p/n). We next turn to the bound for term P_2. It is easy to see that P_2 ≤max_1 ≤ j ≤ 2p | _j^⊤^(^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) ^/_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤^(^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤ ( ^ - ^ ) ^/_j^⊤^_j| := P_21 + P_22 + P_23 + P_24. 
Regarding term P_21, in view of (<ref>) and the definition of _j, we have that with probability 1 - o(1), P_21 ≤max_1 ≤ j ≤ 2p | β^_j - β̂_j^ | + max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C Δ_n s √(log p/n) + max_1 ≤ j ≤ 2p | ( _j + ^_-j ( _j - _j ) ) ^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C Δ_n s √(log p/n) + max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j|. We will bound the last two terms on the very right-hand side of the expression above separately. Since for ℓ≠ j, n^-1 [ _j^⊤^_ℓ] = 0 due to zero correlation between _j and _-j^, and _j and _ℓ^ both have i.i.d. sub-Gaussian entries, we can show that for ℓ≠ j, ℙ( max_1 ≤ j ≤ 2p max_ℓ≠ j n^-1 | _j^⊤_ℓ^ | ≥ C √(log p/n)) ≤ C p^-1→ 0. This combined with (<ref>), the sparsity assumption that |J| = |() ∪() ∪()| ≲ s, and the result in (<ref>) yields that with probability 1 - o(1), the second term on the very right-hand side of (<ref>) above can be bounded as max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p max_J': | J' | ≲ s _j^⊤_ J'∖{j}_2 ·^_J'∖{j} - ^_J'∖{j}_2 ≤ C √( s log p/n)·Δ_n s √(log p/n)≤ C Δ_n s √(log p/n), where the last inequality above holds due to the assumption that √( s log p/n)→ 0. By the Cauchy–Schwarz inequality, we can deduce that with probability 1 - o(1), the third term on the very right-hand side of (<ref>) above can be bounded as max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p ^ ( _j - _j ) _2 ·^_-j(^_-j - ^_-j)_2. An application of Lemma <ref>, (<ref>), and the sub-Gaussian assumption of X_j gives that with probability 1 - o(1), the second term on the right-hand side above can be bounded as max_1 ≤ j ≤ 2p n^ -1/2^_-j(^_-j - ^_-j)_2 ≤ n^ -1/2^(^ - ^)_2 + max_1 ≤ j ≤ 2p n^ -1/2^_j _2 | β_j - β̂_j| ≤ C Δ_n s √(log p/n). Then plugging (<ref>) into (<ref>) yields that max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C √(m_n log p/n)· C Δ_n s √(log p/n)≤ C Δ_n s √(log p/n), where the last inequality above is due to the assumption that √(slog p/n)→ 0 and m_n ≲ s. Hence, it follows from substituting (<ref>) and (<ref>) into (<ref>) that with probability 1 - o(1), P_21≤ C Δ_n s √(log p/n). We next proceed with considering term P_22 introduced in (<ref>). Observe that ^ - ^ = [0, - ] and ^ = (^⊤ , 0^⊤)^⊤. Then it holds that ( ^ - ^ ) ^ = 0. From (<ref>) and the Cauchy–Schwarz inequality, we can deduce that P_22 ≤max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) ^/_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) (^ - ^ ) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p_j_2 · ( ^ - ^ ) (^ - ^ ) _2. Moreover, we have _j = _j + _-j^ (_j - _j). Since the components of _j are i.i.d. sub-Gaussian random variables, it is easy to see that ℙ (max_1 ≤ j ≤ 2pn^-1/2_j _2 ≥ C) → 0 for some large enough constant C > 0. Further, it follows from the sub-Gaussianity of X_j and the sparsity of _j and _j that max_1 ≤ j ≤ 2p n^-1/2_-j^ (_j - _j) _2 ≤ C m_n^1/2√(m_n log p/n) ≤ C m_n √(log p/n)→ 0. Thus, when m_n √(log p/n)→ 0 we have ℙ ( n^-1/2max_1 ≤ j ≤ 2p _j _2 ≥ C) → 0 . Similarly, based on Lemma <ref> and the sparsity of ^ and ^, it holds that with probability 1-o(1), n^-1/2 ( ^ - ^ ) (^ - ^ ) _2 ≤max_J': |J'| ≲ s( ∑_j ∈ J' n^-1^_j - ^_j _2^2 )^1/2·^_J' - _J'^_2 ≤ C s^1/2Δ_n · ( √(s log p/n) + Δ_n s √(log p /n) ) ≤ C Δ_n s √(log p/n), where the last inequality above holds due to Δ_n s^1/2→ 0. Consequently, combining the above three inequalities shows that with probability 1 - o(1), P_22≤ C Δ_n s√(log p/n). 
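The √(log p / n) scaling of the maximal empirical cross-correlation between independent sub-Gaussian columns, which is invoked repeatedly in the bounds above, can be illustrated by a quick simulation; the sketch below assumes numpy and uses illustrative dimensions only, showing that the maximum stays of the same order as √(log p / n).

```python
import numpy as np

rng = np.random.default_rng(2)
for n, p in [(200, 100), (500, 400), (1000, 1000)]:
    X = rng.normal(size=(n, p))              # independent sub-Gaussian (here Gaussian) columns
    C = (X.T @ X) / n                        # empirical cross-products n^{-1} X_j^T X_l
    np.fill_diagonal(C, 0.0)
    print(n, p, round(float(np.max(np.abs(C))), 4), round(float(np.sqrt(np.log(p) / n)), 4))
```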
We now deal with term P_23 in (<ref>). In view of the Cauchy–Schwarz inequality, (<ref>), (<ref>), and (<ref>), we can obtain that with probability 1 - o(1), P_23 ≤max_1 ≤ j ≤ 2p _j - _j _2 /_j^⊤^_j·^(^ - ^ ) _2 ≤ C √(m_n log p/n)·Δ_n s √(log p/n) ≤ C Δ_n s √(log p/n). As for term P_24, since ( ^ - ^ ) = 0 it follows that with probability 1 - o(1), P_24 = max_1 ≤ j ≤ 2p | (_j - _j )^⊤ ( ^ - ^ ) (^ - ^) /_j^⊤^_j | ≤max_1 ≤ j ≤ 2p _j - _j _2 /_j^⊤^_j· ( ^ - ^ ) (^ - ^ ) _2 ≤ C √(m_n log p/n)·Δ_n s √(log p/n) ≤ C Δ_n s √(log p/n), where we have applied the bounds in (<ref>), (<ref>), and (<ref>). Consequently, plugging (<ref>), (<ref>), (<ref>), and (<ref>) into (<ref>) yields that with probability 1 - o(1), P_2≤ C Δ_n s √(log p/n). Now we proceed with dealing with term P_3. Note that P_3 ≤max_1 ≤ j ≤ 2p | _j ^⊤ ( - ^^ ) | ·| _j^⊤^_j - _j^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | . From (<ref>) and (<ref>), we can see that with probability 1 - o(1), max_1 ≤ j ≤ 2p n^-1/2_j _2 ≤max_1 ≤ j ≤ 2p n^-1/2_j _2 + max_1 ≤ j ≤ 2p n^-1/2_j - _j _2 ≤ C + C m_n Δ_n ≤ C. It follows from (<ref>), Condition <ref>, and the sub-Gaussian distribution of _j^ that with probability 1 - o(1), n^-1 | (_j - _j)^⊤^_j | ≤ C Δ_n m_n^1/2, n^-1 | _j^⊤ (^_j - _j^)| ≤ C Δ_n. Then with the aid of (<ref>), we can show that with probability 1 - o(1), min_1 ≤ j ≤ 2p n^-1 | _j^⊤^_j | ≥min_1 ≤ j ≤ 2p n^-1| _j^⊤^_j | - max_1 ≤ j ≤ 2p( n^-1 | (_j - _j)^⊤^_j | - n^-1 | _j^⊤ (^_j - _j^)| ) ≥ C - Cm_n Δ_n - C Δ_n ≥ C as m_n Δ_n → 0. As for the second component on the right-hand side of (<ref>) above, combining the results in (<ref>), (<ref>), and (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ 2p| _j^⊤^_j - _j^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | ≤max_1 ≤ j ≤ 2p| ( _j - _j )^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | + max_1 ≤ j ≤ 2p| _j^⊤ (_j^ - _j^) | /| _j^⊤^_j | ·| _j^⊤^_j | ≤ C n^-1 ( m_n^1/2Δ_n + Δ_n ). Regarding the first component on the right-hand side in (<ref>), from (^ - ^) = 0 we can deduce that max_1 ≤ j ≤ 2p n^-1| _j ^⊤ ( - ^^ ) | ≤max_1 ≤ j ≤ 2p n^-1| _j ^⊤ | + max_1 ≤ j ≤ 2p n^-1| _j ^⊤^ ( ^ - ^ ) | + max_1 ≤ j ≤ 2p n^-1| _j ^⊤ (^ - ^) ^| . Since d∼ N( 0, σ^2I_n ), it is easy to see that for the standard normal random variable Z, ℙ(max_1 ≤ j ≤ 2p n^-1| _j ^⊤ | > C √(log p/n)) = ℙ(max_1 ≤ j ≤ 2p n^-1_j_2 · |Z| > C √(log p/n)) ≤ℙ (|Z| > C √(log p)) → 0. Further, by Lemma <ref>, the sub-Gaussianity of X_j, and the sparsity of ^ and ^, we can obtain that with probability 1 - o(1), n^-1 | _j^⊤^ ( ^ - ^ ) | ≤ n^-1 | _j^⊤^ ( ^ - ^ ) | + n^-1 | _j^⊤^ ( ^ - ^ ) | ≤ C( √( slog p /n) + Δ_n s √(log p /n)) ≤ C √( slog p /n). Similarly, since (^ - ^) = 0, it holds that with probability 1 - o(1), n^-1 | _j^⊤ (^ - ^) ^ | = n^-1 | _j^⊤ (^ - ^) ( ^ -^ ) | ≤ C Δ_n s^1/2·√( s log p/n) ≤ C √( s log p/n). Consequently, by m_n≲ s in Condition <ref> we have that with probability 1 - o(1), P_3 ≤ C m_n^1/2Δ_n ·√(slog p/n)≤ C Δ_n s √(log p/n). Finally, a combination of (<ref>), (<ref>), (<ref>), and (<ref>) establishes (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Using the definitions of W_j and w_j and the triangle inequality, we see that ∑_j = 1^p ℙ (| W_j - w_j | ≥ C √(n^-1log p )) ≤∑_j = 1^p ℙ( √(n)| |β_j - β_j | - |β_j + p - β_j + p| | ≥ C √(log p )) ≤∑_j = 1^p [ ℙ( √(n) |β_j - β_j | ≥ C √(log p ) / 2 ) + ℙ( √(n) |β_j + p - β_j + p| ≥ C √(log p ) /2 ) ]. The main idea of the proof is to exploit the decomposition in (<ref>) and the observation that the main term therein follows the normal distribution. 
Let us start with bounding the error term in (<ref>). We claim that with probability 1 - O (p^-3), max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k^ - β_k^)/_j^⊤^_j | ≤C m_n^1/2 s log p /√(n). From the fact that β^_j + p = 0 for 1 ≤ j ≤ p and the bound in (<ref>), since m_n^1/2 s log p /√(n)≪√(log p) we can deduce through the union bound that ∑_j = 1^p ℙ( √(n) |β_j - β_j | ≥ C √(log p ) / 2 ) ≤∑_j = 1^p ℙ( |_j^⊤ | /_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) + ∑_j = 1^p ℙ( max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | > C m_n^1/2 s log p /√(n)) ≤∑_j = 1^p ℙ( |_j^⊤ | /_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) + O(p^-1). Recall the result (<ref>) in Lemma <ref> and that _j^⊤/_j _2 ∼ N(0, σ^2). As m_n log p/n = o(1), it holds that for some large constant C> 0, ∑_j = 1^p ℙ( _j^⊤/_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) ≤∑_j = 1^p ℙ( _j^⊤/_j _2 ≥C√(log p )) = p exp{ - C^2 log p / 2 }→ 0. Similarly, we can show that ∑_j = 1^p ℙ( _j + p^⊤/_j+p_2 ·√(n)τ_j ≥ C √(log p )) → 0. Plugging the two inequalities above into (<ref>) leads to the desired result in Lemma <ref>. It remains to establish (<ref>). Proof of (<ref>). Observe that for k ≠ j, n^-1_j^⊤^_k = n^-1(^_j - ^_-j_j) ^⊤^_k = n^-1_j ^⊤^_k - n^-1 (_j - _j )^⊤ (^_-j )^⊤^_k. Since _j and _k^ are uncorrelated, it follows from the sub-Gaussian assumption in Condition <ref> that for some constant C > 0, ℙ( n^-1 | _j ^⊤^_k | ≥ C √(log p /n)) ≤ 2 p^-3. In light of lemma <ref> and the sub-Gaussian assumption on _j, we can deduce that with probability 1 - O(p^-3), | n^-1 (_j - _j )^⊤ (^_-j )^⊤^_k | ≤ n^-1/2^_-j (_j - _j) _2 n^-1/2_k _2 ≤ C √(m_n log p/n). Plugging the above two results into (<ref>), when m_nlog p = o(n) an application of the union bound shows that with probability 1 - O( p^-1), max_ 1 ≤ j ≤ p max_k ≠ j n^-1 | _j^⊤^_k | ≤ C √(log p /n) + C √(m_nlog p /n) ≤ C √( m_n log p /n). Similarly, when √(log p /n) = o(1), we can show that there exists some constant C>0 such that with probability 1 - O(p^-1), min_1 ≤ j ≤ p n^-1_j^⊤^_j ≥ C. Consequently, plugging (<ref>), (<ref>), and (<ref>) into Lemma <ref> yields that with probability 1 - O (p^-3), max_1 ≤ j ≤ p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | ≤√(n)max_1 ≤ j ≤ pmax_k ≠ j | _j^⊤^_k | /min_1 ≤ j ≤ p | _j^⊤^_j | ·^ - ^_1 ≤ C √( m_n log p )· s √(log p/n) = C m_n^1/2 s log p /√(n), which establishes (<ref>). This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The intuition of the proof is that the sparsity of ^A implies the weak dependence among the components of the knockoff statistic vector = (W_1, …, W_p), which entails the weak dependence among the indicator functions 1 (W_j > t)'s. For 1 ≤ j ≤ p, let us define N_j = { l ∈ℋ_0: _j, l^A ≠ 0 }. From the sparsity assumption on ^A in Condition <ref>, we see that |N_j| ≤ m_n for any 1 ≤ j≤ p. Then we can obtain through expanding the variance that ( ∑_j ∈ℋ_01 (W_j > t) ) = ∑_j ∈ℋ_0∑_l ∈ N_j^c ∩ℋ_0 l ≠ j( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) + ∑_j ∈ℋ_0∑_l ∈ N_j ∪{ j }( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) := V_1(t) + V_2(t). We will deal with terms V_1(t) and V_2(t) above separately. Regarding the second term V_2(t), it follows from |N_j ∪{ j}| ≤ m_n + 1 that sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_2 (t) /p_0 G(t) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0∑_l ∈ N_j ∪{ j }ℙ (W_j ≥ t) /∑_j ∈ℋ_0ℙ (W_j ≥ t) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 (m_n + 1) ℙ (W_j ≥ t) /∑_j ∈ℋ_0ℙ (W_j ≥ t) ≤ m_n + 1. 
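The way such weak, block-type dependence inflates the variance of an indicator sum by at most a factor of order m_n can be seen in a small simulation. The following minimal sketch assumes numpy/scipy and an illustrative equicorrelated-block dependence structure, which is not the design considered in the paper; the empirical variance stays below the m_n p_0 G(t)-type bound.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
p0, m, t, reps, rho = 2000, 10, 2.0, 2000, 0.8   # blocks of size m with within-block correlation rho
G_t = norm.sf(t)                                 # null tail probability G(t) for each W_j

counts = np.empty(reps)
for r in range(reps):
    # equicorrelated Gaussians within each block, independent across blocks
    common = np.repeat(rng.normal(size=p0 // m), m)
    W = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * rng.normal(size=p0)
    counts[r] = np.sum(W >= t)

print(float(np.var(counts)), m * p0 * G_t)       # empirical variance vs. the m_n p_0 G(t)-type bound
```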
We claim that as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_1 (t) / [p_0 G (t)]^2 → 0. Therefore, combining (<ref>), (<ref>), and (<ref>) leads to the desired result of Lemma <ref>. It remains to establish (<ref>). Proof of (<ref>). Let {η_j}_j = 1^p be a sequence of independent random variables with η_j having density function given by h_j (t) = √(2)/√(π) a_j[1 - Φ( b_j^-1 t ) ] exp{ - t^2 / (2 v_ j^2) } + √(2)/√(π) b_j[1 - Φ( v_ j^-1 t ) ] exp{ - t^2 / (2 b_ j^2) }, where v_j = √( 2 ( e_j^2)^-1 (1 - (e_j, e_j+p)) ) and b_j =√( 2 ( e_j^2)^-1 (1 + (e_j, e_j+p)) ). The essential step in the proof is to show that for l ∈ N_j^c∩ℋ_0, ( |ξ_j| - |ξ_j+p|, |ξ_l| - |ξ_l+p|) d→ (η_j, η_l). We proceed with proving such result. Define δ_n = C m_n^1/2 s log p/√(n). We claim that for l ≠ j and l ∈ N_j^c∩ℋ_0, ℙ ( W_j ≥ t, W_l ≥ t ) ≤ℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t, W_l ≥ t ) ≥ℙ ( η_j ≥√(n) t + δ_n ) ℙ ( η_l ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t) ≥ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t) ≤ℙ ( η_j ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). The proofs for (<ref>)–(<ref>) above are analogous. Without loss of generality, we will present only the proof of (<ref>) and postpone it to the end of the proof for Lemma <ref>. In view of (<ref>)–(<ref>) above and the definition of V_1 (t) in (<ref>), we can deduce that V_1 (t) = ∑_j ∈ℋ_0∑_l ∈ N_j^c ∩ℋ_0 l ≠ j( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) ≤∑_j ∈ℋ_0∑_ l ≠ j{ℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n)) - ℙ ( η_j ≥√(n) t + δ_n ) ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) } + O(p^-1) = ∑_j ∈ℋ_0∑_ l ≠ jℙ (√(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ℙ ( η_l ≥√(n) t - δ_n ) + ∑_j ∈ℋ_0∑_ l ≠ jℙ ( η_j ≥√(n) t - δ_n ) ℙ (√(n) t - δ_n ≤η_l ≤√(n) t + δ_n) + ∑_j ∈ℋ_0∑_ l ≠ jℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) · O( √(m_n (log p)^3 /n)) + O(p^-1) := V_11(t) + V_12(t) + V_13(t) + O(p^-1). Recall that p_0 G(t) = ∑_j ∈ℋ_0ℙ (W_j ≥ t). Then it follows from the definition of V_11(t) and (<ref>) that V_11(t)/ [p_0 G(t)]^2 ≤∑_j ∈ℋ_0∑_ l ≠ jℙ (√(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ℙ ( η_l ≥√(n) t - δ_n ) /[ ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n) )) + O(p^-2) ]^2. We will consider two ranges t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)) and t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j), G^-1 (c_1 q a_n/p) ] separately. For the first range t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), we can see that √(n) t is upper bounded by a constant. Since δ_n = o(1) by the assumption that m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, it follows that √(n) t + δ_n and √(n) t - δ_n are both of a constant order. Hence, by the definition of the density function h_j(·) of η_j shown in (<ref>), max_1 ≤ j ≤ p h_j(u) is bounded by a constant for u ∈ [√(n) t - δ_n, √(n) t + δ_n], and C_1 ≤min_1≤ j ≤ pℙ (η_j ≥√(n) t + δ_n) ≤max_1 ≤ j ≤ pℙ (η_j ≥√(n) t - δ_n) ≤ C_2 for some positive constants C_1 < C_2. Thus, it is easy to see that sup_t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j))V_11(t)/ [p_0 G(t)]^2 ≤ C p_0^2 δ_n max_1≤ j ≤ psup_u ∈ [√(n) t - δ_n, √(n) t + δ_n] h_j (u) max_1 ≤ j ≤ pℙ (η_j ≥√(n) t - δ_n) / p_0^2 [min_1 ≤ j ≤ pℙ (η_j ≥√(n) t + δ_n) ]^2 ≤ C δ_n = C m_n^1/2 s log p/√(n). We proceed with considering the second range t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j), G^-1 (c_1 q a_n/p) ). 
An application of similar arguments as for (<ref>) shows that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ≤ C √(n) G^-1 (c_1 q a_n/p) ·δ_n. Moreover, it follows from plugging t = G^-1(c_1 q a_n/p) into (<ref>) and taking summation over j ∈ℋ_0 that c_1 q a_n p_0/p ≤∑_j ∈ℋ_0ℙ (η_j ≥√(n) G^-1(c_1 q a_n/p) - δ_n) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). Then from the density function h_j(t) for η_j, we can obtain through some direct calculations that ℙ ( η_j ≥ t) = 2 [1 - Φ(v_j^-1 t)] [1 - Φ(b_j^-1 t)]. Further, combining (<ref>) and (<ref>) yields that G^-1(c_1 q a_n/p) = O(√(log p/n) ). Substituting this bound into (<ref>) implies that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ≤ C m_n^1/2 s (log p)^3/2/√(n), where in the last inequality above we have utilized the definition of δ_n. Thus as m_n^1/2 s (log p)^3/2/√(n)→ 0, it holds that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]| ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - δ_n ) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) - 1 | ≤ C m_n^1/2 s (log p)^3/2/√(n)→ 0. Since p_0 G(t) ≥ c_1 q a_n p_0 / p →∞ for 0≤ t ≤ G^-1 (c_1 q a_n/p), it follows from taking summation over j ∈ℋ_0 on both sides of (<ref>) that as m_n^1/2 (log p)^3/2 / √(n)→ 0, ∑_j ∈ℋ_0ℙ (η_j ≥√(n) t - δ_n) ≥ C ( c_1 q a_n p_0/ p + O(p^-2 ) ) →∞, which along with (<ref>) implies that ∑_j ∈ℋ_0ℙ (η_j ≥√(n) t + δ_n) ≥ C ( c_1 q a_n p_0/ p + O(p^-2 ) ) →∞. Combining this with (<ref>), we can further bound the ratio in (<ref>) in the second range of t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)) as sup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]V_11(t)/ [p_0 G(t)]^2 ≤{[∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ]^2 /[ ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ]^2 + ∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) } ×(1 + O ( √(m_n (log p)^3/n) + p^-2)) ≤ C m_n^1/2 s (log p)^3/2/√(n). Hence, we see from the above result and (<ref>) that sup_t ∈ (0, G^-1(c_1 q a_n/p))V_11(t)/ [p_0 G(t)]^2 ≤ C m_n^1/2 s (log p)^3/2/√(n). In a similar manner, we can deduce that sup_t ∈ (0, G^-1(c_1 q a_n/p))V_12(t)/ [p_0 G(t)]^2 ≤ C m_n^1/2 s (log p)^3/2/√(n) and sup_t ∈ (0, G^-1(c_1 q a_n/p))V_13(t)/ [p_0 G(t)]^2 ≤ C √(m_n (log p)^3/n). Combining (<ref>) and (<ref>)–(<ref>) yields (<ref>) as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0. This completes the proof of (<ref>). It remains to establish (<ref>). Proof of (<ref>). Note that for j ∈ℋ_0, it holds that β_j^ = β_j + p^ = 0 under the setting of the linear model. Then it follows that W_j = | β_j | - |β_j+p| = | β_j - β_j^ | - | β_j + p - β_j+p^ |. For 1 ≤ j ≤ 2p, let us define ξ_j = √(n)τ_j ·_j^⊤/_j _2. In view of the expression in (<ref>) and the bound of the remainder term established in (<ref>), an application of the total probability inequality gives that ℙ( W_j ≥ t, W_l ≥ t ) ≤ℙ( | ξ_j | - | ξ_j + p| ≥√(n) t - δ_n, | ξ_l | - | ξ_l + p| ≥√(n) t - δ_n ) + ℙ( max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | > δ_n ) = ℙ( | ξ_j | - | ξ_j + p| ≥√(n) t - δ_n, | ξ_l | - | ξ_l + p| ≥√(n) t - δ_n ) + O (p^-3). It suffices to consider probability ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n ) for t ∈ (0, √(n) G^-1 (c_1 q a_n/p) ]. 
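The closed-form tail probability of η_j derived above, a product of two Gaussian tails, reflects that a random variable with density h_j can be realized as |Z_1| - |Z_2| for a bivariate normal pair whose sum and difference are independent. This can be cross-checked by simulation, as in the minimal sketch below, which assumes numpy/scipy and purely illustrative values of the common variance and correlation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
sigma2, rho, n_draws = 1.7, 0.4, 2_000_000       # illustrative variance and correlation
cov = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
Z = rng.multivariate_normal([0.0, 0.0], cov, size=n_draws)
eta = np.abs(Z[:, 0]) - np.abs(Z[:, 1])

v = np.sqrt(2.0 * sigma2 * (1.0 - rho))          # standard deviation of Z_1 - Z_2
b = np.sqrt(2.0 * sigma2 * (1.0 + rho))          # standard deviation of Z_1 + Z_2
for t in [0.5, 1.0, 2.0]:
    mc = float(np.mean(eta >= t))
    closed_form = 2.0 * norm.sf(t / v) * norm.sf(t / b)
    print(t, round(mc, 5), round(closed_form, 5))
```

The Monte Carlo frequencies and the closed-form values agree up to simulation error.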
A useful observation is that ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n ) ≤ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n, max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| }≤ C √(log p )) + ℙ( max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| } > C √(log p)) := P_1 + P_2. We will consider terms P_1 and P_2 above separately. Let us first deal with term P_2. From the definition of ξ_j, (<ref>) in Lemma <ref>, and the fact that _j^⊤/_j _2 d∼ N(0, 1), we can obtain through the union bound that as m_n log p /n→ 0 and for some large constant C > 4 ( e_j^2)^-1/2, ℙ ( |ξ_j| ≥ C √(log p) ) ≤ℙ( | _j^⊤/_j _2 | ≥ 2 C ( e_j^2)^1/2√(log p) / 3 ) + ℙ (√(n)τ_j ≥ 3 ( e_j^2)^-1/2 /2 ) = O (p^-3). Hence, the inequality above implies that P_2 = O( p^-3). We next proceed with analyzing term P_1. Given ^, denote by f_ξ, ξ_j+p (x, y) the density of (ξ_i, ξ_j+p) and f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) the conditional density of (ξ_l, ξ_l+p ) | (ξ_j, ξ_j+p). Then probability P_2 can be written as ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n, max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| ≤ C √(log p )) = _^[∫_|x| - |y| ≥ t - δ_n |x| ≤ C √(log p) |y| ≤ C √(log p) f_ξ, ξ_j+p (x, y) ·∫_|u| - |w| ≥ t - δ_n |u| ≤ C √(log p) |w| ≤ C √(log p) f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) du dv dx dy ]. Since d∼ N( 0, I_n) and is independent of ^, it is easy to see that for j ≠ l, conditional on ^ we have (ξ_j, ξ_j + p, ξ_l, ξ_l + p)^⊤| ^d∼ N ( 0, ), where the covariance matrix is given by = [ _11_12; _21_22 ] with _11 = [ n τ_j^2 n _j^⊤_j + p/ |_j^⊤_j^| |_j + p^⊤_j + p^|; n _j^⊤_j + p/ |_j^⊤_j^| |_j + p^⊤_j + p^| n τ_j + p^2 ], _12 = _21^⊤ = [ n _j^⊤_l/ |_j^⊤_j^| |_l^⊤_l^| n _j^⊤_l + p/ |_j^⊤_j^| |_l + p^⊤_l + p^|; n _l^⊤_j + p/ |_l^⊤_l^| |_j + p^⊤_j + p^| n _l +p^⊤_j + p/ |_l + p^⊤_l +p^| |_j + p^⊤_j + p^| ], _22 = [ n τ_l^2 n _l^⊤_l + p/ |_l^⊤_l^| |_l + p^⊤_l + p^|; n _l^⊤_l + p/ |_l^⊤_l^| |_l + p^⊤_l + p^| n τ_l + p^2 ]. It follows from the conditional distribution of the multivariate normal distribution that given ^, f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, v | x, y) = 1 /2π | _22 - _21_11^-1_12 |^1/2× exp{ - 1/2[ ( u v ) - _21_11^-1( x y ) ]^⊤ (_22 - _21_11^-1_12)^-1 ·[ ( u v ) - _21_11^-1( x y ) ] }. For l ≠ j and l ∈ N_j^c, it holds that (e_j, e_l) = _j, l^A/_j, j^A _l, l^A = 0. Since _j, l^A = _j, l+p^A = _j + p, l^A = _j+p, l+p^A due to the symmetric structure of , we also have (e_j, e_l+p) = (e_j+p, e_l) = (e_j+p, e_l+p ) = 0 for l ≠ j and l ∈ N_j^c. Then it follows from (<ref>) in Lemma <ref> that for l ≠ j and l ∈ N_j^c, with probability 1- O(p^-3) n^-1_j^⊤_l ≤ C √(m_n log p/n), n^-1_j^⊤_l+p≤ C √(m_n log p/n), n^-1_j+p^⊤_l≤ C √(m_n log p/n), n^-1_j+p^⊤_l+p≤ C √(m_n log p/n). Similarly, for 1 ≤ j ≤ 2p we can show that with probability 1 - O(p^-3), n^-1_j^⊤^_j ≥ C. _j^⊤_l = (_j^ - _-j^_j)^⊤ (_l^ - _-l^_l) = ( _j + ^_-j (_j - _j) )^⊤ ( _l + _-l^ (_l - _l) ) = _j^⊤_l + _j^⊤_-l^ (_l - _l) + _l^⊤_-j^ (_j - _j) + [ _-j^ (_j - _j) ]^⊤_-l^ (_l - _l). For l ≠ j and l ∈ N_j^c, we know that [e_j e_l] = (e_j, e_l) = _j, l^A/_j, j^A _l, l^A = 0. In addition, since e_j and e_l are sub-Gaussian random variables, we can obtain by applying Bernstein’s inequality for sub-exponential random variables that ℙ (n^-1 | _j^⊤_l | ≥ C √(log p/n)) = O(p^-3). 
Moreover, in view of Lemma <ref>, sparsity of _j and _j and the fact that (e_j, X_l) = 0 for j ≠ l, we have with probability 1 - O(p^-3) n^-1| _j^⊤_-l^ (_l - _l) | ≤ n^-1| _j^⊤_j^ (_l, j - _l) | + n^-1| ∑_k ≠ j, l_j^⊤_k^ (_l,k - _l, k) | ≤ n^-1| _j^⊤_j^ (_l, j - _l) | + n^-1[max_J': |J'| ≲ m_n∑_k ≠ j, l k ∈ J' (_j^⊤_k^ )^2 ∑_k ≠ j, l k ∈ J' (_l,k - _l, k)^2 ] ^2 ≤ C√(m_n log p/n) +C √(m_n log p/n)·√(m_n log p/n)≤ C √(m_n log p/n). Similarly, it holds that with probability 1 - O(p^-3), n^-1| _l^⊤_-j^ (_j - _j) | ≤ C √(m_n log p/n). Additionally, we have that with probability 1 - O(p^-3), | [ _-j^ (_j - _j) ]^⊤_-l^ (_l - _l) | ≤ C √(m_n log p/n)·√(m_n log p/n)≤ C √(m_n log p/n). Therefore, for l ≠ j and l ∈ N_j^c, it follows that with probability 1 - O(p^-3), n^-1_j^⊤_l ≤ C √(m_n log p/n). Then from (<ref>), (<ref>), and the definition of _12, we can obtain that with probability 1 - O(p^-3), _12_max≤ C √(m_n log p/n). We have shown in (<ref>) that _12_max≤ C √(m_n log p/n) with probability 1 - O(p^-3). Similarly, when e_j^2 e_j+p^2 - ([e_j e_j+p] )^2 > C for some constant C > 0, it can be shown that | V_11 | ≥ C and |V_22| ≥ C with probability 1 - O(p^-3). Let us define an event 𝒞 = {^: _12_max≤ C_1 √(m_n log p/n), |_22| ≥ C_2, |_11| ≥ C_2, _11_max≤ C_3, _22_max≤ C_3}. We have shown that ℙ (𝒞) ≥ 1- O(p^-3). Then it is straightforward to see that conditional on event 𝒞, we have 1 /2π | _22 - _21_11^-1_12 |^1/2 = 1/2 π | V_22 |^-1/2(1 + O ( m_n log p/n) ) and _22^-1 - (_22 - _21_11^-1_12)^-1_max≤ C m_n log p/n. In addition, given event 𝒞 and the range that |x| ≤ C √(log p) and |y| ≤ C √(log p), it holds that _21_11^-1( x y ) _2 ≤ C √(m_n/n)log p. Further, given event 𝒞 and that max{|u|, |w|, |x|, |y|}≤ C √(log p), it follows from (<ref>)–(<ref>) that as m_n (log p)^3/n = o(1), | [ ( u w ) - _21_11^-1( x y ) ]^⊤ (_22 - _21_11^-1_12)^-1[ ( u w ) - _21_11^-1( x y ) ] - ( u w )^⊤_22^-1( u w ) | ≤ C √(m_n (log p)^3/n). Hence, substituting the bounds in (<ref>) and (<ref>) into (<ref>) yields that as m_n (log p)^3/n = o(1), f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) = 1 /2π | _22 |^1/2exp{ - 1/2( u w )^⊤_22^-1( u w ) }·(1 + O ( √(m_n (log p)^3/n))) = f_ξ_l, ξ_l+p (u, w) (1 + O ( √(m_n (log p)^3/n))), which entails that (ξ_l, ξ_l+p) is asymptotically independent of (ξ_j, ξ_j+p) for l ≠ j and l ∈ N_j^c. By plugging (<ref>) into (<ref>), we can deduce that P_1 ≤𝔼{1 (𝒞) ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ×ℙ( |ξ_l| - |ξ_l+p| ≥ t - δ_n, max{ |ξ_l|, |ξ_l+p|}≤ C √(log p) | ^)} ×(1 + O ( √(m_n (log p)^3/n))) + ℙ (𝒞^c ), where ℙ (𝒞^c ) = O(p^-3). We next show that given ^, |ξ_j| - |ξ_j+p| converges in distribution to η_j. Given ^, we see that (ξ_j, ξ_j+p) d∼ N( 0, _11). Without ambiguity, let us denote by _11 = [ σ_1, n^2 ρ_n σ_1, nσ_2, n; ρ_n σ_1, nσ_2, n σ_2, n^2 ] for simpler notation, where σ_1, n^2 = n τ_j^2, σ_2, n^2 = n τ_j + p^2, and ρ_n = _j^⊤_j+p / (_j_2 _j+p_2 ). We define an event ℰ = {|σ_1, n^2 - ( e_j^2)^-1 | ≤ C √(m_n log p/n), |σ_2, n^2 - ( e_j+p^2)^-1 | ≤ C √(m_n log p/n), | ρ_n - (e_j, e_j+p) | ≤ C √(m_n log p/n)}. It follows from Lemma <ref> that ℙ (ℰ) ≥ 1 - O(p^-3). 
Some straightforward calculations show that for t > 0, given ^ the density of |ξ_j| - |ξ_j+p| can be written as f_|ξ_j| - |ξ_j+p| (t) = √(2)/√(π)a_1, n[1 - Φ( a_2, n^-1 t ) ] exp{ - t^2 / (2 a_1, n^2) } + √(2)/√(π)a_3, n[1 - Φ( a_4,n^-1 t ) ] exp{ - t^2 / (2 a_3, n^2) }, where a_1, n = √(σ_1, n^2 + σ_2, n^2 - 2 ρ_n σ_1, nσ_2, n), a_2, n = σ_1, nσ_2, n a_1, n√( (1 - ρ_n^2))/σ_2, n^2 - ρ_n σ_1, nσ_2, n, a_3, n = √(σ_1, n^2 + σ_2, n^2 + 2 ρ_n σ_1, nσ_2, n), a_4, n = σ_1, nσ_2, n a_3, n√( (1 - ρ_n^2))/σ_2, n^2 + ρ_n σ_1, nσ_2, n . Recall the notation that v_j = √( 2 ( e_j^2)^-1 (1 - (e_j, e_j+p)) ) and b_j =√( 2 ( e_j^2)^-1 (1 + (e_j, e_j+p)) ). It holds that (e_j^2 ) = (_j, j^A)^-1 = (^A_j+p, j+p)^-1 = (e_j+p^2) due to the symmetry of ^A. On event ℰ, we have that | a_1, n / v_j - 1 | ≤ C √(m_n log p/n), | a_2, n / b_j - 1 | ≤ C √(m_n log p/n), |a_3, n / b_j - 1 | ≤ C √(m_n log p/n), | a_4, n / v_j - 1 | ≤ C √(m_n log p/n). Thus, in view of the definition of h_j(t) in (<ref>) and (<ref>), it follows that as |t| ≤ C√(log p), f_|ξ_j| - |ξ_j+p| (t) = h_j (t) (1 + O ( √(m_n (log p)^3/n))). With the aid of the above result, we can deduce that on event ℰ, ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ≤ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, |ξ_j| - |ξ_j+p| ≤ C √(log p) | ^) ≤(∫_ t - δ_n ^ C √(log p) h_j (u) du ) (1 + O ( √(m_n (log p)^3/n))) = ℙ ( t - δ_n ≤η_j ≤ C √(log p) ) (1 + O ( √(m_n (log p)^3/n))) = [ℙ ( η_j ≥ t - δ_n ) - ℙ (η_j > C √(log p) ) ] (1 + O ( √(m_n (log p)^3/n))). Moreover, in light of (<ref>) it is easy to see that ℙ (η_j > C √(log p) ) = O(p^-3) for some large constant C, which together with (<ref>) leads to ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ≤ℙ ( η_j ≥ t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O (p^-3). Plugging (<ref>) into (<ref>) shows that P_1 ≤ℙ ( η_j ≥ t - δ_n ) ℙ ( η_l ≥ t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). Finally, combining (<ref>), (<ref>), (<ref>), and (<ref>) yields (<ref>). Similarly, we can also establish (<ref>)–(<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Let us first prove (<ref>). In the proof of Lemma <ref> in Section <ref>, we have established the lower bound and upper bound for ℙ (W_j ≥ t) in (<ref>) and (<ref>), respectively. Recall the definitions that δ_n = C m_n^1/2 s log p/√(n) and b_n = C Δ_n s √(log p/n). For the numerator and denominator in (<ref>), we can write that p_0( G(t - b_n) - G(t + b_n) ) = ∑_j ∈ℋ_0[ ℙ (W_j ≥ t - b_n ) - ℙ (W_j ≥ t + b_n ) ] ≤∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - √(n) b_n -δ_n ) (1 + O ( √(m_n (log p)^3/n))) - ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + √(n) b_n + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-2) ≤∑_j ∈ℋ_0ℙ ( √(n) t - √(n) b_n - δ_n ≤η_j ≤√(n) t + √(n) b_n + δ_n ) + ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - √(n) b_n -δ_n ) · O ( √(m_n (log p)^3/n)) + O(p^-2) and p_0 G(t) ≥∑_j ∈ℋ_0ℙ (η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-2), respectively. It follows from (<ref>)–(<ref>), similar arguments as for (<ref>), and G^-1 (c_1 q a_n/p) = O(√(log p/n)) in the proof of Lemma <ref> that as √(n) G^-1 (c_1 q a_n/p) (√(n) b_n+ δ_n ) → 0, sup_t ∈ (0, G^-1 (c_1 q a_n/p)] G(t - b_n ) - G(t + b_n) / G(t) ≤ C √(log p) (√(n) b_n + δ_n) + C √(m_n (log p)^3/n) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ). Thus, we see that when m_n^1/2 s (log p)^3/2 + 1/γ/√(n) + Δ_n s (log p)^1 + 1 /γ→ 0, the desired result (<ref>) holds. We next proceed with establishing (<ref>). In view of Condition <ref>, it holds that p_1^-1∑_j ∈ℋ_1ℙ ( W_j < - t ) ≤ G(t) for t = O(√(n^-1log p)). 
Moreover, we have b_n = C Δ_n s √(log p/n) = o(G^-1(c_1 q a_n /p)) due to the assumption Δ_n s → 0 and G^-1 (c_1 q a_n /p) = O (√(log p/n) ). Then it follows that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + b_n ) ≤ a_n^-1 (p - p_0) G( G^-1 ( c_1 q a_n / p ) - b_n ) = c_1 q (p - p_0) / p + a_n^-1(p - p_0) [ G ( G^-1 ( c_1 q a_n / p ) - b_n ) - G ( G^-1 ( c_1 q a_n / p ) ) ]. For notational simplicity, let us define t_n = G^-1 ( c_1 q a_n / p ). With the aid of the upper and lower bounds for ℙ (W_j ≥ t) given in (<ref>) and (<ref>), we can deduce that G ( t_n - b_n ) - G ( t_n ) ≤ p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) (1 + O( √(m_n (log p)^3 /n)) ) - p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n + δ_n ) (1 + O( √(m_n (log p)^3 /n)) ) + O(p^-2) = p_0^-1∑_j ∈ℋ_0ℙ (√(n) t_n - √(n) b_n - δ_n ≤η_j ≤√(n) t_n + δ_n ) + p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) · O( √(m_n (log p)^3 /n)) + O(p^-2). An application of similar arguments as for (<ref>) leads to ℙ (√(n) t_n - √(n) b_n - δ_n ≤η_j ≤√(n) t_n + δ_n ) /ℙ (η_j ≥√(n) t_n + δ_n ) ≤ C √(t_n) ( √(n) + δ_n ) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) and | ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) /ℙ (η_j ≥√(n) t_n + δ_n ) - 1 | ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ). It follows from the lower bound in (<ref>) and G(t_n) = G(G^-1(c_1 q a_n/p)) = c_1 q a_n/p that as m_n (log p)^3/n→ 0, p_0^-1∑_j ∈ℋ_0ℙ (η_j ≥√(n) t_n + δ_n ) ≤ C (c_1 q a_n/p + O(p^-3) ) ≤ C c_1 q a_n/p. Therefore, combining (<ref>)–(<ref>) shows that G ( t_n - b_n ) - G ( t_n ) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q a_n /p + C √(m_n (log p)^3 /n)·c_1 q a_n /p + O(p^-2) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q a_n /p + O(p^-2). Finally, substituting the above bound into (<ref>) yields that as m_n^1/2 s (log p)^3/2 /√(n) + Δ_n s (log p) → 0, a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p + Δ_n ) ≤c_1 q (p - p_0)/p + C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q (p - p_0)/p + O( p - p_0 / a_n p^2) → 0, where we have used the assumption that p_0/p → 1. This establishes (<ref>), which concludes the proof of Lemma <ref>.
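As a quick numerical sanity check of the closed-form density for |ξ_j| - |ξ_{j+p}| displayed earlier in these proofs (with a_{1,n}, ..., a_{4,n} as defined there), the following Python snippet compares it against a Monte Carlo estimate for an arbitrary, illustrative choice of (σ_1, σ_2, ρ). This is not part of the argument; the parameter values are placeholders.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative parameters for the bivariate normal (xi_j, xi_{j+p}).
s1, s2, rho = 1.0, 1.3, 0.4

a1 = np.sqrt(s1**2 + s2**2 - 2 * rho * s1 * s2)
a2 = s1 * s2 * a1 * np.sqrt(1 - rho**2) / (s2**2 - rho * s1 * s2)
a3 = np.sqrt(s1**2 + s2**2 + 2 * rho * s1 * s2)
a4 = s1 * s2 * a3 * np.sqrt(1 - rho**2) / (s2**2 + rho * s1 * s2)

def density(t):
    """Closed-form density of |xi_j| - |xi_{j+p}| at t > 0, as displayed in the proof."""
    return (np.sqrt(2 / np.pi) / a1 * (1 - norm.cdf(t / a2)) * np.exp(-t**2 / (2 * a1**2))
            + np.sqrt(2 / np.pi) / a3 * (1 - norm.cdf(t / a4)) * np.exp(-t**2 / (2 * a3**2)))

# Monte Carlo estimate of the same density from samples.
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x, y = rng.multivariate_normal([0, 0], cov, size=2_000_000).T
d = np.abs(x) - np.abs(y)

for t in (0.2, 0.8, 1.5):
    h = 0.02
    mc = np.mean((d > t - h) & (d < t + h)) / (2 * h)
    print(f"t={t:.1f}  closed-form={density(t):.4f}  Monte Carlo={mc:.4f}")

The two columns agree to Monte Carlo accuracy, confirming the displayed formula for positive t.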
http://arxiv.org/abs/2307.04028v1
20230708183125
Measuring the Success of Diffusion Models at Imitating Human Artists
[ "Stephen Casper", "Zifan Guo", "Shreya Mogulothu", "Zachary Marinov", "Chinmay Deshpande", "Rui-Jie Yew", "Zheng Dai", "Dylan Hadfield-Menell" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
[ Measuring the Success of Diffusion Models at Imitating Human Artists equal* Stephen Casperequal,MIT Zifan Guoequal,MIT Shreya MogulothuMIT Zachary MarinovMIT Chinmay DeshpandeHarvard Rui-Jie YewMIT,Brown Zheng DaiMIT Dylan Hadfield-MenellMIT MITMIT HarvardHarvard University BrownBrown University Stephen [email protected] Machine Learning, ICML 0.3in ] § OVERVIEW Modern diffusion models have set the state-of-the-art in AI image generation. Their success is due, in part, to training on Internet-scale data which often includes copyrighted work. This prompts questions about the extent to which these models learn from, imitate, or copy the work of human artists. This work suggests that questions involving copyright liability should factor in a model's capacity to imitate an artist. Tying copyright liability to the capabilities of the model may be useful given the evolving ecosystem of generative models. Specifically, much of the legal analysis of copyright and generative systems focuses on the use of protected data for training <cit.>. However, generative systems are often the result of multiple training processes. As a result, the connections between data, training, and the system are often obscured. In our approach, we consider simple image classification techniques to measure a model's ability to imitate specific artists. Specifically, we use Contrastive Language-Image Pretrained (CLIP) <cit.> encoders to classify images in a zero-shot fashion. Our process first prompts a model to imitate a specific artist. Then, we test whether CLIP can be used to reclassify the artist (or the artist's work) from the imitation. If these tests match the imitation back to the original artist, this suggests the model can imitate that artist's expression. Our approach is simple and quantitative. Furthermore, it uses standard techniques and does not require additional training. We demonstrate our approach with an audit of Stable Diffusion's <cit.> capacity to imitate 70 professional digital artists with copyrighted work online. When Stable Diffusion is prompted to imitate an artist from this set, we find that the artist can be identified from the imitation with an average accuracy of 81.0%. Finally, we also show that a sample of the artist's work can be matched to these imitation images with a high degree of statistical reliability. Overall, these results suggest that Stable Diffusion is broadly successful at imitating individual human artists. Code is available https://colab.research.google.com/drive/1ScHo9uMdUgId0DlSr4W4RgnMD44dLiku?usp=sharinghere. § BACKGROUND Contrastive Language-Image Pretraining (CLIP): CLIP <cit.> is a technique for training AI systems that encode images and text into fixed-length vector representations. CLIP image and text encoders are trained to produce similar encodings of image/caption pairs and dissimilar encodings of image/caption non-pairs. The more geometrically distant two encodings of images or captions are, the less related they are according to the encoder, and vice versa. Using this principle, <cit.> introduced a method to classify an image among a set of labels based on the distances between encodings. We use this method in our proposed test. Diffusion Models: Diffusion models <cit.> such as Stable Diffusion <cit.> and Midjourney <cit.>, are capable of generating images from arbitrary, user-specified prompts. Their success has largely been due to training on large amounts of text/image data, often including copyrighted works <cit.>. 
Modern image-generation diffusion models are trained using CLIP-style encoders. When given an encoding of a caption, a diffusion model is trained to generate an image corresponding to the caption <cit.>. Accordingly, a diffusion model that generates images from these embeddings is trained to be the inverse of a CLIP image encoder. Legal Motivation: In the United States, <cit.> established that copyright infringement “is measured by considering the qualitative and quantitative significance of the copied portion in relation to the plaintiff’s work as a whole”. However, the subjective nature of these determinations makes practical enforcement complicated. <cit.>. In evaluating copyright questions involving AI systems, legal analyses have focused on how copyrighted work is used in the system's training data <cit.>, but such a focus on training data does not connect liability to an AI system's ability to copy an artist. In contrast, we show how standard image classification techniques can be used to help determine how successful AI image generators are at imitating individual human artists. This approach is consistent, quantitative, and connected to the capabilities of the resulting AI system. Our goal, however, is not to automate determinations of infringement but to demonstrate how tried and tested image classification techniques from machine learning can be used to analyze legal claims. § EXPERIMENTS We conduct two complementary experiments to evaluate Stable Diffusion's ability to imitate human artists. First, we classify human artists from imitations of their work, and second, we match real work from human artists to imitations. Both experiments suggest that Stable Diffusion is broadly successful at imitating human artists. §.§ Identifying Artists from Imitations Method: We used CLIP encoders to classify artists from Stable Diffusion's imitations of them. We selected 70 artists from the LAION-aesthetics dataset <cit.>, the dataset used to train Stable Diffusion. We selected these 70 as artists who may potentially be harmed by digital imitations using several criteria: each artist is alive, has a presence on digital art platforms (Instagram, DeviantArt, and ArtStation), publishes artwork or sells their artwork (e.g., prints or digital works), and has more than 100 images in the LAION dataset. Figure <ref> outlines our method. We prompted https://huggingface.co./runwayml/stable-diffusion-v1-5Stable Diffusion (v1.5) to generate images in the style of each artist, using prompts of the form “Artwork from <artist’s name>”. Example images are in Figure <ref>. We then used https://huggingface.co./openai/clip-vit-base-patch32CLIP encoders to classify each image among a set of 73 labels. The 73 labels consisted of each of the 70 artist's prompts (“Artwork from <artist’s name>”) plus three default labels: “Artwork”, “Digital Artwork”, and “Artwork from the public domain.” These additional labels lend insight into how confident CLIP is that an image imitates a particular artist's style instead of some more generic style. We then classified each imitation image among these labels using the technique from <cit.>. CLIP-based classification produces a probability of an image matching each label, and we evaluate the model on the correctness of its most-likely prediction and confidence in the correct artists. Results: We repeated the experiment with the 70 artists ten times to reduce the effect of random variation. 
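The zero-shot classification step described above can be sketched with the Hugging Face implementation of the CLIP checkpoint linked in the text. The artist names and the image path below are placeholders, and the snippet is a schematic illustration of the procedure rather than the authors' code.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

artists = ["Artist A", "Artist B", "Artist C"]          # stand-ins for the 70 artists
labels = [f"Artwork from {name}" for name in artists]
labels += ["Artwork", "Digital Artwork", "Artwork from the public domain"]

image = Image.open("imitation.png")                     # a Stable Diffusion imitation (placeholder path)

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)   # one probability per label
best = int(probs.argmax())
print(f"predicted label: {labels[best]!r}  (p = {probs[best]:.3f})")

An imitation counts as correctly classified when the most probable label is the prompt of the artist used to generate it.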
On average, CLIP correctly classified 81.0% of the generated images as works made by artists whose names were used to generate them. Over the ten trials, 69 of the 70 artists were correctly classified in a plurality of the ten trials. Overall, these results suggest that Stable Diffusion has a broad-ranging ability to imitate the styles of individual artists. We compared these results to two baselines. First, we implemented a random-name baseline by running the same experiment with 70 random names from a https://randomwordgenerator.com/name.phprandom name generator. Since Stable Diffusion was not trained on artists with these names (unless a random name is coincidentally the same as some artist's), this experiment serves as a proxy for how Stable Diffusion would handle artists not in its training data. In this case, only 6 names (8.6%) were guessed correctly. Second, a random guess would only result in a successful classification every 1 in 73 attempts (1.4%) on average. We visualize results from our main experiment alongside the controls in Figure <ref>. Results are Robust to Different Sets of Artists: To test whether our 70 artists were especially classifiable, we ran the original experiment but with a larger set of indiscriminately-selected artists and found similar results. We selected the 250 artists with the highest number of images in the LAION dataset and found that CLIP correctly classified 81.2% of the images. This demonstrates that successful classification transcends a particular specific set of artists. §.§ Matching Artwork to Imitations Method: Our first experiment tested how easily artists could be identified from diffusion model imitations of them. To provide a complementary perspective, we also directly study the similarity of artists' digital works to Stable Diffusion's imitations of them. For each of the 70 artists, we retrieve the top result obtained by Google Image searching “<artist's name> art.” As before, we then use Stable Diffusion to generate 10 images for each artist with the prompt “Artwork from [artist's name].” We then compare the real images and generated images. Distances are measured by first encoding images using the CLIP image encoder and calculating the cosine distance between encodings. Results: For each artist, we calculate whether real images from artists are more similar to imitations of that artist or other artists. The significance was calculated using a rank sum test with a Bonferroni correction factor of 70. Results are in Figure <ref>. 90% (63/70) of the experiments produce p values less than 0.05. This compares to an average of 22.8% (16/70) for a control experiment using random artist assignments of real images. These results further support that Stable Diffusion is broadly successful at imitating artists. § CONCLUSION We have demonstrated how AI image classification can help to measure the success of diffusion models imitating human artists. We argue that these methods can provide a practical way to tie questions about copyright liability to the capabilities of a model instead of its training data alone. By matching imitation images to both artists' names and works, we find that Stable Diffusion is broadly successful at imitating human digital artists. We hope that future work can use image classification to analyze legal claims and to test defenses against AI imitation of copyrighted work. § ACKNOWLEDGEMENTS We thank Taylor Lynn Curtis and Lennart Schulze for feedback. icml2023
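The artwork-matching test of the second experiment can likewise be sketched as follows. Here real_emb and fake_emb are assumed dictionaries of CLIP image embeddings (the names and shapes are hypothetical), and the snippet only illustrates the cosine-distance comparison and the Bonferroni-corrected rank-sum test, not the authors' pipeline.

import numpy as np
from scipy.stats import ranksums

def cosine_distance(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def match_test(real_emb, fake_emb, artist, n_artists):
    """Compare distances from an artist's real work to their own vs. others' imitations."""
    own = [cosine_distance(real_emb[artist], g) for g in fake_emb[artist]]
    others = [cosine_distance(real_emb[artist], g)
              for a in fake_emb if a != artist for g in fake_emb[a]]
    # One-sided rank-sum test: are distances to the artist's own imitations smaller?
    stat, p = ranksums(own, others, alternative="less")
    return min(p * n_artists, 1.0)      # Bonferroni correction over all artists

# Example call (assumes the embedding dictionaries have been computed):
# p_adj = match_test(real_emb, fake_emb, "Artist A", n_artists=70)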
http://arxiv.org/abs/2307.07222v1
20230714082747
Õptimal Fault-Tolerant Reachability Labeling in Planar Graphs
[ "Shiri Chechik", "Shay Mozes", "Oren Weimann" ]
cs.DS
[ "cs.DS" ]
Shiri Chechik (Tel Aviv University, Israel, [email protected]); Shay Mozes (Reichman University, Israel, [email protected]); Oren Weimann (University of Haifa, Israel, [email protected]) We show how to assign labels of size Õ(1) to the vertices of a directed planar graph G, such that from the labels of any three vertices s,t,f we can deduce in Õ(1) time whether t is reachable from s in the graph G ∖{f}. Previously it was only known how to achieve Õ(1) queries using a centralized Õ(n) size oracle [SODA'21]. § INTRODUCTION Reachability is undeniably one of the most fundamental properties of graphs. Determining the reachability between all pairs of vertices in a directed graph can be done in Õ(min{n^ω, mn}) time (where ω<2.373 is the exponent for matrix multiplication <cit.>). Reachability oracles. A reachability oracle is a compact data structure that allows one to efficiently answer reachability queries between any pair of vertices in the graph. Henzinger et al. <cit.> provided conditional lower bounds for combinatorial constructions of reachability oracles, showing that no non-trivial combinatorial reachability oracle constructions exist for general directed graphs. (The term combinatorial usually refers to algorithms that do not utilize fast matrix multiplications.) Specifically, they proved that it is impossible to design a reachability oracle that simultaneously achieves O(n^3-ϵ) preprocessing time and O(n^2-ϵ) query time, for any ϵ > 0. Since non-trivial reachability oracles are not attainable for general graphs, efforts have been directed towards developing improved reachability oracles for specific graph families. Notably, graphs possessing separators of size s(n) admit a straightforward reachability oracle of size Õ(n· s(n)) and query time Õ(s(n)). Consequently, planar graphs (and more extensive graph classes such as H-minor free graphs) admit oracles of size Õ(n^1.5) and query time Õ(√(n)). In a groundbreaking result, Thorup <cit.> introduced a near-optimal reachability oracle for directed planar graphs with Õ(n) space and Õ(1) query time. Subsequently, Holm et al. <cit.> further improved this construction to a truly optimal oracle with O(n) space and O(1) query time. Fault-tolerant reachability oracles. In real-world networks, the vulnerability to changes and failures is a significant concern, and effectively handling failures has become crucial in modern computing. This motivation has sparked extensive research on data structures that can operate robustly in the presence of failures. Specifically, a fault-tolerant reachability oracle is designed to handle queries of the form s,t,f and determines whether vertex t is reachable from vertex s in the graph G ∖{f} (the graph G with vertex f and all its incident edges removed). Fault-tolerant reachability oracles have been studied extensively in general graphs, see e.g. <cit.>. In planar graphs, one can leverage the more powerful fault-tolerant distance oracles. Baswana et al. 
<cit.> introduced a single-source fault-tolerant distance oracle with near-optimal Õ(n) space and Õ(1) query time. They further extended their construction to handle the all-pairs variant of the problem at the cost of an increased Õ(n^1.5) space and Õ(√(n)) query time. Subsequently, Charalampopoulos et al. <cit.> presented an improved fault-tolerant distance oracle for the all-pairs version in planar graphs. Their fault-tolerant oracle accommodates multiple failures; however, it comes with a polynomial trade-off between the size of the oracle and the query time that may be less favorable compared to previous constructions. It is important to highlight that, in the context of the all-pairs version, the fault-tolerant oracles mentioned above have considerably worse bounds when compared to the best known distance oracles without faults for planar graphs. Finally, in a groundbreaking development, Italiano et al. <cit.> in SODA 2021 introduced a nearly optimal fault-tolerant reachability oracle for directed planar graphs. Their innovative approach finally achieved near-optimal Õ(n) size and Õ(1) query time. Fault-tolerant reachability labeling. A more powerful concept than oracles is that of labeling schemes, where each vertex is assigned a compact label, and the objective is to answer queries based solely on the labels of the involved vertices. Labeling schemes have proven to be valuable in distributed settings where inferring properties like distances or reachablity is advantageous using only local information such as the labels of the source and destination vertices. Such scenarios arise in communication networks or disaster-stricken areas, where communication with a centralized entity may be infeasible or impossible. Labeling schemes provide a natural and structured framework for studying the distribution of graph information. Various problems have been explored within this model, including adjacency <cit.>, distances <cit.>, flows and connectivity <cit.>, and Steiner tree <cit.>. See <cit.> for a survey. In this paper, we focus on fault-tolerant reachability labeling in directed planar graphs. The objective is to assign a compact label to each vertex, such that given the labels of any three vertices s,t,f, one can efficiently determine whether t is reachable from s in the graph . The best previously known labeling scheme for fault-tolerant reachability was the one for fault-tolerant distances by <cit.> in which the label size is Õ(n^2/3). Is this the best possible? Ideally, the goal would be to devise a labeling scheme in which the sum of the label sizes is roughly equal to the size of the state-of-the-art oracle. However, achieving this goal is not always possible. For example, in <cit.>, an almost optimal exact distance oracle is given for directed planar graphs (without faults) of size O(n^1+o(1)) and query time Õ(1). On the other hand, it was shown in <cit.> that exact distance labels (even without faults) for planar graphs necessitate polynomial-sized labels regardless of the query time. Given this discrepancy between oracles and labeling schemes for distances in planar graphs, a natural question arises: does the same discrepancy exist for fault-tolerant reachability? In other words, is it possible to design a labeling scheme for directed planar graphs with label size Õ(1) and query time Õ(1)? Or does a similar gap exist between fault-tolerant reachability oracles and labeling schemes, as in the case of distances? Our results. 
We answer this question in the affirmative by providing a near optimal labeling scheme for fault-tolerant reachability in directed planar graphs with Õ(1) label size and Õ(1) query time. To obtain a fault-tolerant reachability labeling scheme, one might hope to generalize existing labeling schemes for undirected graphs. Specifically, for undirected planar graphs, Abraham et al. <cit.> presented labels of size Õ(1) that for any fixed ϵ>0, from the labels of vertics s,t, and the labels of a set F of failed vertices, can report in Õ(|F|^2) time a (1+ϵ)-approximation of the shortest s-to-t path in the graph G∖ F. One would hope to generalize this result to the directed case, even just settling for the seemingly easier task of reachability, and even for a single fault. This is particularly tempting because previous results for the failure-free case have successfully generalized from undirected to directed planar graphs. For example, Thorup <cit.> was able to convert his (1+ϵ)-distance oracle for undirected planar graphs to also work in the directed case by employing clever ideas. However, in the presence of failures, the task becomes significantly more challenging. In a nutshell, one of the challenges in adapting undirected results to the directed case in the context of fault-tolerant reachability labeling in planar graphs is the following. In many planar graph papers, including <cit.> and <cit.>, the algorithms store distances from a vertex to other vertices in relevant separators. In Thorup's reachability oracle <cit.>, the algorithm stores, for each vertex s and each relevant path separator, the first vertex on the path that is reachable from s and the last vertex on the path that can reach s. To determine if vertex s can reach vertex t by a path that intersects the path separator, one can examine the information stored in the labels of s and t. By utilizing this stored information, it becomes possible to check the reachability between s and t. One of the main challenges we face with this approach is the occurrence of failures anywhere along the relevant path separator. A faulty vertex f on the path separator requires us to store additional information, including the closest vertex to f that appears after f on the path separator and is reachable from the starting vertex s. Considering that failures can happen at any vertex on the path separator, this means we would need to store all vertices that are reachable from s and to s on the path separator. This requirement renders the approach impractical. To address this issue, we need to adopt a different approach and develop new techniques specifically tailored for accommodating failures in the directed case. It is worth mentioning that in both our algorithm and the algorithm proposed in <cit.>, the most challenging scenario arises when all three vertices s,t, and f are situated on the same path separator P. In <cit.>, this situation was addressed by employing a data structure that extends dominator trees and previous data structures designed for handling strong-connectivity in general (non-planar) graphs under failures <cit.>. We tackle this case without relying on complex dominator trees or similar sophisticated techniques. Therefore, our algorithm not only achieves the milestone of introducing a near optimal efficient fault-tolerant labeling scheme for planar reachability, but it also boasts a simpler approach compared to the one employed in <cit.>. 
We firmly believe that the simplicity of our algorithm makes it an important milestone in generalization to labels for approximate distances and for multiple failures in directed planar graphs and related graph families. § PRELIMINARIES The decompostition tree. In <cit.>, Thorup proved that we can assume the graph G has an undirected spanning tree T (i.e., T is an unrooted spanning tree in the undirected graph obtained from G by ignoring the directions of edge) such that each path in T is the concatenation of O(1) directed paths in G. This way, we can describe the process of decomposing G into pieces in the undirected version of G. After describing the decomposition, we will replace each undirected path of T defined in the process by its O(1) corresponding directed paths in G. We therefore proceed to describe the decomopostion treating G as an undirected graph with a rooted spanning tree T. A balanced simple cycle separator <cit.> (cf. <cit.>) is a simple cycle C in G whose vertices can be covered by a single path of the (unrooted) spanning tree T. The removal of the vertices of C and their incident edges separates G into two roughly equal sized subgraphs. The recursive decomposition tree 𝒯 of G is defined as follows. Each node of 𝒯 corresponds to a vertex induced subgraph of G (called a piece). The root piece of 𝒯 is the entire graph G. The boundary ∂ H of a piece H is a set of vertex disjoint paths that lie on the faces of H that are not faces of G. A vertex is a boundary vertex if it belongs to some path of ∂ H. The boundary of the root piece is empty. We define the children of a piece H in 𝒯 with boundary ∂ H and a simple cycle separator C recursively. Let Q be the maximal subpath of C that is internally disjoint from ∂ H. The vertices of H that are enclosed by C (including the vertices of C) belong to one child of H. The vertices of Q and the vertices of H not enclosed by C belong to the other child. Note that the vertices of Q are the only vertices of H that belong to both children. The endpoints of Q that belong to ∂ H are called the apices of H. We call the path Q without the apices of H the separator of H. We do not include the apices in the separator to guarantee that it is vertex disjoint from ∂ H. The boundary ∂ H' of a child H' of H consists of the separator of H and of the subpaths of ∂ H induced by the vertices of H'. The leaves of 𝒯 (called atomic pieces) correspond to pieces of size O(1). The depth of 𝒯 is O(log n). For convenience, we consider all O(1) vertices of an atomic (leaf) piece that are not already boundary vertices as the separator of the piece. It follows that the boundary ∂ H of any piece H consists of O(log n) vertex disjoint paths (the subpath induced by the vertices of H on the separators of the of the ancestor pieces of H). Also, there are O(log n) apices along any root-to-leaf path in 𝒯. Having defined the decomposition tree 𝒯 we can go back to treating G as a directed graph. As we explained above, each path we had discussed in the undirected version of G is the union of O(1) directed path in G. From now on when we refer to the separator paths of a piece H (resp., paths of ∂ H), we mean the set of directed paths comprising the undirected separator of H (resp., the set of directed paths comprising the paths of ∂ H). To be able to control the size of the labels in our construction we need to be aware of the number of pieces of 𝒯 to which a vertex belongs. 
The only vertices of a piece H that belong to both its children are the vertices of the separator path of H and the (at most 2) apices of H. The above definitions imply that every vertex belongs to the separator of at most one piece in 𝒯. Hence, if a vertex is not an apex, it appears in O(log n) pieces of 𝒯. Apices, on the other hand require special attention because they may belong to many (i.e., ω(log n)) pieces of 𝒯; High degree vertices may be apices in many pieces, and we will need a special mechanism for dealing with such vertices. Dealing with apices (like dealing with holes in other works on planar graphs) introduces technical complications that are not pertinent to understanding the main ideas of our work.[A reader who is not interested in those details can safely skip the parts dealing with apices and just act under the assumption that each vertex appears in 2 atomic pieces (leaves) of 𝒯, and that the following definition of ancestor pieces of a vertex v just degenerates to the set of O(log n) ancestors of the 2 atomic pieces containing v.] We associate with every vertex v∈ G the (at most 2) rootmost pieces H in 𝒯 in which v is an apex (or the atomic pieces containing v if v is never an apex). We denote these pieces by H_v. Note that every piece that contains a vertex v is either an ancestor of a piece in H_v or a descendant of a piece in H_v. For a vertex v∈ G we define the ancestor pieces of v to be the set of (weak) ancestors in 𝒯 of the pieces H_v. By definition of H_v, every vertex, apex or not, has O(log n) ancestors pieces. We similarly define the ancestor separators/paths/apices of a vertex v∈ G as the separators/separator-paths/apices of any ancestor piece of v. See Figure <ref>. We say that the separator Q of a piece H separates two vertices u and v (in H) if any u-to-v path in H must touch the separator Q of H or an apex of H. I.e., if at least one of the following holds: (1) u∈ Q or u is an apex of H, or (2) v∈ Q or v is an apex of H, or (3) each of u and v is in one distinct child of H. Note that if Q separates u and v in H then every u-to-v path in G either touches Q or touches the boundary of H. For a subgraph H, a path P and a vertex v, let HvP denote the first vertex of P that is reachable from v in H, and let HvP denote the last vertex of P that can reach v in H. If vertex u appears before vertex v on a path P then we denote this by u<_P v (or simply u<v if P is clear from the context). Throughout the paper, we gradually describe the information stored in the labels along with the explanations of why this particular information is stored (and why it is polylogarithmic). To assist the reader, we highlight in gray the parts that describe the information stored. For starters, we let every vertex v∈ G store in its label, for every ancestor path P of v, the identity of P and, if v ∈ P, the location of v in P (so that given two vertices u,v of P, we can tell if u<v). We denote P[u,v] the subpath of P between vertices u and v. Thorup's non-faulty labeling. Using the above definitions and notations, it is now very simple to describe Thorup's non-faulty reachability labeling <cit.>. Consider any vertex v. Let H be the rootmost piece in 𝒯 in which v belongs to the separator. The crucial observation is that v is separated from every other vertex in G either by the separator of H or by the separator of some ancestor piece of H. 
Hence, every vertex v∈ G stores in its label GvP and GvP for every path P of the separator of every ancestor of the rootmost piece in which v belongs to the separator. Then, given a query pair u,v, there exists a u-to-v path in G if and only if GuP < GvP for one of the O(1) paths P of the separator of an ancestor piece of the rootmost piece whose separator separates u and v. Both u and v store the relevant information for these paths in their labels. We note that in Thorup's scheme we do not need to worry about apices since each vertex v only stores information in pieces above the first time v appears on a separator. § THE LABELING In this section, we explain our labeling scheme. I.e., what to store in the labels so that given the labels of any three vertices s,f,t we can infer whether t is reachable from s in . We call the s-to-t path R in the replacement path. Let be the rootmost piece in 𝒯 whose separator Q separates t and f. Let H be child piece of that contains t (if both children of contain t then, if one of the children does not contain f we choose H to be that child). Note that by choice of H, f ∉ H ∖∂ H. We assume without loss of generality that s ∈. We handle the other case by storing a symmetric label to the one described here in the graph G with all edges reversed (the reverse graph of G has exactly the same decomposition tree as G, but there the roles of s and t are swapped, so the assumption does hold). Observe that by definition of H and of separation, f ∈∂ H iff f ∈. In what follows, we separately handle the cases where f ∉ and f ∈ Q. The main challenge is in the latter case. §.§ When f ∉ (and so, f ∉∂ H) Consider first the case when the replacement path R does not touch ∂ H. i.e., s,t and R are all contained in H∖∂ H. We handle this case by having each vertex v ∈ H ∖∂ H (and in particular s and t) store the standard (non-faulty) labeling of Thorup for H∖∂ H. Since each vertex v ∈ G is non-boundary in O(log n) pieces of 𝒯, this contributes Õ(1) to the size of the label at each vertex. We next consider the case that R touches ∂ H. In this case, R must have a suffix contained in H, and this suffix is unaffected by the fault f. More precisely, R exists iff s < Ht for one of the paths forming ∂ H. It is easy to find Ht; For every vertex v in H, if H is an ancestor piece of v, then v stores in its label Hv for each of the O(log n) paths of ∂ H. Since each vertex has only O(log n) ancestor pieces, this contributes Õ(1) to the size of the label. Notice that by the rootmost choice of H, H is an ancestor piece of t, so t indeed stores Ht. It thus remains only to describe how to find s from the labels of s and f. For every vertex s ∈ G, for every ancestor apex f of s and for every ancestor path of s, s stores in its label s. Similarly, for every vertex f ∈ G, for every ancestor apex s of f and for every ancestor path of f, f stores in its label s. If either s or f stores s, we are done. Otherwise, consider the set of leafmost pieces in 𝒯 that contain both s and f. Let ' be such a piece. It must be that ' is an ancestor piece of both s and f or else one of s and f is an ancestor apex of the other and stores s. It follows that if neither s nor f store s, then there are only O(1) leafmost pieces that contain both s and f. To avoid unnecessary clutter we shall assume there is a unique piece '. In reality we would have to apply the same argument for all O(1) such pieces. 
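To make the label and query mechanics concrete, the following is a small Python sketch of the non-faulty query described above. It assumes the labels have already been built, ignores the decomposition of each separator into O(1) directed subpaths, and uses a non-strict comparison of positions (entering and leaving P at the same vertex is also a valid u-to-v route).

# Schematic sketch (not the paper's code) of Thorup-style non-faulty reachability labels:
# for each relevant separator path P, label[u] stores the position on P of the first
# vertex of P reachable from u, and label[v] stores the position on P of the last
# vertex of P that can reach v.  u reaches v through P iff the former is not later
# than the latter.

def reaches(label_u, label_v):
    """label_u['first_on'][P] / label_v['last_on'][P] map path ids to positions on P."""
    for P, first_pos in label_u["first_on"].items():
        last_pos = label_v["last_on"].get(P)
        if last_pos is not None and first_pos is not None and first_pos <= last_pos:
            return True          # go from u to P, ride P forward, leave P towards v
    return False

# Example with a single separator path P0 (positions 0..9):
u_label = {"first_on": {"P0": 3}}
v_label = {"last_on": {"P0": 7}}
print(reaches(u_label, v_label))   # True: u enters P0 at position 3 and exits to v from position 7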
Since ' is an ancestor piece of both s and f, we can find the piece ' by traversing the list of ancestors of s (stored in s) and of f (stored in f) until finding the lowest common ancestor. Let be the child piece of ' that contains only s (if ' is an atomic piece then define ='). Recall that both s and f are in , so is a (possibly weak) ancestor of '.
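The ancestor-list traversal just described amounts to finding the last common entry of two root-ordered ancestor lists. A minimal sketch, which for simplicity ignores the up-to-two rootmost pieces H_v per vertex and uses hypothetical piece identifiers, is the following.

# Each vertex stores the ids of its ancestor pieces ordered from the root downwards;
# the lowest common ancestor piece of s and f is the last entry of the shared prefix.
def lowest_common_ancestor_piece(ancestors_s, ancestors_f):
    common = None
    for a, b in zip(ancestors_s, ancestors_f):
        if a == b:
            common = a           # still on the shared root-to-piece prefix
        else:
            break
    return common

print(lowest_common_ancestor_piece(["G", "H1", "H4", "H9"],
                                   ["G", "H1", "H5"]))    # -> "H1"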
http://arxiv.org/abs/2307.05299v1
20230711144325
Discovering Symbolic Laws Directly from Trajectories with Hamiltonian Graph Neural Networks
[ "Suresh Bishnoi", "Ravinder Bhattoo", "Jayadeva", "Sayan Ranu", "N M Anoop Krishnan" ]
cs.LG
[ "cs.LG", "cond-mat.dis-nn", "cond-mat.mtrl-sci", "physics.comp-ph" ]
The time evolution of physical systems is described by differential equations, which depend on abstract quantities like energy and force. Traditionally, these quantities are derived as functionals based on observables such as positions and velocities. Discovering these governing symbolic laws is the key to comprehending the interactions in nature. Here, we present a Hamiltonian graph neural network (HGNN), a physics-enforced GNN that learns the dynamics of systems directly from their trajectory. We demonstrate the performance of HGNN on n-springs, n-pendulums, gravitational systems, and binary Lennard-Jones systems; HGNN learns the dynamics in excellent agreement with the ground truth from small amounts of data. We also evaluate the ability of HGNN to generalize to larger system sizes, and to a hybrid spring-pendulum system that is a combination of two original systems (spring and pendulum) on which the models are trained independently. Finally, employing symbolic regression on the learned HGNN, we infer the underlying equations relating the energy functionals, even for complex systems such as the binary Lennard-Jones liquid. Our framework facilitates the interpretable discovery of interaction laws directly from physical system trajectories. Furthermore, this approach can be extended to other systems with topology-dependent dynamics, such as cells, polydisperse gels, or deformable bodies. Any system in the universe is always in a continuous state of motion. This motion, also known as the dynamics, is observed and noted in terms of the trajectory, which comprises the system's configuration (that is, positions and velocities) as a function of time. Any understanding humans have developed about the universe is through analyzing the dynamics of different systems. Traditionally, the dynamics governing a physical system are expressed as governing differential equations derived from fundamental laws such as energy or momentum conservation, which, when integrated, provide the system's time evolution. However, these equations require the knowledge of functionals that relate abstract quantities such as energy, force, or stress with the configuration <cit.>. Thus, discovering these governing equations directly from the trajectory remains the key to understanding and comprehending the phenomena occurring in nature. Alternatively, several symbolic regression (SR) approaches have been used to discover free-form laws directly from observations <cit.>. However, the function space to explore in such cases is prohibitively large, and appropriate assumptions and constraints regarding the equations need to be provided to obtain a meaningful and straightforward equation <cit.>. Learning the dynamics of physical systems directly from their trajectory is a problem of interest in wide areas such as robotics, mechanics, biological systems such as proteins, and atomistic dynamics <cit.>. Recently, machine learning (ML) tools have been widely used to learn the dynamics of systems directly from the trajectory of systems <cit.>. 
Specifically, there have been three broad approaches to this extent, namely, data-driven, physics-informed, and physics-enforced approaches. Data-driven approaches try to develop models that learn the dynamics directly from ground-truth trajectories <cit.>. Physics-informed approaches rely on an additional term in the loss function, which is the governing differential equation: data loss and physics loss <cit.>. In contrast, physics-enforced approaches directly infuse the inductive biases in terms of the ordinary differential equations directly in the formulation as a hard constraint. These approaches are known as Hamiltonian () <cit.>, and Lagrangian neural networks (Lnn) <cit.>, and Graph Neural ODEs <cit.>. Adding the inductive bias in a physics-enforced fashion instead of a soft constraint in the loss function can significantly enhance the learning efficiency while also leading to realistic trajectories in terms of conservation laws <cit.>. Additionally, combining these formulations with graph neural networks (s) <cit.> can lead to superior properties such as zero-shot generalizability to unseen system sizes and hybrid systems unseen during the training, more efficient learning, and inference. However, although efficient in learning the dynamics, these approaches remain black-box in nature with poor interpretability of the learned function, which questions the robustness and correctness of the learned models <cit.>. Here, we present a framework combining Hamiltonian graph neural networks () and symbolic regression (SR), which enables the discovery of symbolic laws governing the energy functionals directly from the trajectory of systems. Specifically, we propose a architecture that decouples kinetic and potential energies and, thereby, efficiently learns the Hamiltonian of a system directly from the trajectory. We evaluate our architecture on several complex systems such as n-pendulum, n-spring, n-particle gravitational, and binary LJ systems. Further, the modular nature of enables the interpretability of the learned functions, which, when combined with SR, enables the discovery of the governing laws in a symbolic form, even for complex interactions such as binary LJ systems. § HAMILTONIAN MECHANICS Here, we briefly introduce the mathematical formulation of Hamiltonian mechanics that govern the dynamics of physical systems. Consider a system of n particles that are interacting with their positions at time t represented by the Cartesian coordinates as 𝐱(t)=(𝐱_1(t),𝐱_2(t),...𝐱_𝐧(t)). The Hamiltonian H of the system is defined as H(,)=T()+V(), where T() represents the total kinetic energy and V() represents the potential energy of the system. The Hamiltonian equations of motion for this system in Cartesian coordinates are given by <cit.> = ∇_𝐩_ H, 𝐩̇_̇ = -∇_ H where =∇_ẋH=𝐌 represents the momentum of the system in Cartesian coordinates and 𝐌 represents the mass matrix. Assuming Z = [; ] and J = [0, I; -I, 0], the acceleration of a particle can be obtained from the Hamiltonian equations as Ż = J(∇_Z H) since ∇_Z H + JŻ = 0 and J^-1 = -J. Sometimes systems may be subjected to constraints that depend on positions (holonomic) or velocities (Pfaffian). For example, in the case of a pendulum, the length between the bobs remains constant, or in multi-fingered grasping, the velocity of two fingers should be such that the combined geometry is able to hold the object. 
In such cases, the constrain equation is represented as Φ()=0, where Φ() ∈ℝ^k × D correspond to the k velocity constraints in a D-dimensional system. For instance, in the case of a pendulum, the constraint equation for two bobs located at (0,0) and (x_1,x_2) may be written as x_1ẋ_1 + x_2ẋ_2 = 0, which is the gradient of x_1^2 + x_2^2 =0. Following this, the Hamiltonian equations of motion can be modified to feature the constraints explicitly as <cit.> ∇_Z H + JŻ + (D_Z Ψ)^Tλ = 0 where Ψ(Z) = (Φ;Φ̇), D_Z Ψ is the Jacobian of Ψ with respect to Z, and (D_Z Ψ)^Tλ represents the effect of constraints on and  <cit.>. Thus, (D_ZΨ)Ż=0. Substituting for Ż from Eq. <ref> and solving for λ yields <cit.> λ = -[(D_ZΨ)J(D_ZΨ)^T]^-1[(D_ZΨ)J(∇_Z H)] Substituting λ in the Eq. <ref> and solving for Ż yields Ż = J[∇_Z H -(D_Z Ψ)^T [(D_ZΨ)J(D_ZΨ)^T]^-1 (D_Z Ψ) J ∇_Z H] Note that in the absence of constraint, Eq. <ref> reduces to Eq. <ref>. In Hamiltonian mechanics, Eq.<ref> is used to obtain the acceleration of the particles, which, when integrated, provides the updated configuration of the system. Thus, the only unknown in the previous equation is the H, which is represented as a function of and . § HAMILTONIAN GRAPH NEURAL NETWORK Now, we introduce our ML framework proposed to learn the Hamiltonian of a system directly from the trajectory, that is, only using the time evolution of the observable quantities (,). To this extent, we develop the Hamiltonian graph neural network () that parametrizes the actual H as a to obtain the learned Ĥ. Henceforth, all the terms with a hat, for example, represent the approximate function obtained from . Further, the Ĥ obtained from is substituted in the Eq.(<ref>) to obtain the acceleration and velocity of the particles. These values are integrated using a symplectic integrator to compute the updated position. First, we describe the architecture of (see Fig. <ref>(a)). The physical system is modeled as an undirected graph 𝒢=(𝒱,ℰ) with nodes as particles and edges as connections between them. For instance, in an n-ball-spring system, the balls are represented as nodes and springs as edges. The raw node features are t (type of particle) as one-hot encoding, , and , and the raw edge feature is the distance, d=||_j-_i||, between two particles i and j. A notable difference in the architecture from previous graph architectures is the presence of global and local features—local features participate in message passing and contribute to quantities that depend on topology. In contrast, global features do not take part in message passing. Here, we employ the position , velocity as global features for a node, while d and t are used as local features. For the , we employ an L-layer message passing , which takes an embedding of the node and edge features created by multi-layer perceptrons (MLPs) as input. Detailed hyper-parameters are provided in the Supplementary Material. The local features participate in message passing to create an updated node and edge embeddings. The final representations of the nodes and edges, _i and _ij, respectively, are passed through MLPs to obtain the Hamiltonian of the system. The Hamiltonian of the system is predicted as the sum of kinetic energy T and potential energy V in the . Specifically, the potential energy is predicted as V_i=∑_i _v(_i)+ ∑_ij_e(_ij)), where _v and _e represent the contribution from the node (particles themselves) and edges (interactions) toward the potential energy of the system, respectively. 
Kinetic energy is predicted as T= ∑_i _T(^0_i), where ^0_i is the embedding of particle i. To train the , we use only the time evolution of positions and momenta. This approach does not assume any knowledge of the functional form or knowledge of the Hamiltonian. The training approach, purely based on direct observables, can be used for any system (for example, trajectories from experiments) where the true Hamiltonian is unavailable. Thus, the loss function of is computed by using the predicted and actual positions at the timestep t+1 in a trajectory based on positions and velocities at t, which is then back-propagated to train the MLPs. Specifically, we use mean squared error (MSE) on the true and predicted Z, which is the concatenation of positions and velocities. ℒ= 1/n(∑_i=1^n (Z_i^t+1-Ẑ_i^t+1)^2) § CASE STUDIES Systems studied. Now, we evaluate the ability of to learn the dynamics directly from the trajectory. To evaluate , we selected four different types of systems, viz, 5-pendulums with explicit internal constraints and subjected to an external gravitational field, 5-springs with harmonic inter-particle harmonic interactions, 75-particle binary LJ system with two types of a particle interacting based on the Kob-Andersen LJ potential <cit.>, and 4-particle gravitational system with purely repulsive gravitational potential. Finally, in order to test the generalizability of to completely unseen system which is combination two systems on which it is trained, a hybrid system containing spring and pendulum is also considered. In this system, while the dynamics of pendulum is governed by the external gravitational field, the dynamics of the spring system depends on the internal forces generated in the system due to the expansion and compression of the spring. Thus, the systems selected here covers a broad range of cases, that is, dynamics (i) with internal constraints (pendulum), (ii) under the influence of an external field (gravitational), (iii) harmonic interactions (springs), (iv) complex breakable interactions (LJ potential), and (v) hybrid system with and without internal constraints. The training of is carried out for each system separately. A training dataset of 100 trajectories, each having 100 steps, were used for each system. For spring and pendulum, a 5-particle system is considered with random initial conditions. In the pendulum system, the initial conditions are considered in such a fashion that the constraints are respected. In the spring system, each ball is connected only to two other balls forming a loop structure. For gravitational system, a 4-particle system is considered where two particles are rotating in the clockwise direction, and two remaining particles are rotating in the anti-clockwise direction about their center of mass. For LJ system, a binary Kob-Andersen system with 75 particles are considered. The initial structure is generated by randomly placing the particles in a box with periodic boundary conditions. Further, the systems are simulated in a microcanonical ensemble (NVE) with temperatures corresponding to the liquid state to obtain equilibrium structures. Only once the system is equilibrated, the training data is collected for this system. models were trained on this dataset with a 75:25 split for training and validation. Further, to test the long-term stability and energy and momentum conservation error, the trained model was evaluated on a forward simulation for 10^5 timesteps on 100 random initial configurations. 
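The one-step objective in the loss above can be sketched as follows. Here accel_fn stands in for the acceleration obtained from the learned Hamiltonian via the (constrained) Hamiltonian equations, a velocity Verlet step plays the role of the symplectic integrator, and the harmonic example values are purely illustrative; in the actual model the loss is back-propagated through the network rather than through a plain Python function.

import numpy as np

def velocity_verlet_step(x, v, accel_fn, dt):
    """One symplectic integration step given positions, velocities, and an acceleration function."""
    a = accel_fn(x)
    x_new = x + v * dt + 0.5 * a * dt**2
    v_new = v + 0.5 * (a + accel_fn(x_new)) * dt
    return x_new, v_new

def one_step_mse(x_t, v_t, x_next, v_next, accel_fn, dt=1e-3):
    """MSE between the predicted and observed configuration Z at the next timestep."""
    x_hat, v_hat = velocity_verlet_step(x_t, v_t, accel_fn, dt)
    z_hat = np.concatenate([x_hat, v_hat])
    z_true = np.concatenate([x_next, v_next])
    return np.mean((z_true - z_hat) ** 2)

# Illustrative 1D harmonic "spring" acceleration a(x) = -k x with made-up ground truth:
k = 1.0
loss = one_step_mse(np.array([0.9]), np.array([0.0]),
                    np.array([0.89999955]), np.array([-0.0009]),
                    accel_fn=lambda x: -k * x)
print(loss)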
See Methods for detailed equations for the interactions, datasets, and training parameters. Learning the dynamics. Now, we evaluate the performance of the trained models. To evaluate the long-term stability of the dynamics learned by , we analyze the trajectory predicted by for 100 random initial configurations. Specifically, we compare the predicted and actual phase space, trajectory, kinetic energy, potential energy, and forces on all the particles of the system during the trajectory. Note that the systems studied in this case are chaotic; hence, the exact trajectory followed by will diverge with time. However, the phase space and the errors in energy and forces can be effective metrics to analyze whether the trajectory generated by is statistically equivalent to that of the original system, that is, sampling the same regions of the energy landscape. Further, in contrast to purely data-driven <cit.> or physics-informed methods, the physics-enforced architecture of strictly follows all the characteristics of the Hamiltonian equations of motion, such as the conservation laws of energy and momentum (see Supplementary Materials). This is due to the fact that the graph architecture only predicts the Hamiltonian of the system, which is then substituted in the Hamiltonian equations of motion to obtain the updated configuration. Due to this feature, the trajectory predicted by the is more realistic and meaningful in terms of the system's underlying physics. Fig. <ref> shows the performance of for the pendulum (Figs. <ref>(a)-(e), first row), spring (Figs. <ref>(f)-(j), second row), binary LJ (Figs. <ref>(k)-(o), third row), and gravitational systems (Figs. <ref>(p)-(t), fourth row). For pendulum and spring systems, we observe that the phase space represented by the positions in 1-direction (x_1) and velocities in the orthogonal direction (ẋ_2) predicted by (Figs. <ref>(a) and (f)) exhibit an excellent match with the ground truth trajectory. It is interesting to note that trained only on a trajectory of a single step (t to t+1) is able to learn the dynamics accurately and simulate a long-term stable trajectory of 10^5 timesteps that exactly matches the simulated trajectory. Similarly, for the binary LJ and gravitational systems, we observe that the predicted (Figs. <ref>(k) and (p)) and actual (Figs. <ref>(j) and (q)) positions in the trajectory of random unseen initial configurations explored by the systems exhibit an excellent match. Further, we observe that the predicted kinetic (Figs. <ref>(c), (h), (m), (r)) and potential (Figs. <ref>(d), (i), (n), and (s)) energies and forces (Figs. <ref>(e), (j), (o), and (t)) exhibit an excellent match with the ground truth values with a mean squared error almost close to zero. Additional evaluation of the architecture is performed by comparing it with two baselines, namely, (which is a physics-enforced MLP) and , which does not decouple potential and kinetic energies (see Supplementary Materials) and on additional metrics such as energy and momentum error. We observe that significantly outperforms and in terms of rollout and energy error (see Supplementary Materials). These results confirm that the architecture can learn the systems' dynamics directly from the trajectory and hence can be used for systems where the Hamiltonian is unknown or inaccessible (such as experimental or coarse-grained systems). Zero-shot generalizability. 
Now, we evaluate the generalizability of the to unseen systems, for instance, systems larger than those on which is trained or a completely new system that is a combination of two systems on which it is independently trained. While traditional neural networks based on approaches are restricted to the system sizes on which it is trained, is inductive to larger (and smaller) systems than those on which they are trained. This is due to the modular nature of the , thanks to the graph-based approach, where the learning occurs at the node and edge level. Fig. <ref> shows the generalizability of to larger system sizes than those on which it is trained. Specifically, we evaluate on 10-pendulum (Fig. <ref>(a)-(e)), 50-spring (Fig. <ref>(f)-(j)), and 600-particle binary LJ systems (Fig. <ref>(k)-(o)). We observe that is able to generalize to larger system sizes accurately without any additional training or fine-tuning, exhibiting excellent match with the ground truth trajectory in terms of positions, energies, and forces. Additional results on 50-pendulum systems and 500-spring systems are included in the Supplementary Material. We also evaluate the ability of to simulate a hybrid spring-pendulum system (see Fig. <ref>(b) Hybrid system). To this extent, we model the Hamiltonian of the hybrid as the superposition of the Hamiltonian of spring and pendulum systems. Further, we model two graphs based on the spring and pendulum elements and use the trained on the spring and pendulum systems to obtain the Hamiltonian of the system. Fig. <ref>(p)-(t) shows the performance of on the hybrid system. provides the dynamics in excellent agreement with the ground truth for the unseen hybrid system as well in terms of positions, energies, and forces. Additional results on the force predicted on each particle by in comparison to the ground truth for a trajectory of 100 steps is shown in Supplementary Material. These results confirm that is able to learn the dynamics of systems directly from their trajectory and simulate the long-term dynamics for new initial conditions and system sizes. This is a highly desirable feature as can be used to learn the Hamiltonian from sparse experimental data of physical systems or ab-initio simulations of atomic systems. This learned model can then be used to simulate larger system sizes to investigate phenomena with higher length scales. § INTERPRETABILITY AND DISCOVERING SYMBOLIC LAWS Neural networks, while exhibiting excellent capability to learn functions, are notorious for their black-box nature allowing poor or no interpretability to the learned function. In contrast, we demonstrate the interpretability of the learned . Thanks to the modular nature of , we analyze the functions learned by the individual MLPs that represent the node and edge level potential energies (_v and _e, respectively) and kinetic energy (_T) of the particles as a function of the learned embeddings. Fig. <ref>(a)-(f) show the learned functions with respect to the input features such as positions, velocities, or inter-particle distances. We observe that learned functions by for the potential energies for (i) pendulum bob (mgx_2; Fig. <ref>(a)), (ii) spring (0.5k(r_ij-1)^2; Fig. <ref>(c)), and (iii) binary LJ systems (0-0, 0-1, 1-1; Figs. <ref>(d)-(f), respectively) and kinetic energy of particles (0.5m|_i|^2; Fig. <ref>(b)) exhibits a close match with the known governing equations. 
This shows the interpretability of the and the additional ability to provide insights into the nature of interactions between the particles directly from their trajectory. Thus, can be used to discover interaction laws directly from their trajectory, even when they are not accessible or available. While the interpretability of can provide insights into the nature of energy functionals, abstracting it further as a symbolic expression can enable discovering the underlying interaction laws and energy functions. Such functionals can then be used for simulating the system or understanding the dynamics independent of the . Thus, beyond learning the dynamics of systems, can be used to discover underlying energy functionals and interaction laws. To this extent, we apply SR <cit.> on the learned functions by . Specifically, we focus on the kinetic energy function, the harmonic function of the spring, gravitational potential, and the binary LJ systems. Specifically, we employ simple operations such as addition, multiplication, and polynomials to identify the governing equations that minimize the error between the values predicted by the discovered equation and those predicted by the . The optimal equation is identified based on a score that balances complexity and loss of the equation (see Methods for details). Table <ref> shows the original equation and the equation discovered based on SR of the learned functionals. Note for each system, the equation that exhibits the maximum score is chosen as the final equation (see Methods for details). All the equations discovered by SR with their loss, complexity, polynomials used, and other hyper-parameters are included in the Supplementary material. We observe that the recovered equations exhibit a close match for kinetic energy, harmonic spring, gravitational potential, and binary LJ. In the case of the binary LJ system, we observe that the equations reproduced for (0-0) and (1-1) interactions are very close to the original equation, while for (0-1) interaction, the equation is slightly different, although it exhibits low loss. Interestingly, we observe that for LJ (0-1) interaction, one of the equations provided by SR given by V_ij = (0.203/r_ij^12 - 0.773/r_ij^6) is closer to the original equation in its functional form. However, this predicted equation has a score of 2.22 with a loss of 0.000109. Thus, both the loss and the score of the equation are higher and lower, respectively, than the best equation obtained in Table <ref>. This also suggests that for more complex interactions, an increased number of data points, especially along the inflection points, might be required to improve the probability of discovering the original equation. § OUTLOOK Altogether, in this work, we present a framework that allows the discovery of energy functionals directly from the trajectory of physical systems. The could be extended to address several challenging problems where the dynamics depends on the topology such as the dynamics of polydisperse gels <cit.>, granular materials <cit.>, biological systems such as cells <cit.>, or even rigid body dynamics. A topology to graph mapping can be developed in such cases which can then be used to learn the dynamics and further abstracted it out in terms of the governing interaction laws. At this juncture, it is worth mentioning some outstanding questions the present work raises. Although presents a promising approach, it is applied to only particle-based systems with at most two-body interactions. 
Extending to more complex systems, such as complex atomic structures with multi-body interactions or to deformable bodies in continuum mechanics could be addressed as future challenges. Further, the graph architecture presented in could be enhanced by adding additional inductive biases such as equivariance <cit.>. Finally, extending the framework to non-Hamiltonian systems such as colloidal systems <cit.> exhibiting Brownian or Langevin dynamics could be pursued to widen the scope of the framework to capture realistic systems. § METHODS §.§ Experimental systems To simulate the ground truth, physics-based equations derived using Hamiltonian mechanics are employed. The equations for n-pendulum and spring systems are given in detail below. §.§ n-Pendulum For an n-pendulum system, n-point masses, representing the bobs, are connected by rigid (non-deformable) bars. These bars, thus, impose a distance constraint between two point masses as ||_i-_i-1||^2 = l_i^2 where, l_i represents the length of the bar connecting the (i-1)^th and i^th mass. This constraint can be differentiated to write in the form of a Pfaffian constraint as (_i-_i-1)(_i-_i-1)=0 Note that such constraint can be obtained for each of the n masses considered to obtain the constraint matrix. The Hamiltonian of this system can be written as H=∑_i=1^n ∑_j=1^2(1/2m_iẋ_̇i̇,̇j̇^2-m_igx_i,2) where j=1,2 represents the dimensions of the system, m_i represents the mass of the i^th particle, g represents the acceleration due to gravity in the 2-direction and x_i,2 represents the position of the i^th particle in the 2- direction. Here, we use l_i = 1.0 m, m_i = 1.0 kg, and g = 10.0 m/s^2. §.§ n-spring system Here, n-point masses are connected by elastic springs that deform linearly (elastically) with extension or compression. Note that similar to the pendulum setup, each mass m_i is connected to two masses m_i-1 and m_i+1 through springs so that all the masses form a closed connection. The Hamiltonian of this system is given by H=∑_i=1^n ∑_j=1^2(1/2m_iẋ_̇i̇,̇j̇^2) - ∑_i=1^n 1/2k(||_i-1-_i||-r_0)^2 where r_0 and k represent the undeformed length and the stiffness, respectively, of the spring, and j=1,2 represents the dimensions of the system. Here, we use r_0 = 1.0 m, m_i = 1.0 kg and k = 1.0 N/m. §.§ n-body gravitational system Here, n point masses are in a gravitational field generated by the point masses themselves. The Hamiltonian of this system is given by H=∑_i=1^n ∑_j=1^2(1/2m_iẋ_̇i̇,̇j̇^2) + ∑_i=1^n∑_k=1,j≠ i^n Gm_im_j/(||_i-_j||) where G represents the gravitational constant, and j=1,2 represents the dimension of the system. Here, we use G = 1.0 Nm^2kg^-2, m_i = 1.0 kg and m_j = 1.0 kg ∀ i,j. §.§ Binary Lennard Jones system Here, we consider a binary LJ system known as the Kob-Andersen mixture <cit.> composed of 80% particles of type 0 and 20% particles of type 1. The particles in this system interact based on a 12-6 LJ potential with the pair-wise potential energy V_ijgiven by V_ij = ϵ[(σ/r_ij)^12-(σ/r_ij)^6] where r_ij = ||_i-_j|| and σ and ϵ are the LJ parameters, which takes the values as ϵ_0-0=1.0, ϵ_0-1=1.5, ϵ_1-1=0.5 and σ_0-0=1.00, σ_0-1=0.80, σ_1-1=0.88, and r_ij represents the distance between particles i and j. The pair-wise interaction energy between all the particles is summed to obtain the total energy of the system. For the LJ system, all the simulations are conducted at a temperature of 1.2 in the microcanonical (NVE) ensemble, ensuring the system is in a liquid state. 
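For concreteness, the total potential energy of this binary mixture can be written down directly from the pair potential and the ε/σ table above. The following Python sketch is only an illustration of that pairwise sum; it omits the periodic boundaries and type-dependent cutoffs used in the actual simulations, which are described next.

```python
# Sketch: total potential energy of the Kob-Andersen binary LJ mixture,
# using the epsilon/sigma values listed above and the pair form V_ij = eps[(sig/r)^12 - (sig/r)^6].
# `positions` is an (N, d) array of coordinates; `types` holds 0/1 particle labels.
import numpy as np

EPS = {(0, 0): 1.0, (0, 1): 1.5, (1, 1): 0.5}
SIG = {(0, 0): 1.00, (0, 1): 0.80, (1, 1): 0.88}

def lj_pair(r, eps, sig):
    return eps * ((sig / r) ** 12 - (sig / r) ** 6)

def total_lj_energy(positions, types):
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            key = tuple(sorted((types[i], types[j])))
            energy += lj_pair(r, EPS[key], SIG[key])
    return energy
```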
The system is initialized with atoms placed in random positions avoiding overlap in a cubic box with periodic boundary conditions with box size 3.968 and cutoff for atom type 0-0 = 2.5, 0-1 = 2.0 and 1-1 = 2.2. Further, the system is equilibrated in the NVE ensemble until the memory of the initial configuration is lost. The equations of motion are integrated with the velocity Verlet algorithm. §.§ architecture Pre-Processing: In the pre-processing layer, we generate a compact vector representation for particle and their interactions e_ij by employing Multi-Layer Perceptrons. ^0_i = (_em((t_i))) ^0_ij = (_em(e_ij)) Here, is an activation function. In our implementation, we use different _ems for node representation corresponding to kinetic energy, potential energy, and drag. For brevity, we do not separately write the _ems in Eq. <ref>. Kinetic energy and drag prediction. Given that the graph employs Cartesian coordinates, the mass matrix can be represented as a diagonal matrix. Consequently, the kinetic energy (τ_i) of a particle relies exclusively on the velocity (_i) and mass (m_i) of said particle. In this context, the parameterized masses for each particle type are acquired through the utilization of the embedding (^0_i). As such, the predicted value of τ_i for a given particle is determined by τ_i = (_T(^0_i ∥_i)), where the symbol ∥ denotes the concatenation operator. In this equation, _T denotes a multilayer perceptron responsible for learning the kinetic energy function, while represents the activation function employed. The overall kinetic energy of the system, denoted by T, is calculated as the sum of individual kinetic energies: T=∑_i=1^nτ_i. Potential energy prediction. Typically, the potential energy of a system exhibits significant dependence on the topology of its underlying structure. In order to effectively capture this information, we utilize a multiple layers of message-passing among interacting particles (nodes). During the l^th layer of message passing, the node embeddings are iteratively updated according to the following expression: _i^l+1 = ( (_i^l+∑_j ∈𝒩_i_^l·(_j^l || _ij^l) )) where, 𝒩_i={u_j ∈| (u_i,u_j)∈} is the set of neighbors of particle u_i. _^l is a layer-specific learnable weight matrix. _ij^l represents the embedding of incoming edge e_ij on u_i in the l^th layer, which is computed as follows. _ij^l+1 = ( (_ij^l + _^l·(_i^l || _j^l)) ) Similar to _^l, _^l is a layer-specific learnable weight matrix specific to the edge set. The message passing is performed over L layers, where L is a hyper-parameter. The final node and edge representations in the L^th layer are denoted as _i=_i^L and _ij=_ij^L respectively. The total potential energy of an n-body system is represented as V= ∑_i v_i + ∑_ij v_ij. Here, v_i denotes the energy associated with the position of particle i, while v_ij represents the energy arising from the interaction between particles i and j. For instance, v_i corresponds to the potential energy of a bob in a double pendulum, considering its position within a gravitational field. On the other hand, v_ij signifies the energy associated with the expansion and contraction of a spring connecting two particles. In the proposed framework, the prediction for v_i is given by v_i = (_v_i(_i^0 ∥_i)). Similarly, the prediction for the pair-wise interaction energy v_ij is determined by v_ij = (_v_ij(_ij)). The parameters of the model are trained end-to-end using the MSE loss discussed in Eq. <ref>. 
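The update equations above can be summarized in a short schematic. The Python/JAX sketch below restates them with hand-rolled message passing over a (senders, receivers) edge list; the parameter containers, the `mlp` helper, and the exact layer ordering are illustrative placeholders rather than the authors' jraph implementation.

```python
# Schematic restatement of the equations above: L rounds of message passing refine
# node/edge embeddings, which are decoded into kinetic and potential energies and
# summed into H = T + V. Parameter pytrees here are placeholders for illustration.
import jax.numpy as jnp

def squareplus(x):
    return 0.5 * (x + jnp.sqrt(x * x + 4.0))

def mlp(params, x):
    for W, b in params[:-1]:
        x = squareplus(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def hamiltonian(params, h0, e0, senders, receivers, velocities):
    h, e = h0, e0
    for layer in params["message_passing"]:
        # edge update: e_ij <- sp(e_ij + W_e (h_i || h_j))
        e = squareplus(e + jnp.concatenate([h[senders], h[receivers]], -1) @ layer["W_e"])
        # node update: h_i <- sp(h_i + sum_j W_v (h_j || e_ij)), messages j -> i
        msg = jnp.concatenate([h[senders], e], -1) @ layer["W_v"]
        h = squareplus(h + jnp.zeros_like(h).at[receivers].add(msg))
    tau = squareplus(mlp(params["T"], jnp.concatenate([h0, velocities], -1)))  # kinetic, per node
    v_node = squareplus(mlp(params["v_i"], jnp.concatenate([h0, h], -1)))      # v_i
    v_edge = squareplus(mlp(params["v_ij"], e))                                # v_ij
    return tau.sum() + v_node.sum() + v_edge.sum()
```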
§.§ Model architecture and training setup For , all the MLPs are two layers deep. A square plus activation function is used for all the MLPs. We used 10000 data points from 100 trajectories divided into 75:25 (train: validation) to train all the models. The timestep used for the forward simulation of the pendulum system is 10^-5s, for the spring and gravitational system is 10^-3s, and for the LJ system is 0.0001 LJ units. All the equations of motion are integrated with the velocity-Verlet integrator. Detailed training procedures and hyper-parameters are provided in the Supplementary material. All models were trained until the decrease in loss saturates to less than 0.001 over 100 epochs. The model performance is evaluated on a forward trajectory, a task it was not explicitly trained for, of 10s in the case of the pendulum and 20s in the case of spring. Note that this trajectory is  2-3 orders of magnitude larger than the training trajectories from which the data has been sampled. The dynamics of n-body system are known to be chaotic for n ≥ 2. Hence, all the results are averaged over trajectories generated from 100 different initial conditions. §.§ Symbolic regression SR refers to an approach to search for equations that fit the data points and fit them rather than a parametric approach where an equation is chosen apriori to fit the data. Here, we employ the PySR package to perform the SR <cit.>. PySR employs a tree-based approach for fitting the governing equation based on the operations and variables provided. Since the parametric space available for SR can be too large with every additional operation, it is important to carefully provide the minimum required input features and the operations while providing meaningful constraints on the search space. In the present work, we choose the addition and multiplication operation. Further, we allow polynomial fit based on a set containing (square, cube, pow(n)) operations, where pow(n) refers to power from four to ten. The loss function to fit the SR is based on the mean squared error between the predicted equation and the data points obtained from . Further, the equations are selected based on a score S that balances complexity C and loss L. Specifically, the score is defined as S = dL/dC, that is, the gradient of the loss with respect to complexity. For each set of hyperparameters, we select the top 10 equations based on the scores. Further, the equation having the best score among these equations is chosen as the optimal equation. All the hyperparameters associated with the SR and the corresponding equations obtained are included in the Supplementary material. §.§ Simulation environment All the simulations and training were carried out in the JAX environment <cit.>. The graph architecture was developed using the jraph package <cit.>. The experiments were conducted on a machine with Apple M1 chip having 8GB RAM and running MacOS Monterey. Software packages: numpy-1.22.1, jax-0.3.0, jax-md-0.1.20, jaxlib-0.3.0, jraph-0.0.2.dev Hardware: Chip: Apple M1, Total Number of Cores: 8 (4 performance and 4 efficiency), Memory: 8 GB, System Firmware Version: 7459.101.3, OS Loader Version: 7459.101.3 unsrt § SUPPLEMENTARY MATERIAL §.§ Comparison with baselines Baselines: In order to analyze the role of the architecture of in obtaining superior performance, we consider two baselines. The first,  <cit.>, is a simple MLP that directly predicts the Hamiltonian of the system. Note that the decoupling of kinetic and potential energies is implemented in . 
Second,   <cit.> is a graph-based version of , albeit without decoupling the kinetic and potential energies. While the performance of has been demonstrated on several spring and pendulum systems,  <cit.> has been evaluated only on spring systems. Datasets and systems: To evaluate , we selected standard systems, viz, n-pendulums and springs, where n=(3,4,5). All the graph-based models are trained on 5-pendulum and 5-spring systems only, which are then evaluated on other system sizes. Further, to evaluate the zero-shot generalizability of to large-scale unseen systems, we simulate 5, 50, 500-link spring systems, and 5-, 10-, and 50-link pendulum systems. We also considered a hybrid spring-pendulum system unseen during training to evaluate and a gravitational system. The detailed data-generation procedure is given in Methods and Supplementary Material. The timestep used for the forward simulation of the pendulum system is 10^-5s with the data collected every 1000 timesteps, and for the spring system is 10^-3s with the data collected every 100 timesteps. Model architecture and training details are provided in Methods and Supplementary Material. Evaluation Metric: Following the work of <cit.>, we evaluate performance by computing the following three error metrics, namely, (1) momentum error, (ME(t)), (2) Energy violation error (EE(t)) given by ME(t)=||ℳ̂(t)-ℳ(t)||_2/||ℳ̂(t)||_2+||ℳ(t)||_2, EE(t)=||ℋ̂-ℋ||_2/(||ℋ̂||_2+||ℋ||_2. Note that all the variables with a hat, for example, x̂, represent the predicted values based on the trained model, and the variables without a hat, that is x, represent the ground truth. §.§ Energy and momentum errors of Here, we analyze the evolution of energy and momentum of the trajectory predicted by . We observe in the figures that the energy violation error by the remains stationary and does not explode even for 10^4 timesteps for spring and 10^5 timesteps for the pendulum systems (see Fig. <ref>). Similarly, for the spring system, we observe that the momentum error is close to zero confirming that the total force of the system remains zero. §.§ Forces on the hybrid system Fig. <ref> shows the predicted and actual force on the trajectory of all the particles for a trajectory of 100 timesteps. We observe that the predicted force is in excellent agreement with the actual force. Fig. <ref> shows the performance of , , and for spring and pendulum systems. We observe that outperforms both and on both spring and pendulum systems. Specifically, we observe that the energy violation error in remains saturated, suggesting a stable and realistic predicted trajectory. Note that is trained and evaluated on each of these systems separately, while and are trained in only one system and inferred for all other systems by performing the forward simulation. §.§ Complex systems In order to evaluate the performance of on more complex systems, we consider a gravitation system and a hybrid spring-pendulum system (see Figs. <ref>(a) and (b)). We observe that , trained on spring and pendulum systems separately, provides an excellent inference for the hybrid system unseen by the model. Despite best efforts, the and was unable to provide a forward trajectory for the hybrid system. The superior performance of could be attributed to the architecture, which decouples the potential and kinetic energies and learns them separately for each system. We also evaluate for a more complex interaction than springs and pendulums, that is, gravitational forces. Fig. 
<ref> shows that provides excellent inference for the gravitational system. Similar to the hybrid system, the baselines trained on the gravitational systems were unable to provide a stable trajectory and exploded after a few steps during the inference. §.§ Zero-shot generalization Finally, we evaluate the zero-shot generalizability of in comparison to (see Fig. <ref>). We observe that exhibits superior generalization to system sizes that are two orders of magnitude larger than the training system. In the case of the spring system, even for a system size two orders of magnitude error, we observe a comparable error in energy, which remains stable with time. §.§ Hyper-parameters The hyper-parameters used for training each of the architectures are provided below. ∙ Parameter Value Node embedding dimension 5 Edge embedding dimension 5 Hidden layer neurons (MLP) 5 Number of hidden layers (MLP) 2 Activation function squareplus Number of layers of message passing(pendulum) 2 Number of layers of message passing(spring) 1 Optimizer ADAM Learning rate 1.0e^-3 Batch size 100 ∙ Parameter Value Hidden layer neurons (MLP) 256 Number of hidden layers (MLP) 2 Activation function squareplus Optimizer ADAM Learning rate 1.0e^-3 Batch size 100 ∙ Parameter Value Node embedding dimension 8 Edge embedding dimension 8 Hidden layer neurons (MLP) 16 Number of hidden layers (MLP) 2 Activation function squareplus Number of layers of message passing 1 Optimizer ADAM Learning rate 1.0e^-3 Batch size 100 §.§ Symbolic Regression The equations obtained for each of the systems from the symbolic regression, the loss and the scores are provided in this section. ∙Kinetic Energy vs velocity ∙Spring potential energy vs position ∙Pair-wise LJ interactions Pairwise LJ interactions are obtained by conducting a parametric study with different polynomial orders. Tables 2, 3, and 4 show the best equations obtained for (0-0), (0-1), and (1-1) interactions. The polynomials considered corresponding to each equation are provided as Power in the table. The detailed results obtained for each combination of polynomials are included in the following tables.
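Since the referenced tables are not reproduced here, the sketch below indicates how such a PySR fit is typically configured, mirroring the operator set (addition, multiplication, low-order polynomial powers) and the score-based model selection described in Methods. The call signature follows PySR's public interface and may need adjustment for specific versions; `r_samples`/`v_samples` stand in for distance-energy pairs sampled from the learned MLP, and the exact hyper-parameters used in the study are those listed in the tables referred to above.

```python
# Sketch: symbolic regression of a learned pair-energy with PySR.
import numpy as np
from pysr import PySRRegressor

# Stand-in samples; in practice these come from evaluating the learned edge-energy MLP.
r_samples = np.linspace(0.5, 1.5, 200).reshape(-1, 1)
v_samples = 0.5 * (r_samples.ravel() - 1.0) ** 2

model = PySRRegressor(
    niterations=100,
    binary_operators=["+", "*"],
    unary_operators=["square", "cube"],   # low-order polynomial terms
    maxsize=20,
    model_selection="score",              # trade off loss against complexity
)
model.fit(r_samples, v_samples)
print(model.get_best())                   # best-scoring symbolic expression
```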
http://arxiv.org/abs/2307.04845v1
20230710183122
On Pareto equilibria for bi-objective diffusive optimal control problems
[ "Pitágoras P. de Carvalho", "Enrique Fernández-Cara", "Juan Límaco", "Denilson Menezes", "Yuri Thamsten" ]
math.OC
[ "math.OC", "math.AP", "35Q93, 49J20, 49K20, 58E17" ]
http://arxiv.org/abs/2307.03954v1
20230708112025
Magnon influence on the superconducting DOS in FI/S bilayers
[ "A. S. Ianovskaia", "A. M. Bobkov", "I. V. Bobkova" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
National Research University Higher School of Economics, Moscow, 101000 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia National Research University Higher School of Economics, Moscow, 101000 Russia Heterostuctures superconductor/ferromagnetic insulator (FI/S) are paradigmic systems for studying mutual influence of superconductivity and magnetism via proximity effects. In particular, spin-split superconductivity is realized in such structures. Recent experiments and theories demonstrate a rich variety of transport phenomena occurring in devices based on such heterostructures that suggest direct applications in thermoelectricity, low-dissipative spintronics, radiation detection and sensing. In this work we investigate the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface on the spin-split superconductivity. It is predicted that due to the magnon-mediated electron spin-flip processes the spin-split quasiparticle branches are partially mixed and reconstructed, and the BCS-like spin-split shape of the superconducting DOS, which is typical for superconductors in the effective exchange field, is strongly modified. An odd-frequency superconducting order parameter admixture to the leading singlet order parameter is also found. These findings expand the physical picture of spin-split superconductivity beyond the mean-field description of the ferromagnet exchange field. Magnon influence on the superconducting DOS in FI/S bilayers I.V. Bobkova August 12, 2023 ============================================================ § INTRODUCTION Long ago it was demonstrated that the exchange field of ferromagnetic insulators (FIs), such as EuS and EuO, can spin-split the excitation spectrum of an adjacent thin-film superconductor <cit.>. The spin splitting in the DOS observed in those experiments resembles the spin splitting created by a strong in-plane field applied to a thin superconducting film. This discovery opened up the way for performing spin-polarized tunneling measurements without the need of applying large magnetic fields. A renewed interest in studying ferromagnetic/superconductor (F/S) structures came with active development of superconducting spintronics <cit.>, caloritronics and spin caloritronics <cit.>. In particular, in F/S structures with spin-split density of states (DOS) a series of promising phenomena have been studied. Among them are giant thermoelectric <cit.>, thermospin effects <cit.>, highly efficient thermally-induced domain wall motion <cit.>, spin and heat valves <cit.>, cooling at the nanoscale <cit.>, low-temperature thermometry and development of sensitive electron thermometers <cit.>. The spin-split DOS in F/S structures has also been explored in the presence of magnetic inhomogeneities, such as textured ferromagnets and domain walls <cit.>. Characteristic signatures of equal-spin triplet pairing were reported <cit.>. It was shown that the characteristic spatial and energy dependence of the spin-dependent DOS allows to tomographically extract the structure of the spin-triplet Cooper pairs <cit.>. Furthermore, the influence of the domain structure on the position-averaged superconducting DOS in FI/S bilayer was studied <cit.>. 
Another important direction in the field of F/S hybrid structures is investigation of interplay between the superconducting state and ferromagnetic excitations - magnons. A series of interesting results, presumably related to the influence of the superconductor on the magnon spectrum have been reported. In particular, it was found that the adjacent superconductor works as a spin sink strongly influencing Gilbert damping of the magnon modes <cit.> and can result in shifting of k = 0 magnon frequencies (Kittel mode) <cit.>. The electromagnetic interaction between magnons in ferromagnets and superconductors also results in appearance of magnon-fluxon excitations <cit.> and efficient gating of magnons <cit.>. Further it was reported that the magnetic proximity effect in thin film F/S hybrids results in appearing of magnon-cooparons, which are composed of a magnon in F and an accompanying cloud of spinful triplet pairs in S <cit.>. Some aspects of back influence of magnons on superconducting state have already been investigated. For example, a possible realization of the magnon-mediated superconductivity in F/S hybrids has been proposed <cit.>. At the same time, the influence of magnons via the magnetic proximity effect on the superconducting DOS practically has not yet been studied, although the electron-magnon interaction and influence of this interaction on the DOS in ferromagnetic metals have been investigated long ago <cit.>. Here we consider how the effects of electron-magnon interactions in FI/S thin-film hybrids manifest themselves in the superconducting DOS and quasiparticle spectra of the superconductor. It is found that the magnon-mediated electron spin-flip processes cause the interaction and mixing of the spin-split bands resulting in their reconstruction, which is especially important near the edge of the superconducting gap. We demonstrate that the classical BCS-like Zeeman-split shape of the superconducting DOS can be strongly modified due to the electron-magnon interaction and this modification is temperature-dependent. The influence of magnons on the temperature dependence of the Zeeman splitting of the DOS and relevance of our findings to existing and future experiments are also discussed. The paper is organized as follows. In Sec. <ref> we describe the system under consideration and the Green's functions formalism taking into account magnon self-energies. In Sec. <ref> the modifications of the quasiparticle spectra in the superconductor due to the electron-magnon coupling are discussed. In Sec. <ref> we study signatures of the electron-magnon interaction in the Zeeman-split superconducting DOS and their temperature dependence. Our conclusions are summarized in Sec. <ref>. § SYSTEM AND FORMALISM We consider a thin-film bilayer as depicted in Fig. <ref>, in which a ferromagnetic insulator FI is interfaced with a conventional spin-singlet s-wave superconductor S. The thickness of the S layer d_S is assumed to be small as compared to the superconducting coherence length ξ_S. In this case the S layer can be considered as homogeneous along the normal to the interface plane. The FI layer in its ground state is magnetized in-plane, along the z-direction. The Hamiltonian of the system takes the form: Ĥ=Ĥ_S+Ĥ_FI+Ĥ_ex, where Ĥ_S is the standard mean-field BCS Hamiltonian describing electrons in the superconducting film: Ĥ_S = ∑_ k σξ_ k c_ k σ^† c_ k σ - ∑_ kΔ c_ k↑^† c_- k↓^† - ∑_ kΔ^* c_- k↓ c_ k↑ . 
ξ_ k = k^2/2m - μ is the normal state kinetic energy of the electrons in the S layer, counted from the chemical potential of the superconductor μ. Δ is the superconducting order parameter in S, which assumed to be of conventional isotropic s-wave type. c_ k σ^+ and c_ k σ are creation and annihilation operators of electrons with the wave vector k and spin σ. Ĥ_FI describes magnons in the FI. Assuming easy-axis magnetic anisotropy in the FI it can be written as Ĥ_FI = ∑_ q (ω_0 + D q^2) b_ q^† b_ q, where b_ q^+ and b_ q are creation and annihilation operators of magnons in FI with wave vector q, ω_0 = |γ| (μ_0 H_0 + 2 K_a/M_s) is the magnonic frequency at q=0, D is the magnon stiffness constant, γ is the typically negative gyromagnetic ratio, M_s is the saturation magnetization, μ_0 is the permeability of free space, K_a is the easy-axis anisotropy constant and H_0 is the external field (can be equal to zero in our consideration). Electronic and magnonic wave vectors k and q are assumed to be two-dimensional (2D), that is the electrons and magnons can only propagate in plane of the FI/S interface. The wave functions along the y-direction, perpendicular to the interface, are assumed to be quantized. For simplicity, in the formulas we leave only one transverse magnon mode. In fact, we have checked that different modes give quantitatively different, but qualitatively the same contributions to considered self-energies. Their effect can be accounted for by multiplying our results for the self-energy corrections by an effective number of working transverse modes (see below). Ĥ_ex accounts for the exchange interaction between S and FI: Ĥ_ex = -J∫ d^2 ρ S_FI(ρ) s_e(ρ) , where ρ is a two-dimensional radius-vector at the interface plane, S_FI and s_e are the spin density operators in the FI and S, respectively. J is the interface exchange constant. By performing the Holstein-Primakoff transformation to the second order in the magnonic operators in Eq. (<ref>) one obtains Ĥ_ex = Ĥ_1 + Ĥ_2 + Ĥ_3, with Ĥ_1 = ∑_ k, k' U_ k, k'(c_ k, ↑^† c_ k', ↑-c_ k,↓^† c_ k',↓) , U_ k, k' = JM_s/2|γ|∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ), Ĥ_2 = ∑_ k, k', q, q' T_ k, k', q, q' b_ q^† b_ q' (c_ k, ↑^† c_ k', ↑-c_ k,↓^† c_ k',↓), T_ k, k', q, q' = - J/2∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ) ϕ_ q^*(ρ) ϕ_ q'(ρ), Ĥ_3 = ∑_ k, k', q V_ k, k', q (b_ q c_ k, ↑^† c_ k', ↓ + b_ q^† c_ k', ↓^† c_ k, ↑), V_ k, k', q = J √(M_s/2|γ|)∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ) ϕ_ q(ρ) , where Ĥ_1 describes a spin-splitting of the electronic energy spectrum in S in the mean-field approximation. The second term Ĥ_2 represents the Ising-term, which physically accounts for the renormalization of the spin-splitting by magnonic contribution. Since the processes of the spin transfer between electrons and magnons are of primary importance for our consideration, when calculating the electronic Green's function we simplify this term by substituting the magnon operator b_ q^† b_ q by its averaged value ⟨ b_ q^† b_ q⟩ = n_ qδ_ q q', where n_ q is the density of magnons with wave vector q. The third term Ĥ_3 transfers spin between electron and magnon operators and will turn out to be the most significant for effects under consideration. 
If we choose the wave functions of electrons Ψ_ k(ρ) and magnons ϕ_ q(ρ) at the interface in the form of plane waves propagating along the interface, that is Ψ_ k(ρ)=(1/√(d_S))e^i k ρ and ϕ_ q(ρ)=(1/√(d_FI))e^i q ρ, then Ĥ_ex can be simplified: Ĥ_ex = Ũ∑_k (c_k, ↑^† c_k, ↑-c_k,↓^† c_k,↓) + V ∑_k, q (b_q c_k, ↑^† c_k-q, ↓ + b_q^† c_k-q, ↓^† c_k, ↑) , where Ũ = -J (M_s-N_m |γ|)/(2|γ|d_S ) is the averaged spin-splitting field in the superconductor renormalized by the magnon density N_m, and V = J√(M_s/2|γ|d_FI A)(1/d_S) is the electron-magnon coupling constant, where A is the area of the FI/S interface. Introducing the following Nambu-spinor Ψ̌_ k = (c_ k ↑, c_ k ↓, -c_- k ↓^†, c_- k ↑^†)^T, we define the Gor'kov Green's function in the Matsubara representation Ǧ_ k(τ) = -⟨ T_τΨ̌_ kΨ̌_ k^†⟩, where ⟨ T_τ ... ⟩ means imaginary time-ordered thermal averaging. Turning to the Matsubara frequency representation the Green's function obeys the following equation: (iω - ξ_k τ_z - Ũσ_z - Δτ_x - Σ̌_m )Ǧ_ k (ω) = 1, where ω is the fermionic Matsubara frequency, σ_i and τ_i (i=x,y,z) are Pauli matrices in spin and particle-hole spaces, respectively. Σ̌_m is the magnonic self-energy, which describes corrections to the electronic Green's function due to the electron-magnon interaction and in the framework of the self-consistent Born approximation takes the form: Σ̌_m = - V^2 T ∑_ q,Ω B_ q(Ω) {σ_+ Ǧ_ k- q (ω - Ω)σ_- + . . σ_- Ǧ_ k+ q (ω + Ω)σ_+} , where σ_± = (σ_x ± i σ_y), Ω is the bosonic Matsubara frequency and B_ q(Ω) = [iΩ - (ω_0+Dq^2)]^-1 is the magnonic Green's function. From the spin structure of Eq. (<ref>) it is seen that Σ̌_m is diagonal in spin space. For this reason the electronic Green's function, which is given by the solution of Eq. (<ref>) is also diagonal matrix in spin space and Eq. (<ref>) can be written for the both spin subbands separately: (iω - ξ_k τ_z - σŨ - Δτ_x - Σ̂_m, σ )Ĝ_ k, σ (ω) = 1, where Ĝ_ k, σ is 2 × 2 matrix in the particle-hole space corresponding to the electron spin σ = ↑, ↓. Σ̂_m,σ is also 2 × 2 matrix in the particle-hole space representing the magnonic self-energy for the given spin subband σ: Σ̂_m,σ = - V^2 T ∑_ q,Ω B_ q(Ω) Ĝ_ k-σ q, σ̅ (ω - σΩ). As a factor in the expressions σ means ± 1 for the spin-up (spin-down) subbands, and σ̅ means the opposite spin subband. The dimensionless coupling constant quantifying the strength of the electron-magnon coupling is K=V^2 A / 4 πħ v_F √(D Δ). Our numerical estimates made for the parameters corresponding to EuS/Al or YIG/Nb structures suggest that K should be rather small, K ≪ 1, for the detailed discussion of the numerical estimates see Sec. <ref>. The smallness of the electron-magnon coupling constant allows us to use non self-consistent Born approximation when calculating magnon self-energy. That is, we substitute Ĝ_ k - σ q, σ̅ by the bare superconducting Green's function obtained without taking into account the magnon self-energy Ĝ_ k - σ q, σ̅^(0) in Eq. (<ref>). Then the explicit solution of Eq. (<ref>) takes the form: Ĝ_ k,σ (ω) = i ω_ k, σ +ξ_ k, στ_z + Δ_ k, στ_x/(i ω_ k, σ)^2 - (ξ_ k, σ)^2 - (Δ_ k, σ)^2 . 
where all the quantities marked by are renormalized by the electron-magnon interaction as follows: Δ_ k, σ (ω) = Δ + δΔ_ k,σ(ω) = Δ - - V^2 T ∑_ q, Ω B_ q(Ω) Δ/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 , ξ_ k, σ (ω) = ξ_ k + δξ_ k,σ(ω)= ξ_ k - - V^2 T ∑_ q, Ω B_ q(Ω) ξ_ k-σ q/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 , ε_ k, σ (ω) = i ω - Uσ + δε_ k,σ(ω)= i ω - Uσ + + V^2 T ∑_ q, Ω B_ q(Ω) i ω - iσΩ +Uσ/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 . For the problem under consideration all the in-plane directions of k are equivalent. For this reason the magnonic corrections only depend on the absolute value k of the wave vector. Further in order to study the quasiparticle spectra and density of states we turn from Matsubara frequencies to the real energies in the Green's functions i ω→ε + i δ, where δ is an infinitesimal positive number. The magnonic corrections for spin-up electrons δΔ_ k, ↑, δξ_ k, ↑ and δε_ k, ↑ are presented in Figs. <ref>-<ref> as functions of the quasiparticle energy ε and ξ_ k≡ξ, which after linearization in the vicinity of the Fermi surface takes the form ξ_ k ≈v_F ( k - k_F). The key features of the corrections, which can be see in the presented plots are: (i) The dependence of the corrections on ξ is very weak. The reason is that the most important range of the magnonic wave numbers contributing to the corrections is q ≲ 1/ξ_S, where ξ_S = v_F/Δ is the superconducting coherence length. Then taking parameters of the magnon spectrum corresponding to YIG ω_0,YIG∼ 10^-1Δ, D_YIG≈ 5*10^-40J*m^2 or EuS ω_0,EuS∼ 10^-2Δ, D_EuS≈ 3*10^-42J*m^2, we obtain that D q^2 ≪ω_0 to very good accuracy for all reasonable parameters. Consequently, one can disregard D q^2 with respect to ω_0 in the magnonic Green's function B_ q and after linearization of ξ_ k - σ q≈v_F ( k - σ q - k_F) in the vicinity of the Fermi surface we see that the dependence on k drops from Eqs. (<ref>)-(<ref>). (ii) The correction to the normal state electron dispersion δξ is small with respect to all other corrections and is neglected below. (iii) The important corrections δΔ and δε have peaks at the energies corresponding to the superconducting coherence peaks of the opposite spin subbands. While the coherence peaks for the spin-up subband are located at ε = ±Δ +Ũ, the peaks of the corrections are at ε = ±Δ -Ũ. It is an obvious consequence of the process of electron spin flip accompanied by emission or absorption of a magnon. (iv) Correction δΔ represents an effective contribution to the superconducting order parameter induced from the pure singlet pairing Δ via the electron-magnon interaction. It depends on the Matsubara frequency and contains both singlet and triplet components. As can be seen from Eq. (<ref>), the correction obeys the condition δΔ_↑(ω) = δΔ_↓(-ω). It means that the triplet component δΔ_t (ω) = δΔ_↑(ω) - δΔ_↓(ω) = -δΔ_t(-ω) works as an effective odd-frequency superconducting order parameter. This situation is rather unusual because typically in F/S hybrid systems we encounter an odd-frequency anomalous Green's function, but at the same time the order parameter is still even frequency in the framework of the conventional BCS weak coupling theory. § QUASIPARTICLE SPECTRA Now we turn to discussion of how quasiparticle spectra in the S layer are modified by the electron-magnon interaction. In Fig. <ref>(a) we present the spectral functions for the both spins in the S layer calculated from the Green's function (<ref>) according to the relation A_σ(ε, k) = -1/π Tr{1+τ_z/2 Im[Ĝ_ k,σ^R(ε)]}. 
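A direct way to evaluate this quantity numerically is to build the 2×2 Nambu-space propagator defined above and take the indicated trace. The Python sketch below does this for the bare spin-split BCS Green's function with a small positive broadening δ; the magnonic corrections δΔ_{k,σ} and δε_{k,σ} would enter simply as additions to the corresponding entries. This is an illustrative reimplementation, not the code used for the figures.

```python
# Sketch: spin-resolved spectral function A_sigma(eps, xi) from the 2x2 Nambu
# propagator G^R = [(eps + i*delta - sigma*U) - xi*tau_z - Delta*tau_x]^{-1},
# with energies in units of Delta. The momentum-integrated LDOS follows by
# integrating over xi (in units of the normal-state DOS).
import numpy as np

TAU_Z = np.diag([1.0, -1.0])
TAU_X = np.array([[0.0, 1.0], [1.0, 0.0]])

def spectral_function(eps, xi, sigma, U=0.5, Delta=1.0, delta=1e-2):
    z = (eps + 1j * delta - sigma * U) * np.eye(2)
    G = np.linalg.inv(z - xi * TAU_Z - Delta * TAU_X)
    particle_projector = 0.5 * (np.eye(2) + TAU_Z)
    return -np.imag(np.trace(particle_projector @ G)) / np.pi

def ldos(eps, sigma, xi_max=20.0, n_xi=4001, **kwargs):
    xi_grid = np.linspace(-xi_max, xi_max, n_xi)
    a = np.array([spectral_function(eps, xi, sigma, **kwargs) for xi in xi_grid])
    return a.sum() * (xi_grid[1] - xi_grid[0])
```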
The spectral function is isotropic in momentum space and for this reason we plot it as a function of ξ_ k≡ξ. The electron-like and hole-like quasiparticle branches are clearly seen at positive and negative energies, respectively. Black dashed lines represent the quasiparticle spectra in the absence of the electron-magnon interaction. The electron-magnon interaction leads to the following main modifications of the quasiparticle spectra: (i) The Zeeman splitting of spin-up and spin-down quasiparticle branches is reduced due to the magnon-mediated interaction between quasiparticles with opposite spins. (ii) For positive energy branches, corresponding to electron-like quasiparticles, the lifetime of spin-up quasiparticles and quasiparticles at the upper part of the spin-down branch is considerably suppressed, what is seen as a broadening of the corresponding branches. For negative energies, corresponding to hole-like quasiparticles, the situation is symmetric if we interchange spins. The broadening of the spin-down branch only occurs in the energy region, where the spin-up branch also exists. The physical reason is that the spin-flip processes providing the broadening are nearly horizontal due to the fact that ω_0 + Dq^2 ≪Δ, that is the magnon energies are small as compared Δ in the whole range of ξ, considered in Fig. (<ref>). The lower (upper) part of the spin-down (up) positive (negative) energy branch is not broadened because there are no available states for the opposite spin quasiparticles at the appropriate energies and, consequently, the spin-flip processes are not allowed. (iii) In Fig. <ref>(a) we also see a reconstruction of the spin-down spectral branch in the energy range of the bottom of the spin-up branch. In order to investigate this effect in more detail we plot the same figure on a logarithmic scale in Fig. <ref>(b), what allows to clearly see weak spectral features. Figs. <ref>(c) and (d) represent the spectral functions for the spin-up band on the normal and on the logarithmic scale, respectively. From Figs. <ref>(b) and (d) it is seen that due to the electron-magnon interaction in the energy region of the extremum of the spin-up (down) branch, a nonzero density of states appears for the opposite spin branch. It looks like a horizontal line starting from the bottom of the corresponding branch. This line is horizontal due to the independence of the electron-magnon self-energy corrections (<ref>) and (<ref>) on ξ. This mixing of the spin-up and spin-down bands resulting from the magnon-mediated spin-flip processes is natural and exists at all energies, but the spectral weight of the opposite spin branch is too small except for the regions of the extrema of the bands corresponding to the coherence peaks of the superconducting DOS. Intersection of the additional lines with the original spin-down band results in its reconstruction, which looks like an avoided crossing point. The results for the spectral function presented and discussed above correspond to T=0.1Δ. This temperature is higher than the gap in the magnonic spectrum ω_0=0.03Δ, which we take in our calculations. Therefore, a large number of thermal magnons are excited at this temperature. In Fig. <ref> the spectral function is demonstrated for lower temperature T=0.01Δ<ω_0. 
It is seen that the characteristic signatures of the magnon-mediated spin-flip processes, that is the mixing, reconstruction and broadening of the branches are much less pronounced due to the suppression of the thermally excited magnons at such low temperatures. § DOS IN THE PRESENCE OF MAGNONS Now we turn to discussion of the local density of states (LDOS) in the S layer, which is calculated as the momentum integrated spectral function: N(ε) = ∫d^2k/(2π)^2 A(ε, k). Fig. <ref>(a) demonstrates the LDOS in the presence of electron-magnon interaction (solid line) as compared to the LDOS calculated at V=0 (dashed line). The LDOS at V=0, that is calculated assuming mean-field approximation for the exchange field, takes the conventional BCS-like shape. It manifests Zeeman-split coherence peaks, and the outer peak is always higher than the inner one. The electron-magnon interaction inverts the relative ratio of the peak heights and broadens the outer peaks, while the width of the inner peaks remains unchanged. The reason is the same as for the broadening of the spectra in Fig. <ref>: electron spin-flip processes accompanied by a magnon emission or absorption. The outer coherence peaks in Fig.<ref>(a) correspond to the energy regions of the bottom (top) of the positive(negative)-energy spin-up(down) bands. This type of broadening, which only affects outer peaks, differs from the other physical mechanisms resulting in the broadening of the coherence peaks, such as the orbital effect of the magnetic field, inelastic scattering or magnetic impurities, which affect all the peaks <cit.> and can be roughly described by the Dynes parameter. The other important manifestation of the electron-magnon interaction is that the shape of the LDOS strongly depends on temperature even at very low temperatures ∼ω_0 ≪Δ, in agreement with the discussed above behavior of the spectral function. The temperature evolution of the LDOS is presented in Fig. <ref>. It is seen that the broadening of the outer peak develops with increasing temperature in the temperature range ∼ω_0. It is clear if we remember that the broadening is caused by the spin-flip processes, which are mediated by the thermally excited magnons. We do not consider larger temperatures T ≫ω_0 comparable to the critical temperature of the superconducting film because in this temperature range the temperature dependence of the superconducting gap comes into play and the correct consideration of the problem requires solving of the self-consistency equation for the order parameter. Now let us discuss numerical estimates of the dimensionless constant K=V^2 A / 4 πħ v_F √(D Δ), which controls the strength of the electron-magnon coupling. Substituting V = J√(M_s/2|γ|d_FI A)(1/d_S) and expressing the interface exchange coupling constant via the experimentally accessible quantity Ũ as |J| = 2 |γ| Ũ d_S/M_s (where to the leading approximation we neglect magnonic contribution to the magnetization), we obtain K = Ũ^2 (2|γ|/M_s) 1/(4 π√(DΔ)v_F d_FI) for one transverse magnon mode. The effective number of working transverse modes N_⊥∼ d_FI/a, where a is the interatomic distance in the ferromagnet. According to our estimates for d_FI≈ 10 nm N_⊥∼ 2 ÷ 5. 
One can take the following parameters for YIG/Nb heterostructures: Ũ/Δ = 0.5, v_F = 10^6m/s, Δ_Nb = 2.7*10^-22J, a=1.2m, 2|γ|/M_s = 3.3*10^-27m^3, D = D_bare,YIG-δ D_YIG, where D_bare,YIG = 5*10^-40J*m^2<cit.> is the exchange stiffness of YIG and δ D_YIG is the renormalization of the stiffness in FI/S bilayers due to formation of magnon-Cooparon quasiparticles <cit.>. As it was predicted <cit.>, for the material parameters of YIG/Nb heterostructures δ D_YIG can be ∼ (0.5 ÷ 1) D_YIG,bare for d_FI∼ (1 ÷ 0.5) d_S. Therefore, the electron-magnon coupling constant for YIG/Nb heterostructures can vary in a wide range K_YIG/Nb≳ 10^-4. The considered here values K ∼ 0.01 can be realized in the regime of strong renormalization of the exchange stiffness constant D. For EuS/Al heterostructures one can take Ũ/Δ = 0.25 <cit.>, v_F = 10^6m/s, Δ_Al = 3.5*10^-23J, a=10^-10m, 2|γ|/M_s = 3.3*10^-28m^3, D = D_bare,EuS, where D_bare,EuS = 3*10^-42J*m^2<cit.>. The superconducting renormalization of the stiffness due to formation of magnon-Cooparon quasiparticles is predicted to be small for the parameters corresponding to EuS/Al heterostructures at reasonable thicknesses d_FI due to smaller values of Δ and larger M_s. Substituting these parameters to the expression for K we come to the conclusion that for EuS/Al heterostructures K_EuS/Al∼ 10^-7÷ 10^-6, that is the electron-magnon effects unlikely to be observed in such structures. In general, the electron-magnon effects in the LDOS and quasiparticle spectra should be more pronounced in ultra-thin superconducting films with high critical temperatures, where large absolute values of the effective exchange field Ũ can be realized. The smaller values of the exchange stiffness of the ferromagnet will also enhance the effect. The manifestations of the electron-magnon coupling become more pronounced at T ≳ω_0 and grow with temperature. Now we discuss the influence of the electron-magnon interaction on the effective Zeeman splitting, which is defined as the distance between the split coherence peaks of the LDOS divided by 2. Experimentally, the low-temperature reduction of the effective Zeeman splitting at T ≪Δ for EuS/Al heterostructures has been reported <cit.>. It was ascribed to the presence of weakly bound spins at the interface of the EuS/Al. The renormalization of the effective exchange field in the superconductor by the thermal magnons can also contribute to this effect. Indeed, the fit of experimentally observed temperature dependence of the distance between the Zeeman-split coherence peaks Δ V_peak(T) by 2|Ũ| = J (M_s-N_m |γ|)/(2|γ|d_S ) with the magnon density N_m = (1/S d_FI)∑_ q{exp[-(ω_0+Dq^2)/T]-1}^-1 and ω_0 ≈ 0.03K is in reasonable agreement with the experimental data. In addition, the broadening of the outer coherence peaks, predicted in this work, leads to enhancement of the distance between the spin-split coherence peaks. The broadening becomes stronger with increasing temperature. This effect leads to an apparent growth of the peaks splitting with temperature and, therefore, acts opposite to the renormalization of the effective Zeeman field by magnons. However, our numerical estimates suggest that the temperature growth is unlikely to be observed, at least for heterostructures, consisting of the materials discussed above, because the renormalization of the effective Zeeman field by magnons dominates. 
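These estimates can be reproduced with a few lines of Python. In the sketch below ħ is restored in the denominator, as in the first expression for K quoted earlier, so that the result is dimensionless; with the material parameters listed in this section the bare-stiffness value comes out at roughly 10^-5 for YIG/Nb, growing toward the quoted ≳10^-4 once N_⊥ ∼ 2-5 and a renormalized stiffness are included, and at ∼2×10^-7 for EuS/Al.

```python
# Sketch: order-of-magnitude estimate of the dimensionless coupling
# K = U^2 (2|gamma|/M_s) / (4 pi hbar v_F sqrt(D Delta) d_FI),
# with hbar restored from the first expression for K given earlier.
import numpy as np

HBAR = 1.0546e-34  # J*s

def coupling_K(U, two_gamma_over_Ms, D, Delta, v_F=1.0e6, d_FI=1.0e-8, N_perp=1):
    """U, Delta in J; D in J*m^2; 2|gamma|/M_s in m^3; v_F in m/s; d_FI in m."""
    return N_perp * U**2 * two_gamma_over_Ms / (
        4 * np.pi * HBAR * v_F * np.sqrt(D * Delta) * d_FI)

# YIG/Nb with the bare stiffness: ~1e-5 (larger with N_perp and reduced D)
K_yig_nb = coupling_K(U=0.5 * 2.7e-22, two_gamma_over_Ms=3.3e-27,
                      D=5e-40, Delta=2.7e-22)
# EuS/Al: ~2e-7, inside the quoted 1e-7 - 1e-6 window
K_eus_al = coupling_K(U=0.25 * 3.5e-23, two_gamma_over_Ms=3.3e-28,
                      D=3e-42, Delta=3.5e-23)
print(f"K(YIG/Nb) ~ {K_yig_nb:.1e},  K(EuS/Al) ~ {K_eus_al:.1e}")
```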
§ CONCLUSIONS In this work the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface in thin-film FI/S heterostructures on the spectrum of quasiparticles and the LDOS in the superconducting layer is studied. It is predicted that due to the magnon-mediated electron spin-flip processes the spin-split quasiparticle branches are partially mixed and reconstructed. The reconstruction is the most pronounced in the region of the bottom of the energetically unfavorable spin band because of the enhanced density of the electronic states and existence of the available states in the opposite-spin band. The BCS-like Zeeman-split shape of the superconducting DOS, which is typical for superconductors in the effective exchange field, is strongly modified due to the electron-magnon interaction. The outer spin-split coherence peaks are broadened, and the inner peaks remain intact. This type of broadening is a clear signature of the magnon-mediated spin flips and strongly differs from other mechanisms of the coherence peaks broadening, which usually influence all peaks. The broadening grows with temperature due to the thermal excitation of magnons. The described above features in the electronic DOS are mainly caused by diagonal in the particle-hole space magnonic contributions to the electron self-energy, that is by the quasiparticle processes. Besides that we have also found an off-diagonal in the particle-hole space magnonic contribution to the electronic self-energy. It mimics an odd-frequency superconducting order parameter admixture to the leading singlet order parameter. The study of its influence on the superconducting properties of the system may be an interesting direction for future research. § ACKNOWLEDGMENTS We acknowledge the discussions of the exchange interaction hamiltonian with Akashdeep Kamra. The work was supported by the Russian Science Foundation via the RSF project No. 22-42-04408.
http://arxiv.org/abs/2307.07566v1
20230714181329
Reconstruction of 3-Axis Seismocardiogram from Right-to-left and Head-to-foot Components Using A Long Short-Term Memory Network
[ "Mohammad Muntasir Rahman", "Amirtahà Taebi" ]
physics.med-ph
[ "physics.med-ph", "cs.LG", "eess.SP" ]
IEEEexample:BSTcontrol Diverse Approximations for Monotone Submodular Maximization Problems with a Matroid Constraint Anh Viet Do Mingyu Guo Aneta Neumann Frank Neumann Optimisation and Logistics, School of Computer and Mathematical Sciences, The University of Adelaide ============================================================================================================================================================= empty empty This pilot study aims to develop a deep learning model for predicting seismocardiogram (SCG) signals in the dorsoventral direction from the SCG signals in the right-to-left and head-to-foot directions (SCGx and SCGy). The dataset used for the training and validation of the model was obtained from 15 healthy adult subjects. The SCG signals were recorded using tri-axial accelerometers placed on the chest of each subject. The signals were then segmented using electrocardiogram R waves, and the segments were downsampled, normalized, and centered around zero. The resulting dataset was used to train and validate a long short-term memory (LSTM) network with two layers and a dropout layer to prevent overfitting. The network took as input 100-time steps of SCGx and SCGy, representing one cardiac cycle, and outputted a vector that mapped to the target variable being predicted. The results showed that the LSTM model had a mean square error of 0.09 between the predicted and actual SCG segments in the dorsoventral direction. The study demonstrates the potential of deep learning models for reconstructing 3-axis SCG signals using the data obtained from dual-axis accelerometers. Clinical relevance— This work contributes to the advancement of cardiovascular monitoring techniques that rely on SCG signals obtained from single- or dual-axis accelerometers. § INTRODUCTION Cardiovascular diseases (CVDs) are the leading cause of death in the United States, claiming the life of one person every 34 seconds, resulting in a staggering 2,544 deaths per day based on 2020 data <cit.>. Beyond the devastating human toll, this places an immense strain on healthcare systems and society, with significant economic costs associated with treatment and lost productivity. The early detection of cardiac abnormalities is crucial for achieving better outcomes for patients with CVD, and improving diagnostic methods and accessibility is a key step in this direction. Current diagnostic methods, including non-invasive techniques such as electrocardiography (ECG), medical imaging, and cardiac catheterization, can aid in identifying CVDs. Advancements in technology have also led to the development of new diagnostic options, such as wearable and remote monitoring systems, which can provide continuous monitoring of patients' cardiovascular health outside of traditional healthcare settings. Seismocardiography (SCG) is another technique that noninvasively monitors cardiovascular activity by measuring cardiovascular-induced vibrations on the chest <cit.>. These vibrations result from a range of cardiac activities, including valve opening and closing, isovolumetric contraction, blood ejection, and rapid left ventricle filling <cit.>. Unlike other non-invasive techniques such as ECG and pulse oximetry, which focus on the electrical activity of the heart and the blood oxygen level, respectively, SCG provides complementary insights into the mechanical activity of the heart <cit.>. 
With its ability to evaluate these mechanical activities, SCG has the potential to offer valuable diagnostic information for cardiac conditions such as heart failure, myocardial infarction, ischemia, and hemorrhage, as changes in the mechanical function of the heart can be an early indication of these diseases <cit.>. As a result, SCG can enhance our understanding of the cardiac function and contribute to the development of more accurate diagnostic tools for patients with CVD. SCG signals are commonly measured using accelerometers that are placed on the chest surface. These signals are typically measured in three directions of right-to-left, head-to-foot, and dorsoventral. In that regard, while single or dual-axis accelerometers may be used for SCG measurement, three-axis accelerometers are more informative as they offer a more comprehensive understanding of the motion of the heart and chest wall <cit.>. This study aims to answer the question: "Can we generate three-axis SCG measurements using a dual-axis accelerometer?" More specifically, is it possible to generate SCG vibrations in the z direction using the measurements from the x and y axes of a dual-axis accelerometer? In this work, we propose a deep neural network model based on long short-term memory (LSTM) to predict the SCG component in the dorsoventral direction (SCGz) using the vibrations in right-to-left and head-to-foot directions (SCGx, SCGy). LSTM models have been successfully applied to a wide range of time series-related problems, including but not limited to stock price prediction, energy load forecasting, weather forecasting, speech recognition, and natural language processing. In this paper, we designed a regression model based on a stacked LSTM neural network with two layers to process the SCGx and SCGy sequences corresponding to a single cardiac cycle and generate an output sequence of SCGz of the same length. The use of a stacked LSTM network can potentially improve the accuracy of the predictions by allowing the network to learn more complex temporal patterns in the data. § MATERIALS AND METHODS §.§ Long Short-Term Memory Network The basic building block of an LSTM is the LSTM cell, which consists of a memory cell and three gates: the input gate, the forget gate, and the output gate. The memory cell is responsible for storing information over time and passing it forward through the sequence. The input gate controls how much new information is added to the cell state, the forget gate controls how much old information is removed from the cell state, and the output gate controls how much information from the cell state is used to compute the hidden state. The LSTM cell's computations are performed using various activation functions, such as the sigmoid function and the hyperbolic tangent function. These activation functions help to ensure that the information flow in and out of the cell is regulated and controlled, leading to better long-term memory retention and more effective learning. Fig. <ref>a shows a schematic diagram of an LSTM unit. At each time step t, the LSTM cell takes as input the current input sequence x_t, the previous hidden state h_t-1, and the previous cell state c_t-1. Using these inputs, the LSTM cell generates the current hidden state h_t and the updated cell state c_t. 
The computations involved in this process include several gates that control the flow of information in and out of the cell and can be calculated by: i_t = σ(W_x^(i)x_t + W_h^(i)h_t-1 + b^(i)) f_t = σ(W_x^(f)x_t + W_h^(f)h_t-1 + b^(f)) o_t = σ(W_x^(o)x_t + W_h^(o)h_t-1 + b^(o)) c̃_t = tanh(W_x^(c)x_t + W_h^(c)h_t-1 + b^(c)) where i_t, f_t, and o_t are the input, forget, and output gates, respectively, and c̃_t provides the change contents. Finally, updated cell state c_t and hidden state h_t are computed as: c_t = f_t ⊙c_t-1 + i_t ⊙c̃_t h_t = o_t ⊙tanh(c_t) where ⊙ performs element-wise product. §.§ Study Population The study enrolled a total of 15 participants, of whom 4 were female, all of whom had no prior history of cardiovascular diseases (age: 25.93 ± 10.65 year, height: 171.31 ± 9.22 cm, weight: 74.83 ± 22.83 kg, body mass index: 25.29 ± 6.58 kg/m^2). The study sample was diverse, with participants from various racial and ethnic backgrounds, including 53.3% White, 20% Black, 20% Asian, and 6.7% mixed. The Mississippi State University Institutional Review Board approved the study protocol. §.§ Data Acquisition Protocol To minimize any potential movement artifacts that could affect the quality of the data, all subjects were instructed to lay supine on a bed without additional body movements. Three triaxial accelerometers (356A32, PCB Piezotronics, Depew, NY) were attached to three locations on the sternum including the manubrium, the fourth costal notch, and the xiphoid process. The accelerometer outputs were amplified using a signal conditioner (482C, PCB Piezotronics, Depew, NY) with a gain factor of 100 to increase the signal-to-noise ratio. The amplified signals were then recorded using a data acquisition system (416, iWorx Systems, Inc., Dover, NH), with a sampling frequency of 5000 Hz. A microphone was connected to the system and tapped at the beginning and end of each recording. These taps were then located in the sound signals to identify the start and end of the intended part of the recording. An ECG module was also used to simultaneously record the ECG signal. Data were collected from all accelerometers during a 15-second breath-hold at the end of inhalation and exhalation, and 2 additional minutes of normal breathing. §.§ SCG Dataset To prepare the dataset, the following pre-processing steps were carried out to eliminate noise from the raw signals. The first step involved applying a moving average filter to smooth the SCG signals. This step helps reduce high-frequency noise in the data, making it easier to detect underlying patterns or features. Subsequently, a band-pass filter was applied to the accelerometer outputs with cutoff frequencies of 1 and 30 Hz. This eliminated the low-frequency respiration vibrations and the higher-frequency SCG components above 30 Hz. SCG signals were then segmented using the ECG R waves as reference points. To detect the R waves, we utilized the widely-used Pan-Tompkins algorithm <cit.>, which is known for its ability to identify the peaks corresponding to ventricular depolarization. We then computed the average duration of the cardiac cycle for each subject from the ECG RR intervals. This information was then used to determine the window size to segment the SCG signals for each subject. Specifically, we set the start of the window to 1/4 of the average cardiac cycle duration before the R wave and the end of the window to 3/4 of the average cardiac cycle duration after the R wave, resulting in consistent SCG segments. 
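A minimal version of this preprocessing chain, assuming SciPy for the filtering (the paper does not name its software stack), is sketched below: a 1-30 Hz band-pass filter followed by segmentation into windows running from one quarter of the mean RR interval before each R peak to three quarters after it. The R-peak indices are assumed to come from a Pan-Tompkins detector.

```python
# Sketch (SciPy assumed): 1-30 Hz band-pass filtering of an SCG channel and
# segmentation into one window per cardiac cycle around each ECG R peak.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 5000  # Hz, sampling frequency of the recordings

def bandpass(x, low=1.0, high=30.0, fs=FS, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def segment_by_r_peaks(scg, r_peaks, fs=FS):
    mean_rr = int(np.mean(np.diff(r_peaks)))       # mean cardiac cycle in samples
    pre, post = mean_rr // 4, 3 * mean_rr // 4     # 1/4 before, 3/4 after the R wave
    segments = []
    for r in r_peaks:
        if r - pre >= 0 and r + post <= len(scg):
            segments.append(scg[r - pre: r + post])
    return np.array(segments)                      # consistent length per subject
```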
After segmenting the SCG signals for each cardiac cycle, the segments were downsampled to a fixed number of sample points (100 points in this study). This allowed us to obtain consistent segment lengths across all samples for different subjects, ensuring that the input data for the LSTM model was uniform. This is crucial because the neural network takes one segment at a time as input, and the original segments may have varied lengths for different subjects (due to different cardiac cycle durations). By treating each SCG segment as a separate sample in the dataset, the neural network can learn patterns and features specific to each cycle, which can be useful for predicting SCG signals in the z-direction. Next, the segments were normalized to have values between -1 and 1 to ensure that the input data falls within a suitable range for activation functions. Then, the mean value was subtracted from each segment to center the signals around zero. This step was useful in mitigating any DC offset or baseline drift present in the signal that could affect the performance of the neural network. In the last step, the pre-processed and normalized SCG segments from all subjects were combined to form a single dataset, which was used for training, validating and testing the neural network model. A total of 7492 SCG segments were used for training and validation and the model was tested on 475 segments. This dataset was expected to capture the common patterns and features present in the SCG signals of the population studied, and was thus representative of the entire cohort. §.§ Network Architecture and Training Fig. <ref>b provides an overview of a stacked LSTM network's architecture employed in this study. A stacked LSTM refers to the use of multiple LSTM layers, one on top of the other. This allows the network to learn more complex patterns and relationships in the data. The network has two stacked LSTM layers, each with a hidden size of 512. The hidden size refers to the number of hidden units or neurons in the LSTM layer, which determines the capacity of the network to capture complex patterns in the data. The network processes 100-time steps of SCGx and SCGy as a sample input, which corresponds to approximately one cardiac cycle (as described in Sect. <ref>). Utilizing dual-axis rather than single-axis SCG data may help the network to capture more information and learn the relationship between the input (SCGx and SCGy) to predict the SCGz and make accurate predictions on new, unseen data. The input then flows into the stacked LSTM layers. The final LSTM layer outputs a vector h_i, which is subsequently fed into a fully connected layer featuring 100 output neurons. This fully connected layer maps the input vector to a set of output values that correspond to the target variable predicted by the model. To prevent overfitting and improve the generalization, a dropout layer is added after each LSTM layer. The dataset was divided into training, validation, and testing sets to train and evaluate the network. Since we collected data from three accelerometers for each subject, we separated breath-hold data captured at the end of inhalation and at the end of exhalation from the accelerometer attached to the xiphoid process to test the model. The remaining data were used for training and validation. The training set, which contained 90% of the data, was used to determine the weight and bias parameters through forward and backward propagation during the training process. 
The validation set, which contained 10% of the data, was used to evaluate the performance of the model during the training process. We trained the network with a maximum of 1000 epochs and used early stopping technique to prevent overfitting during training. Early stopping is a regularization technique to monitor the performance of the model on the validation set during training and stop the training when the performance on the validation set no longer improves. The initial learning rate was 0.001, the learning rate was reduced using learning rate decay when the validation set stopped improving for a specified number of epochs. This technique helped the model converge to a better solution and avoid getting stuck in local minima. § RESULTS AND DISCUSSION Our goal was to investigate the possibility of predicting SCGz from the SCG signals in the x and y directions through the use of an LSTM neural network. We used the testing set to evaluate the model's performance. The accuracy of the model in predicting SCGz based on SCGx and SCGy was assessed by comparing the predicted and actual SCG signal in the z direction, using mean-square error (MSE) as the measure of accuracy. Table <ref> shows the model's performance in terms of MSE, while Fig. <ref> provides a box plot to illustrate the results for each subject's end-exhalation and end-inhalation data, as well as the total performance with all end-exhalation and end-inhalation data. The box plot shows the median, interquartile range, and outliers of the prediction in terms of MSE. The prediction MSE for each subject varied from 0.05 to 0.14, with a total MSE of 0.09 for all subjects combined. Four samples of the predicted and actual SCGz segments from subjects 6 and 12 are presented in Fig. <ref>. The close resemblance between the predicted and actual SCGz signals suggests that there is a correlation between the SCG components in the x, y, and z directions, and that our LSTM neural network was able to learn it. § CONCLUSION The study investigated whether the SCGz signal can be predicted from SCG signals in the x and y directions using an LSTM neural network. Results showed that the predicted SCGz closely matched the actual signal, indicating a relationship between the SCG components in the three directions. The study provides insights into the potential of using a dual-axis accelerometer to monitor SCG signals in all three directions, which can improve cardiovascular monitoring methods. IEEEtran
http://arxiv.org/abs/2307.03914v1
20230708062942
Mixed Precision Iterative Refinement with Adaptive Precision Sparse Approximate Inverse Preconditioning
[ "Noaman Khan", "Erin Carson" ]
math.NA
[ "math.NA", "cs.NA" ]
Phased Geometric Controls of V-Shaped Three-Level System for Zero-field Quantum Sensing Jiangfeng Du August 12, 2023 ======================================================================================= Hardware trends have motivated the development of mixed precision algorithms in numerical linear algebra, which aim to decrease runtime while maintaining acceptable accuracy. One recent development is the development of an adaptive precision sparse matrix-vector produce routine, which may be used to accelerate the solution of sparse linear systems by iterative methods. This approach is also applicable to the application of inexact preconditioners, such as sparse approximate inverse preconditioners used in Krylov subspace methods. In this work, we develop an adaptive precision sparse approximate inverse preconditioner and demonstrate its use within a five-precision GMRES-based iterative refinement method. We call this algorithm variant BSPAI-GMRES-IR. We then analyze the conditions for the convergence of BSPAI-GMRES-IR, and determine settings under which BSPAI-GMRES-IR will produce similar backward and forward errors as the existing SPAI-GMRES-IR method, the latter of which does not use adaptive precision in preconditioning. Our numerical experiments show that this approach can potentially lead to a reduction in the cost of storing and applying sparse approximate inverse preconditioners, although a significant reduction in cost may comes at the expense of increasing the number of GMRES iterations required for convergence. § INTRODUCTION We consider the problem of solving large, sparse linear systems Ax=b using iterative methods, where A is a nonsingular n× n matrix. In recent years, the emergence of low precision, such as half precision, on modern hardware has received renewed attention. Lower precision has many benefits, including a reduction in computation, storage, and data movement costs. However, with fewer bits, we have greater round off error and a smaller range of re-presentable numbers. This has motivated the development of mixed precision algorithms, in which lower and higher precisions are used selectively in order to improve the performance, memory, and energy consumption without sacrificing accuracy; for details, see the recent surveys <cit.>. Iterative refinement (IR) is a long-standing technique for iteratively improving the solution to a linear system. The idea of iterative refinement is to first compute an initial solution x_0 to Ax=b , often using a direct solver like LU factorization. The refinement steps in iterative refinement consist of computing the residual r_i=b-Ax_i, solving the correction equation Ad_i=r_i, and updating the solution x_i+1=x_i+d_i. In the case that LU factorization is used for computing the initial solution, the LU factors can be reused for solving the correction term d_i. This is what we refer to as “standard IR” (SIR). Iterative refinement was originally proposed by Wilkinson in 1948, who suggested performing all the computations in a working precision denoted by u except the residual computation in precision u^2. This variant has been analyzed by Wilkinson <cit.> and Moler <cit.>. In 1977, Jankowski and Woźniakowski <cit.> and Skeel <cit.> introduced fixed precision iterative refinement, performing all computations in precision u. Langou et al. in 2006 used single precision in the computation of the LU factorization, which can be as twice faster as double precision, and a working precision in other parts of the computation <cit.>. 
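A schematic NumPy/SciPy sketch of this basic scheme, with the LU factorization carried out in single precision and the residual and update in double precision (in the spirit of the Langou et al. variant), is given below; it illustrates the refinement loop only and is not code from the works cited.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def standard_ir(A, b, max_iter=10, tol=1e-12):
    """Standard iterative refinement: LU in single precision (u_f),
    residual and update in double precision (the working precision u)."""
    lu, piv = lu_factor(A.astype(np.float32))          # factorization precision u_f
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                   # residual r_i = b - A x_i
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d                                          # update x_{i+1} = x_i + d_i
    return x

# Example usage on a reasonably well conditioned random system:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)
b = rng.standard_normal(50)
x = standard_ir(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```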
The availability of half precision in modern GPUs motivated the development of iterative refinement which uses three or more hardware precisions. Carson and Higham in 2018 proposed an iterative refinement scheme that uses three different precisions u_f, u, and u_r, which denote the factorization, working, and the residual precisions respectively; for an explanation see <cit.>. The authors also proposed a fourth precision, called the “effective precision”, denoted by u_s, which allows for general solvers to be used for the correction term d_i. For example, in standard iterative refinement, the LU factors computed in precision u_f results in u_s = u_f. With u_f ≥ u and u_r ≤ u^2, then the relative forward and backward errors will converge to level u when κ_∞(A)≤ u_f^-1, where κ_∞(A)=‖ A^-1‖_∞‖ A‖_∞ denotes the infinity-norm condition number of A. In <cit.>, the authors develop a GMRES-based iterative refinement algorithm (GMRES-IR) which uses the computed LU factors as preconditioners within GMRES to solve for the correction in each refinement step. Under the assumption that GMRES is executed in the working precision u, with matrix vector product and with preconditioned matrix computed in double the working precision, u_s = u, and thus GMRES-IR is guaranteed to produce forward and backward errors to the working precision for more ill-conditioned problems than standard iterative refinement. Assuming that u_f ≥ u and u_r ≤ u^2 a relative forward and backward errors to the level u is obtained for κ_∞(A)≤ u^-1/2u_f^-1. From a performance perspective, the requirement that the preconditioned matrix is applied in double the working precision is not attractive. In 2021, Amestoy et al. <cit.> proposed and analyzed a five-precision variant of GMRES-IR which, in addition to the working precision u, factorization precision u_f, and residual precision u_r, added two more precisions, namely u_g for the working precision within GMRES and u_p for precision in which the preconditioned matrix is applied to a vector within GMRES. The variant with setting u=u_g=u_p is used commonly in practice, although it is guaranteed to converge for a smaller range of condition numbers than the algorithm in <cit.>. Again assuming u_f ≥ u and u_r ≤ u^2, the relative forward and backward error to the level working precision is obtained for the matrices having κ_∞(A) ≤ u^-1/3u_f^-2/3, although this restriction is likely overly pessimistic in practice. Most existing analyses of GMRES-based iterative refinement schemes assume that an LU factorization is computed for use as a left preconditioner within GMRES in each refinement step. But when A is very sparse, the performance of this approach may not be attractive since the LU factorization of A may have considerable fill-in. In practice, inexact preconditioners are often used, such as incomplete LU factorizations or sparse approximate inverses (SPAI). Using SPAI has an advantage because it is, in theory, highly parallelizable, as each column can be computed independently, and its application involves only a sparse matrix-vector product (SpMV). In <cit.>, the authors propose a new variant called SPAI-GMRES-IR which, instead of LU factors, uses a sparse approximate inverse preconditioner (computed in a precision u_f with a given accuracy threshold ε, which controls the residual in each column) as a preconditioner within five-precision GMRES-IR. 
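Structurally, one can sketch such a GMRES-based refinement scheme with an approximate inverse preconditioner as follows; SciPy's uniform-precision GMRES stands in for the mixed-precision MGS-GMRES analyzed in these papers, the diagonal preconditioner in the example is only a placeholder for a computed SPAI, and all names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spai_gmres_ir(A, b, M, n_steps=5):
    """Structural sketch of GMRES-based iterative refinement in which the
    correction equation A d = r is solved by GMRES preconditioned with a
    sparse approximate inverse M ~ A^{-1} (uniform precision only)."""
    x = np.zeros_like(b)
    M_op = spla.LinearOperator(A.shape, matvec=lambda v: M @ v)
    for _ in range(n_steps):
        r = b - A @ x                        # residual
        d, info = spla.gmres(A, r, M=M_op)   # preconditioned correction solve
        x = x + d                            # update
    return x

# Example with a trivial (diagonal) approximate inverse as a stand-in for SPAI:
n = 100
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
M = sp.diags(1.0 / A.diagonal())
b = np.ones(n)
x = spai_gmres_ir(A, b, M)
print(np.linalg.norm(b - A @ x))
```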
The analysis of SPAI-GMRES-IR shows that as long as ε and u_f satisfy the constraints u_f_2(A^T) ≲ε≲ u^-1/2κ_∞(A)^-1/2, then the constraints on condition number for forward error and backward error to converge are the same as for five-precision GMRES-IR with the full LU factors, although it is clear that convergence of the GMRES solves may be slower. In 2022, Graillat et al. proposed an adaptive, mixed precision algorithm for computing sparse matrix-vector products that adaptively selects the precision in which each matrix element is stored and applied by splitting them into buckets based on their magnitude and then using progressively lower precisions for the buckets with smaller elements <cit.>. In this work, we apply the idea proposed in <cit.> to the application of the computed SPAI M within SPAI-GMRES-IR. We call this approach BSPAI-GMRES-IR, where the `B' stands for `bucketed'; the components of M are split into different buckets, with a different precision associated with each bucket. In Section <ref> we give background on SPAI preconditioners and the adaptive precision sparse matrix-vector product approach in <cit.>, and discuss bucketed SPAI and recent related approaches. In Section <ref>, we analyze under which conditions the BSPAI-GMRES-IR will converge and bound the forward and backward errors. In Section <ref> we perform a set of numerical experiments which illustrate the behavior of BSPAI-GMRES-IR. In Section <ref> we conclude and discuss future work. § BACKGROUND §.§ Notation First we mention some notation which will be used in rest of the text. Important for us will be the condition numbers. For a given matrix A, and a vector x, and a norm p, we define κ_p(A) = ‖ A^-1‖_p‖ A‖_p,_p(A) = ‖ |A^-1||A|‖_p,_p(A,x) = ‖ |A^-1||A||x|‖_p/‖ x ‖_p, where |A|=(|a_ij|). In case p is not specified we assume the norm to be infinity. For unit roundoffs we will use the notation u and subscripts on u to distinguish various precisions. For rounding error analysis, we will use the notation γ_k = ku/1-ku, γ̃_k=cku/1-cku, where c is a small constant independent of problem dimension. A superscript on γ indicates that the corresponding u has that superscript as a subscript; for example, γ_k^f = ku_f/(1-ku_f). The quantities computed in finite precision will be denoted by hats. §.§ Sparse Approximate Inverse Preconditioners Sparse approximate inverse preconditioning is based on the idea of explicitly constructing a matrix M≈ A^-1. Although SPAI is a general algebraic preconditioning technique and is thus not expected to be effective for every problem, the use of SPAI-type preconditioners within Krylov subspace methods has the advantage that the application of the preconditioner involves only matrix-vector products, unlike, e.g., LU-based preconditioners which require two triangular solves. There are many potential techniques for computing a sparse approximate inverse M; see the survey <cit.>. A popular approach based on Frobenius norm minimization produces a sparse approximate inverse in unfactored form (i.e., a single matrix M), in which M is computed as the solution to min_𝒥∈𝒮‖ I-AM‖_F, where 𝒥∈𝔹^n× n is a prescribed binary sparsity pattern in the set of all possible binary sparsity patterns 𝒮∈𝔹^n× n. The benefit is that we can decouple this minimization problem as min_𝒥∈𝒮‖ I-AM‖_F^2 = ∑_k=1^n min_𝒥_k∈𝒮_k‖ e_k-Am_k‖_2^2, where 𝒥_k, m_k, and e_k represent the kth columns of 𝒥, M, and I, respectively. The computed M is then reduced to solving a linear least squares problem for each column m_k of M. 
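For a fixed pattern, this column-wise construction can be sketched as follows; the choice of the pattern of A itself, the helper names, and the dense least squares solve are illustrative assumptions rather than a reproduction of any particular SPAI code.

```python
import numpy as np
import scipy.sparse as sp

def static_spai(A, pattern=None):
    """Frobenius-norm SPAI with a fixed sparsity pattern: each column m_k
    solves min || e_k - A m_k ||_2 restricted to the pattern of column k."""
    A = sp.csc_matrix(A)
    n = A.shape[0]
    J = sp.csc_matrix(pattern) if pattern is not None else A   # pattern source
    M = sp.lil_matrix((n, n))
    for k in range(n):
        Jk = J[:, k].nonzero()[0]               # allowed nonzero positions of m_k
        if Jk.size == 0:
            continue
        Ak = A[:, Jk]
        Ik = np.unique(Ak.nonzero()[0])         # rows touched by the selected columns
        Abar = Ak[Ik, :].toarray()              # |I_k| x |J_k| reduced matrix
        ebar = np.zeros(Ik.size)
        pos = np.searchsorted(Ik, k)
        if pos < Ik.size and Ik[pos] == k:      # e_k restricted to the rows I_k
            ebar[pos] = 1.0
        mbar, *_ = np.linalg.lstsq(Abar, ebar, rcond=None)
        for j, val in zip(Jk, mbar):
            M[j, k] = val
    return M.tocsr()

# Example: pattern of A itself for a small tridiagonal matrix
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(30, 30))
M = static_spai(A)
print(np.linalg.norm(sp.eye(30).toarray() - (A @ M).toarray()))
```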
From a performance point of view, the benefit is that these linear least squares problems are solved independently and in parallel. Early works based on this approach used a fixed prescribed sparsity pattern 𝒥. The set 𝒥_k extracts column indices of A that are relevant for solving for a column m_k. The nonzero rows of the submatrix A(:, 𝒥_k) are represented by the so-called “shadow” of 𝒥_k, ℐ_k = { i∈{1,…, n}: ∑_j∈𝒥_k |a_ij|≠ 0}, where a_ij is the (i,j) entry of A. Thus each term in the summation on the right in (<ref>) can be reduced to min_𝒥(m̅_k) = 𝒥_k‖e̅_k - A̅_k m̅_k ‖_2, where A̅_k = A(ℐ_k, 𝒥_k)∈ℝ^|ℐ_k|,|𝒥_k|, m̅_k = m_k(𝒥_k)∈ℝ^|𝒥_k|, e̅_k = e_k(ℐ_k)∈ℝ^|ℐ_k|, and 𝒥(m̅_k) is the binary sparsity pattern of m̅_k. This results in small least squares problems which can be solved directly, for example, via QR factorization. The deficiency of this approach is that it is hard to predict a sparsity pattern a priori that will ensure an effective preconditioner. Mostly common choices used are the sparsity pattern of A, A^T, or a power of a sparsified A, although generally its not guaranteed that the preconditioner produced will be effective. For overcoming this difficulty, many authors proposed iterative approaches. In one such approach, one starts with an initial sparsity pattern and adds nonzeros to this pattern until ‖ e_k - Am_k‖_2≤ε becomes true for some threshold ε or the maximum number of nonzeros has been reached. For a more detailed explanation of this type algorithm, see, e.g., the work by Cosgrove et al. <cit.>, Grote and Huckle <cit.>, and Gould and Scott <cit.>. The most successful among these algorithms is that of Grote and Huckle <cit.> which is commonly used to compute a SPAI preconditioner <cit.>, and which we use in the present work. To overcome the difficulty of choosing the sparsity pattern a priori for a resulting effective preconditioner, the authors in <cit.> proposed an adaptive approach that dynamically determines the most beneficial nonzero indices to include. Algorithm <ref> is one specific variant of Grote and Huckle's algorithm, which is taken from <cit.>. The algorithm requires an input matrix A, 𝒥 as the initial binary sparsity pattern, ε as the convergence tolerance, α, for the maximum number of iterations for each column, and β, for the maximum number of nonzeros added to the pattern in each iteration. The algorithm for each column solves the linear least squares problem (<ref>) for a given initial sparsity pattern 𝒥 and computes the residual s̅_k (lines <ref>-<ref>). This column is considered finished when the 2-norm of the residual is less than the threshold ε. Otherwise, we continue adding entries to 𝒥. We construct an index set ℒ_k in line <ref> which contain the nonzeros entries in s̅_k. From the index set ℒ_k, for every element ℓ we go through that ℓth row of A and choose the column indices of the nonzero entries for which we define a set name 𝒥̃_k which are not 𝒥_k. The set 𝒥̃_k is the union of the sets 𝒩_ℓ which contain the potential indices that can be added to 𝒥_k, out of which we select only a subset of the “most important” indices. There are many ways to determine which indices are most important. Grote and Huckle's technique considers a univariate minimization problem, through which the quantity ρ_jk computed in line <ref> gives a measure of the 2-norm of the new residual if index j is added to 𝒥_k. A well-known heuristic (see, e.g., <cit.>) is to mark indices as “acceptable” if their ρ_jk is less than the arithmetic mean ρ̅_k over all j. 
Then we choose up to β of the best (smallest ρ_jk) indices acceptable to add (lines <ref>-<ref>) in each of the α iterations. In line <ref> there is no need to recompute the QR factorization fully in each step; the factorization can be updated by using the QR factorization computed in the previous step and the entries added to A̅_k; see <cit.>. Typical values for the parameters are ε∈ [0.1,0.5], α∈{1,…,5}, and β∈{3,…,8} <cit.>. In SPAI, although each column can theoretically be computed in parallel, the construction is often costly, specially for large-scale problems; see, e.g., <cit.>. SPAI memory requirements scale quadratically and the computational cost scales cubically in the number of nonzeros per row <cit.>. Thus applying the bucketing idea to sparse approximate inverse preconditioner in which low precision is used for the buckets containing elements of smaller magnitude has the potential to significantly reduce this cost. For modern hardware like GPUs, the construction of efficient sparse approximate inverse computations has been the subject of much recent work; see, e.g., <cit.>. §.§ Adaptive Precision Sparse Matrix-Vector Products As mentioned, with the emergence of low precision arithmetic, such as half precision fp16 or bfloat16 on modern computers, mixed precision algorithms in numerical linear algebra have received renewed attention. Many variants of mixed precision algorithms have been recently proposed; see, for example, the works <cit.> on matrix multiplication. The works <cit.> proposed mixed precision iterative refinement methods based on preconditioned Krylov subspace methods. The authors in <cit.> proposed a general preconditioning technique based on a low-rank approximation of the error. A particularly fruitful idea is the concept of adaptive precision algorithms, in which the precisions used need not be determined a priori, but are instead dynamically set based on the data involved in the computation and perhaps some user-specified accuracy constraints. Often, the precisions chosen are proportional to importance of the data, which is inherently application dependent. For example, the authors in <cit.> introduced an adaptive precision block Jacobi preconditioner with idea of choosing the precision of each block based on its condition number. Amestoy et al. <cit.> introduced mixed precision block low rank compression that partitions a low rank matrix into several low-rank components of decreasing norm and stores each of them in a correspondingly decreasing precision. Ahmad et al. <cit.> introduced an algorithm for sparse matrix-vector products that switches the elements in the range of [-1, 1] to single precision while keeping the other elements in double precision. The authors in <cit.> develop a “quantized” dot product algorithm, adapting the precision of each vector element based on its exponent. In recent work, which is the focus of the present paper, Graillat et al. <cit.> develop an adaptive precision sparse matrix-vector product algorithm with the idea of adapting the precision of each matrix element based on its magnitude. The elements of the matrix are split into different buckets and different precisions are used to store and compute with elements in each bucket. Buckets with smaller elements are stored in lower precision. This approach is used to apply the matrix A to a vector within GMRES-IR with Jacobi preconditioning. We now give an overview of the results of <cit.>. 
For matrix-vector products in a uniform precision, the Oettli-Prager<cit.>, <cit.> and Rigal-Gaches<cit.>, <cit.> theorems give the formula for normwise backward error, ε_nw=min{ε:ŷ = (A+Δ A)x, Δ A ≤εA } = ŷ-y/yx. A bound on the normwise backward error for the uniform precision case is ε_nw≤ pu, where p is the maximum number of nonzero elements per row of A; see, e.g., <cit.>. The idea of the adaptive precision sparse-matrix vector product approach of Graillat et al. <cit.> is, for a given set of q precisions i.e u_1 < u_2 … < u_q, to split the elements of the matrix A into q buckets based on the magnitude of the elements. Using this approach splits the nonzeros elements in each row i of the computed M into up to q buckets and then computes the partial inner products associated with each bucket in up to q different precisions. The partial inner products are then all summed in precision u_1. We briefly recall the notation, algorithm, and key points of the error analysis given in <cit.>. Let J_i denote the set of column indices of the nonzero elements in row i of A. Each row i of the matrix A will be partitioned into the q buckets B_ik⊂ [1,n] for k=1:q. How we define the buckets will affect the resulting normwise (or componentwise) backward error. Assume that we want to construct the buckets B_ik in such a way that the backward error obtained is at most of order O(ϵ), where ϵ is the user defined target accuracy with ϵ ≥ u_1. We can define the buckets as B_ik = { j∈ J_i : |a_ij| ∈ P_ik}, with P_ik= (ϵ‖ A ‖/u_2, +∞) for k=1 (ϵ‖ A ‖ /u_k+1, ϵ‖ A ‖/u_k] for k=2:q-1 [0, ϵ‖ A ‖/u_q] for k=q . The procedure for placing elements of a matrix A into buckets according to this rule is given in Algorithm <ref>. The partial inner product y_i^(k) = ∑_j∈ B_ik a_ijx_j associated with bucket B_ik is computed in precision u_k, and all partial inner products are accumulated in precision u_1 (the highest precision). This procedure is given in Algorithm <ref>. Theorem 3.1 in <cit.> states that if y=Ax is computed using this approach, then we have ε_nw≤ (q-1) u_1 + cϵ, where c= (1+ (q-1)u_1 )+ max_i∑_k=1^qp_ik^2(1+u_k)^2, and p_ik is the number of elements in B_ik. We note that Graillat et al. also provide different bucketing strategies that give guaranteed bounds on the componentwise backward error. The drawback of these is that the bucketing scheme depends on the values in the vector x to be multiplied, and thus the bucketing would need to be redone for each matrix-vector product encountered. Thus for practical reasons we restrict ourselves to the variant which provides normwise error bounds. § GMRES BASED ITERATIVE REFINEMENT WITH BSPAI Our approach will be to apply the adaptive precision sparse-matrix vector product described in Section <ref> to the application of a sparse approximate inverse preconditioner M computed using <ref> within GMRES-based iterative refinement. The resulting algorithm, which we refer to as BSPAI-GMRES-IR, is given as Algorithm <ref>. Our aim is to derive the conditions under which BSPAI-GMRES-IR (Algorithm <ref>) will converge. We can determine the resulting backward and forward errors in GMRES when we use the adaptive precision SpMV to apply the preconditioner M within each GMRES iteration. We will assume here that matrix-vector products with A are computed in precision u_p within GMRES (where we will generally take u_p=u_g=u, using the notation of <cit.>). 
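Concretely, applying M (or A) to a vector with this bucketing scheme amounts to the following per-row computation, here simulated with NumPy dtypes for three precisions (double, single, half). The names, the dense row representation, and the use of the largest row entry as a stand-in for ‖A‖ in the example are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

# unit roundoffs u_1 < u_2 < u_3 for (double, single, half); u_1 is the
# highest precision and is also used to accumulate the partial sums
U = [2.0**-53, 2.0**-24, 2.0**-11]
DTYPES = [np.float64, np.float32, np.float16]

def bucketed_row_dot(a_row, x, eps, norm_A):
    """One row of an adaptive-precision matrix-vector product: elements are
    placed into buckets by magnitude (thresholds eps*norm_A/u_k) and the
    partial inner product of bucket k is evaluated in precision u_k."""
    q = len(U)
    total = np.float64(0.0)
    for k in range(q):
        lo = eps * norm_A / U[k + 1] if k + 1 < q else 0.0
        hi = np.inf if k == 0 else eps * norm_A / U[k]
        mask = (np.abs(a_row) > lo) & (np.abs(a_row) <= hi)
        if mask.any():
            dt = DTYPES[k]
            partial = np.dot(a_row[mask].astype(dt), x[mask].astype(dt))
            total += np.float64(partial)        # accumulate in precision u_1
    return total

# Example on one row with widely varying magnitudes:
rng = np.random.default_rng(1)
a_row = rng.standard_normal(200) * np.logspace(0, -12, 200)
x = rng.standard_normal(200)
y = bucketed_row_dot(a_row, x, eps=2.0**-37, norm_A=np.max(np.abs(a_row)))
print(abs(y - np.dot(a_row, x)))
```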
Note that we could, in principle, also use the adaptive precision SpMV to apply A to a vector; extending the analysis to this case is simple and the results will not be significantly different as long as u_p ≈ϵ_bspai. We give backward and forward error bounds for GMRES for this case as well below. Following <cit.> and <cit.>, let z_j = MA v̂_j be computed in each iteration of MGS-GMRES as described above, where A is applied in precision u_p and M is applied using the adaptive precision SpMV approach (Algorithm <ref>). Then (A+Δ A) v̂_j = ŵ_j, Δ A _F≤γ_q^p A_F (M + Δ M) ŵ_j = ẑ_j, Δ M_F ≤((q-1)u_1 + cϵ) M_F. Then ẑ_j = (M+Δ M)(A+Δ A) v̂_j ≈ (MA + MΔ A + Δ MA)v̂_j = MAv̂_j + f_j, where f_j = (MΔ A + Δ A M)v̂_j. We can bound the norm of this quantity by f_j _2 ≲γ_q^p M_F A_F + ( (q-1) u_1 + c ϵ) MA ≤ (q u_p + (q-1)u_1 + cϵ) M _F A_F v̂_j _2. This means that we can apply <cit.> with ϵ_p = (qu_p + (q-1)u_1 + c ϵ) M_FA_F/MA_F. Note also that we must apply the preconditioner M to the right-hand side r̂_i. Denoting s_i = Mr̂_i, the computed ŝ_i satisfies ŝ_i = (M+Δ M) r̂_i = s_i + Δ M r̂_i. We then have ŝ_i - s_i _∞ ≤((q-1)u_1+cϵ) M_∞r̂_i _∞ ≤((q-1)u_1+cϵ) κ_∞(M) s_i _∞. Letting Ã=MA, and assuming we are solving the n× n linear system Ãd_i=ŝ_i, the conclusions of <cit.> say that for MGS-GMRES in working precision u_g, except for products with à which satisfy fl(Ãv) = Ãv + f, f_2 ≲ϵ_p Ã_F v_2, as long as σ_min(Ã) ≳(k^1/2(qu_p + (q-1)u_1 + c ϵ) M_FA_F/Ã_F + γ̃_kn^g) Ã_F, then for some step k≤ n, the algorithm produces an approximate solution d̂_i satisfying (à + ΔÃ) d̂_i = ŝ_i + Δŝ_i, Ã_F ≲(k^1/2(qu_p + (q-1)u_1 + c ϵ) M_FA_F/Ã_F + γ̃_kn^g) Ã_F, Δŝ_i_2 ≲γ̃_kn^g ŝ_i_2 ≲ n^1/2γ̃_kn^g s_i_∞. From (<ref>), we can write s_i - Ãd̂_i = ΔÃd̂_i - (ŝ_i - s_i ) - Δŝ_i, which we can bound using (<ref>), (<ref>), and (<ref>), giving ‖ s_i - Ãd̂_i ‖_∞ ≤ΔÃ_∞d̂_i_∞ + ŝ_i - s_i_∞ - Δŝ_i ≤ n (k^1/2(qu_p + (q-1)u_1 + c ϵ) M_FA_F/Ã_F + γ̃_kn^g) Ã_∞d̂_i_∞ ≤+ ((q-1)u_1+cϵ) κ_∞(M) s_i _∞ + n^1/2γ̃_kn^g s_i_∞ ≤ n (k^1/2n(qu_p + (q-1)u_1 + c ϵ) κ_∞(M) + γ̃_kn^g) Ã_∞d̂_i_∞ ≤+ ((q-1)u_1+cϵ) κ_∞(M) s_i _∞ + n^1/2γ̃_kn^g s_i_∞ ≤ kn^2 ( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ) ( Ã_∞d̂_i_∞ + s_i _∞). Thus the normwise relative backward error of the system Ãd̂_i = s_i is bounded by s_i - Ãd̂_i_∞/Ã_∞d̂_i_∞ + s_i _∞≲ f(n,k)( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ), and thus the relative error of the computed d̂_i is bounded by d_i -d̂_i_∞/d_i_∞≲ f(n,k)( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ) κ_∞(Ã), where f(n,k) = kn^2. From (<ref>) and (<ref>), we can say that if u_1≈ϵ≈ u_p, then the backward and forward errors in MGS-GMRES with adaptive precision SpMV used to apply M will be approximately the same as the case of uniform precision SpMV; see <cit.>. We note that in the case where we use the adaptive precision SpMV also in applying the matrix A to a vector within GMRES, the bound for the normwise relative backward error in (<ref>) becomes s_i - Ãd̂_i_∞/Ã_∞d̂_i_∞ + s_i _∞≲ f(n,k)( u_g + (2(q-1)u_1+(c_A+c_M)ϵ)κ_∞(M) ), where we assume that the same buckets are used for both M and A, and c_A and c_M are the values of c in (<ref>) associated with A and M, respectively. Similarly, the relative forward error becomes d_i -d̂_i_∞/d_i_∞≲ f(n,k)( u_g + (2(q-1)u_1+(c_A+c_M)ϵ)κ_∞(M) ) κ_∞(Ã). Thus if u_1≈ϵ, MGS-GMRES with the adaptive precision SpMV used for applying both M and A will produce backward and forward errors similar to the MGS-GMRES variant in <cit.> with the setting u_p = u_1. 
§ NUMERICAL EXPERIMENTS We perform numerical experiments to evaluate the performance of BSPAI-GMRES-IR by comparing it with SPAI-GMRES-IR in <cit.>. We stress that we only expect BSPAI-GMRES-IR to have a clear potential advantage over SPAI-GMRES-IR for the case u_f=u. Otherwise, for example, if u_f= half and u= single, SPAI-GMRES-IR stores the preconditioner entirely in precision u_f but applies it in precision u. BSPAI-GMRES-IR, on the other hand, stores the preconditioner in multiple precisions, where we must have u_1= single in order to enable reading effective application precision ϵ≈ u. We also note that this motivates future work in the direction of decoupling the storage and application precisions in adaptive precision sparse matrix-vector products. All the experiments are performed in MATLAB R2021a. The matrices we tested are taken from the SuiteSparse Matrix Collection <cit.>. We run the experiments using four precisions which are half, single, double, and quadruple. For properties of these precision, see Table <ref>. For half precision, we use the library[]. We use MATLAB built-in datatypes for single and double precision and the Advanpix Multiprecision Computing Toolbox for quadruple precision; see <cit.>. The code for reproducing the experiments in this paper is available online[]. Matrices used in the experiments along with their key properties are listed in Table <ref>. We set the right-hand side to the vector with equal components and unit 2-norm in all tests. For the GMRES tolerance, we set τ = 10^-4 in the case working precision is single and τ = 10^-8 for the case working precision is double, which responds to roughly the square root of the working precision. These values are set by default used in the previous works by <cit.>, <cit.>, and are also used in practical applications. In all invocations, we use the GMRES setting u_g=u_p=u, which is commonly used in practice. We tested the matrices in Table <ref> with a subset of the settings (u_f, u, u_r)= (double, double, quad), (u_f, u, u_r)= (single, double, quad), and (u_f, u, u_r)= (half, single, double), depending on whether SPAI-GMRES-IR converges with the given precisions and value of τ. We choose the identity matrix as the initial sparsity pattern for SPAI in all tests. When A has zero entry on the diagonal, this results in a zero column in the SPAI preconditioner, as mentioned in Sedlacek <cit.>. Therefore we only choose problems with nonzero entries on the diagonal, but note that this could be remedied by either permuting A or using the initial sparsity pattern of A, which, when SPAI is run on A^T, guarantees that we obtain a with nonzero rows <cit.>. In all tests, the matrices are preprocessed with column scaling such that the absolute value of the largest value in every column of A^T is 1. This one-sided scaling was proposed in <cit.> to avoid overflow in the computation of QR due to low precision. To be specific, for obtaining M, we run SPAI on the scaled A^T D and then set M = M^T D, where is the D diagonal scaling matrix. In both BSPAI-GMRES-IR and SPAI-GMRES-IR we use the variant in which u_g=u_p=u, which is commonly used in practice. For all tests, we use β=8, which is in the range suggested by Sedlacek <cit.>. For BSPAI-GMRES-IR, when u is double, we use the precisions with u_1= double, u_2 = single, u_3 = half, and u_4=1. When u is single, we use the precisions u_1 = single, u_2 = half, and u_3=1. Note that the choice u_1=1 enables the dropping of elements in M, as described in <cit.>. 
For each linear system and given combination of precisions, we run BSPAI-GMRES-IR with various values of ϵ≥ u_1, and use the same value of ε for both BSPAI-GMRES-IR and SPAI-GMRES-IR. We report our results in a series of tables. The first column of the table lists the matrix name, and the second column indicates whether we use BSPAI or SPAI and the corresponding parameters. The third column gives the infinity-norm condition number of the preconditioned coefficient matrix. The fourth column gives information about the number of nonzeros and their storage precisions. The first number gives the total number of nonzeros, and the tuple that follows gives information about the precisions: element i in the tuple gives the number of nonzeros stored in precision u_i. The fifth column gives the storage cost of the BSPAI preconditioner with mixed precision storage as a percentage of the cost of the SPAI preconditioner with uniform precision storage (the lower the better). The final column gives information about convergence of the iterative refinement process. The first number gives the total number of GMRES iterations over all refinement steps, and element i of the tuple that follows gives the number of GMRES iterations in refinement step i. Thus the number of elements in the tuple gives the number of iterative refinement steps required until convergence of the forward and backward errors to the level of the working precision. For each setup we form one table with five columns in which first one represent matrices names, second for the preconditioner( SPAI and BSPAI), third for the condition number of the preconditioned system, fourth for the total number of nonzeros (number of nonzeros in each bucket) and the last column is for the information about the number of GMRES-IR refinement steps and GMRES iterations per refinement step. §.§ Experiments with (u_f, u, u_r) = (double, double, quad) Table <ref> shows the experiments for the setting (u_f, u, u_r)= (double, double, quad), with both ϵ=2^-53 and ϵ=2^-37. First, we note that where SPAI-GMRES-IR converges, BSPAI-GMRES-IR also converges, as predicted by our theoretical results, although of course the adaptive precision storage can result in a different total number of GMRES iterations across the refinement steps. The performance for the matrix using ε=0.1 with ϵ=2^-53 and ϵ=2^-37, BSPAI-GMRES-IR takes 21 total GMRES iterations to converge to double precision accuracy while SPAI-GMRES-IR takes total 14 GMRES iterations. The storage (and computation) savings of the adaptive precision approach can be significant for this case; using ϵ = 2^-53 and ϵ=2^-37, requires only 74.9% and 42.6% of the storage/computation cost as the uniform precision approach, respectively. This matrix perhaps represents a best-case scenario. For , we also see reasonable reductions in storage cost for the two choices of ϵ; note that although the choice ϵ=2^-37 results in significant storage savings, the number of GMRES iterations required increases significantly. For some matrices, such as and , there appears to be no benefit to the adaptive precision approach. §.§ Experiments with (u_f, u, u_r) = (single, single, double) Table <ref> shows the experiments for the setting (u_f, u, u_r)= (single, single, double) with u_1 = single, u_2= half, and u_3= 1, and with ϵ=2^-24 and ϵ=2^-18. In all tests, both SPAI-GMRES-IR and BSPAI-GMRES-IR converge to single precision accuracy. 
For the value ϵ=2^-24, BSPAI-GMRES-IR takes about the same number of iterations as SPAI-GMRES-IR and requires an average of 98.6% of the storage cost as the uniform precision approach. The performance of BSPAI-GMRES-IR for the matrix using ϵ=2^-18 requires 76.5% of the storage cost as uniform precision and converges in the same number of iteration as SPAI-GMRES-IR. For matrices , , and , although using ϵ=2^-18 results in storage costs of 66.7%, 70.7% and 73.8% of the uniform precision approach, respectively, a greater number of iterations are required than in the uniform precision SPAI-GMRES-IR. § CONCLUSIONS AND FUTURE WORK In this work we use an adaptive precision sparse approximate inverse preconditioner within mixed precision GMRES-based iterative refinement. Using the approach of Graillat et al. <cit.>, after computing a sparse approximate inverse in low precision, we place elements of the sparse approximate inverse preconditioner into buckets for a given set of precisions based on their magnitude. We then apply the preconditioner to a vector in mixed precision within five precision GMRES-IR; we call this algorithm variant BSPAI-GMRES-IR. We then analyze the behavior of the backward and forward errors of mixed precision left-preconditioned GMRES method, which uses the bucketed sparse approximate inverse as a left preconditioner. Our analysis shows that if we choose u_1≈ε≈ u_p, then the normwise backward and forward errors will be close to those we get in the case that we use uniform precision. This indicates that BSPAI-GMRES-IR will converge under the same conditions as SPAI-GMRES-IR. We performing a set of numerical experiments which shows that the adaptive sparse-matrix vector product approach can reduce the cost of storing and applying the sparse approximate inverse preconditioner, although a significant reduction in cost often comes at the expense of increasing the number of GMRES iterations required for convergence. We note that it is possible to extend this approach to other preconditioners for Krylov subspace methods. We again stress that a fruitful potential area of future work is to extend the adaptive sparse-matrix vector product approach to decouple the storage and computation precisions. This would make this approach beneficial for existing cases where we would ideally like to store a matrix in lower precision and apply it to a vector in a higher precision, which is often the case within SPAI-GMRES-IR. Other potential future work involves the development and analysis of other adaptive-precision matrix computations, such as triangular solves. siamplain
http://arxiv.org/abs/2307.05636v2
20230711091309
Impact of the $^6$Li asymptotic normalization constant onto $α$-induced reactions of astrophysical interest
[ "Chloë Hebborn", "Melina L. Avila", "Konstantinos Kravvaris", "Gregory Potel", "Sofia Quaglioni" ]
nucl-th
[ "nucl-th", "astro-ph.SR", "nucl-ex" ]
[1] [1] [1] [1] #1 et al. [1] (<ref>) [1] Eq. (<ref>) [2] Refs. <cit.> [1] Sec. <ref> [1] Chapter <ref> [1] Appendix <ref> [1] Table <ref> [1] Fig. <ref> [1] ^#1 Schrödinger [2] ⟶_#1→#2 ^∘ [1]#1 [1]#1 [1]#1 [1]#1 [2]∂#1∂#2 [2]d#1d#2 [email protected] Facility for Rare Isotope Beams, Michigan State University, East Lansing, Michigan 48824, USA Lawrence Livermore National Laboratory, P.O. Box 808, L-414, Livermore, California 94551, USA Physics Division, Argonne National Laboratory, Lemont, IL 60439, USA Lawrence Livermore National Laboratory, P.O. Box 808, L-414, Livermore, California 94551, USA Lawrence Livermore National Laboratory, P.O. Box 808, L-414, Livermore, California 94551, USA Lawrence Livermore National Laboratory, P.O. Box 808, L-414, Livermore, California 94551, USA LLNL-JRNL-851130 Indirect methods have become the predominant approach in experimental nuclear astrophysics for studying several low-energy nuclear reactions occurring in stars, as direct measurements of many of these relevant reactions are rendered infeasible due to their low reaction probability. Such indirect methods, however, require theoretical input that in turn can have significant poorly-quantified uncertainties, which can then be propagated to the reaction rates and have a large effect on our quantitative understanding of stellar evolution and nucleosynthesis processes. We present two such examples involving α-induced reactions, ^13C(α,n)^16O and ^12C(α,γ)^16O, for which the low-energy cross sections have been constrained with (^6Li,d) transfer data. In this Letter, we discuss how a first-principle calculation of ^6Li leads to a 21% reduction of the ^12C(α,γ)^16O cross sections with respect to a previous estimation. This calculation further resolves the discrepancy between recent measurements of the ^13C(α,n)^16O reaction and points to the need for improved theoretical formulations of nuclear reactions. Impact of the ^6Li asymptotic normalization constant onto α-induced reactions of astrophysical interest S. Quaglioni August 12, 2023 ======================================================================================================== Introduction: Nuclear fusion reactions involving the capture of a helium-4 nucleus (α-particle) from a light or medium-mass isotope to form a heavier nucleus and a neutron (n) or a high-energy photon (γ) are central to understanding the lifecycle of massive stars, from driving the nucleosynthetic processes that make them shine and evolve <cit.> to determining their remnants after their eventual death <cit.>. For example, the ^13C(α,n)^16O and ^22Ne(α,n)^25Mg reactions are the principal sources of neutrons fueling the slow neutron capture process (s-process) in asymptotic giant branch (AGB) stars, which is responsible for the formation of half of the elements heavier than iron <cit.>. Similarly, the ^12C(α,γ)^16O reaction is not only a key process in the sequence of helium burning reactions that produces carbon and oxygen in red giant and supergiant stars <cit.> but also determines the ratio of the amount of ^12C to that of ^16O, which has profound repercussions on the later evolutionary phases and nucleosynthesis events of these stars, and their ultimate fate once they explode as supernovae <cit.>. Arriving at a quantitative and more fundamental understanding of the life and death of massive stars requires accurate and precise knowledge of α-induced reaction rates at stellar energies. 
In the case of ^12C(α,γ)^16O, ideally the reaction rate would need to be known within ∼ 10% uncertainty or less at center of mass energies of ∼300 keV <cit.>. Typically, α-induced reactions are very difficult or impossible to measure in the range of (low) energies where they occur in stars because the Coulomb repulsion between the α particle and the nucleus suppresses the reaction probability (cross section) below the background of cosmic rays. Underground facilities, such as the Gran Sasso National Laboratories (LUNA) <cit.> or the China Jinping Underground Laboratory (JUNA) <cit.>, help reduce this background and reach the low energies relevant for astrophysics. However, a direct measurement of, e.g., the ^12C(α,γ)^16O reaction below ∼300 keV remains unfeasible and down extrapolations from higher-energy measurements rely on theory. For the description of low-energy reactions involving systems made of more than A=12 nucleons, where accurate microscopic predictions based on validated models of the nuclear interactions <cit.> are out of reach and few-body models using effective potentials between structureless reactants <cit.> do not provide the required predictive power, phenomenological R-matrix theory <cit.> has been the tool of choice. In this technique, cross sections are reconstructed from a relatively small number of parameters that have a physical meaning and are adjusted to reproduce available experimental data. R-matrix analyses have been used extensively to evaluate S-factors[Astrophysical S-factors correspond to the cross sections rescaled to remove the effect of the Coulomb barrier.] of astrophysical interest, including ^13C(α,n)^16O <cit.> and ^12C(α,γ)^16O <cit.>, and extrapolate them down to stellar energies. However, the fit of the parameters can result in sizeable uncertainties on the extrapolated S-factors, in particular when different data sets are inconsistent with each other. Moreover, the presence of loosely-bound states can further influence the extrapolation process <cit.>. To reduce uncertainties, parameters related to the asymptotic normalization of bound-state wave functions can be fixed using information gleaned from measurements of (^6Li,d) [or (^7Li,t)] α-transfer processes, in which an α-particle is transferred from a loosely-bound ^6Li (^7Li) to another nucleus A to form the A-α bound state of interest <cit.>. In this Letter, we discuss how a recent first-principle prediction for the ^6Li nucleus <cit.> influences the low-energy properties extracted from (^6Li,d) transfer data and reconciles two recent R-matrix evaluations of the ^13C(α,n)^16O reaction by the LUNA <cit.> and JUNA collaborations <cit.>. We also estimate the significant impact of this ^6Li prediction on the R-matrix evaluation of the ^12C(α,γ)^16O rate, and argue for future theoretical and experimental studies to improve the evaluation of additional α-induced reactions. Peripheral reactions to extract ANCs: Non-resonant low-energy radiative α-capture reactions A+α→ B +γ dominated by electric transitions are peripheral, i.e., do not probe the interior of the A-α bound state (Fig. <ref>) and their cross sections scale with the square of its asymptotic normalization constant (ANC, denoted as 𝒞_A-α) <cit.>. 
The low-energy α-capture cross section can then be accurately approximated as <cit.> σ_α,γ≃ (𝒞_A-α)^2σ̂_α,γ/b^2_A-α, where σ̂_α,γ and b_A-α are the cross section and ANC obtained in a two-body model calculation that treats both the target nucleus (A) and the α as point particles (b_A-α is also referred as the single-particle ANC). Transfer reactions at energies around the Coulomb barrier are also peripheral, and hence exhibit a similar proportionality with ANCs. For example, the cross section for A+^6Li→ B (≡ A+α)+d at low energies can be accurately evaluated as σ_^6 Li,d≃ (𝒞_A-α)^2(𝒞_α-d)^2σ̂_^6 Li,d/b^2_A-αb^2_α-d , where, similar to before, σ̂_^6 Li,d and b_α-d are the cross section and ANC obtained in simplified calculations that treat ^6 Li, d and α as point particles. If the ANC of the ^6Li nucleus 𝒞_α-d is well known, one can use Eq. (<ref>) and experimental data on the transfer reaction to accurately extract 𝒞_A-α by rescaling the theoretical cross sections σ̂_^6 Li,d, typically evaluated within the distorted-wave Born approximation (DWBA), to the data <cit.>. These calculations usually consider only the s-wave ANC of ^6Li and neglect the d-wave ANC, which is two order of magnitude smaller <cit.>. A frequently adopted value of the s-wave α-d ANC in the analysis of peripheral (^6Li,d) transfer reactions <cit.> has been (𝒞_α-d)^2=5.3 ± 0.5 fm^-1. This value was determined by Blokhintsev  <cit.> using three different methods: 1) by analytic continuation of the α-d scattering phase shifts, 2) by computing the (two-body) α-d bound state using an interaction fitted to reproduce these phase shifts, and 3) by solving the Faddeev equations for the (three-body) α-n-p system using phenomenological α-nucleon interactions and neglecting the α-p Coulomb repulsion. Unfortunately, the uncertainties associated with the extrapolation procedure used in the first two methods and the ambiguity in the choice of the α-nucleon interactions combined with the omission of the Coulomb potential in the third method have not been quantified, raising the prospect for previously unrecognised systematic errors in the ANC determination. Other evaluations <cit.> of 𝒞_α-d relying on α-d phenomenological potentials are consistent with the values provided by Blokhintsev  but none of them have quantified the parametric uncertainties associated with the fit of the α-d interaction, which can be sizeable <cit.>. A new accurate determination of 𝒞_α-d through (six-body) predictions of the ^6Li system starting from two- and three-nucleon forces derived within chiral effective field theory, recently became available <cit.>. These calculations, obtained within the framework of the ab initio no-core shell model with continuum (NCSMC) <cit.>, treat the α-d scattering and bound ^6Li state on equal footing and accurately reproduce the low-energy properties of the system, including the α(d,γ)^6Li capture rate and α-d elastic scattering at energies below 3 MeV. Contrary to the determination of Blokhintsev , the uncertainties of these microscopic calculations stem from two clearly identified sources: the choice of chiral Hamiltonian and the convergence with respect to the size of the NCSMC model space. Both uncertainties were significantly reduced by introducing a fine-tuning correction to exactly reproduce the experimental binding energy of the ^6Li ground state. Compared to this first-principle prediction, (𝒞_α-d)^2=6.864 ± 0.210 fm^-1, the ANC of Blokhintsev  <cit.> is 22% smaller and exhibits larger uncertainties. 
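As a simple numerical illustration of how the updated α-d ANC propagates through the transfer relation above, the short Python sketch below rescales a squared A-α ANC extracted with the older ^6Li value; the example input ANC is a hypothetical placeholder, not a value quoted in this Letter.

```python
# The peripheral transfer cross section scales as (C_{A-alpha})^2 (C_{alpha-d})^2,
# so an ANC extracted from fixed (6Li,d) data scales inversely with the adopted
# squared alpha-d ANC.
C2_alphad_old = 5.3     # fm^-1, Blokhintsev et al.
C2_alphad_new = 6.864   # fm^-1, ab initio NCSMC prediction

factor = C2_alphad_old / C2_alphad_new
print(f"rescaling factor: {factor:.3f} "
      f"({(1.0 - factor) * 100:.1f}% reduction of the extracted ANC)")

# Hypothetical previously extracted value (placeholder, not quoted in the text):
C2_Aalpha_old = 3.0     # fm^-1
print(f"re-extracted (C_A-alpha)^2: {C2_Aalpha_old * factor:.2f} fm^-1")
```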
Impact on s-process nucleosynthesis: Because of its key role in s-process nucleosynthesis, the ^13C(α,n)^16O reaction has been measured in underground facilities in the last two years[We cite here the year of the publications of the Phys. Rev. Lett. in which the data and analysis were published.] at low energies relevant for astrophysics, i.e., at 0.23–0.3 MeV by the LUNA and at 0.24-1.9 MeV by the JUNA collaborations <cit.>. Although the two data sets are consistent, the R-matrix analyses used to extrapolate the data to lower energies lead to different S-factors. The JUNA collaboration suggested that this discrepancy is explained by the use of a different ANC for the barely bound 1/2^+ state in ^17O, which determines the overall normalization of the S-factor rate at the energies of astrophysics interest <cit.>. Indeed, in the work of Ref. <cit.> this ANC is treated as a floating parameter in the R-matrix analysis of the data, while in the analysis of Ref. <cit.> it was fixed to the value inferred by Avila  from a sub-Coulomb (^6Li,d) transfer experiment <cit.> (Fig. <ref>). In turn, however, the work of Ref. <cit.> had adopted the s-wave α-d ANC of Blokhintsev  to interpret their data. By reanalyzing the transfer data of Ref. <cit.> with the more accurate ANC obtained in Ref. <cit.>, we extract[The so-called Coulomb-modified ANC 𝒞̃ is directly proportional to the usual ANC and given by 𝒞̃ = 𝒞×ℓ !/Γ(ℓ+1+η), with ℓ being the relative angular momentum between the fragments and η the Sommerfeld parameter <cit.>.] (𝒞̃^1/2+_α-^13 C)^2=2.8 ± 0.5 fm^-1 consistent with the value obtained by the JUNA collaboration, thus reconciling the two R-matrix analyses. The re-evaluated ANC from the (^6Li,d) transfer data also agrees with the values extracted from (^11B,^7Li) <cit.> and (^7Li,t) <cit.> transfer data (Fig. <ref>). This work supports the JUNA analysis <cit.> of the thermonuclear reaction rate <cit.>, which is smaller than previous evaluations. This decrease in the flux of neutrons impacts the abundances of s-process branching point and heavier elements, in particular ^60Fe, ^152Gd and ^205Pb <cit.>. Interestingly, the rate of the other major neutron source in the s-process, ^22Ne(α,n)^25Mg, has also been constrained using the α-partial width of resonances of ^26Mg extracted from (^6Li,d) data using the the s-wave α-d ANC of Blokhintsev  <cit.>. Based on the larger value of the α-d ANC of Ref. <cit.>, we expect that this ^22Ne(α,n)^25Mg rate is overestimated, and that the overall neutron flux in the s-process is even smaller than what is currently evaluated. Because the ^22Ne(^6Li,d)^26Mg transfer reaction populates unbound ^26Mg states, the DWBA analysis of the data should be revisited to account for model uncertainties associated with the approximation introduced by treating these states as bound and then extrapolating their properties up to energies in the continuum <cit.>. We reserve this study for future work, as some development in reaction theory is first needed to adequately describe transfer reactions to states in the energy continuum, e.g., through the generalization of the Green's function based formalisms <cit.> to α-transfer reactions. 
Impact on ^12 C(α,γ)^16 O: A recent R-matrix analysis of ^16O data (including ^12C(α,γ)^16O capture, β-delayed α-emission, α-^12C elastic scattering measurements, and recent evaluations of bound- and resonant-state properties) found that the extrapolated ^12C(α,γ)^16O S-factor at stellar energies of ∼ 300 keV depends strongly on the 𝒞_α-^12 C ANC of the 1^- and 2^+ loosely-bound states in ^16O <cit.>. To arrive at their best R-matrix fit, the authors adopted (as fixed parameters) the ANCs of these and the other ^16O bound states determined from (^6Li,d) reactions <cit.> using the 𝒞_α-d value of Blokhintsev  (fourth column of Table <ref>). As in the previous case, the new accurate prediction of 𝒞_α -d from Ref. <cit.> impacts the extracted ^16O (𝒞_α-^12 C^J^π)^2 values, yielding ANCs that are ∼ 22% smaller and with reduced uncertainties (fifth column of Table <ref>). These new evaluations are consistent with the (𝒞^1^-_α-^12 C)^2 and (𝒞^2^+_α-^12 C)^2 values extracted from (^7Li,t) transfer reactions in Ref. <cit.> and <cit.> respectively, but are in tension with the 2^+ ANC from Ref. <cit.> (Table <ref>). The ^7Li ANC used in these DWBA analyses was determined using similar approaches as in Ref. <cit.> and model uncertainties are similarly not fully quantified <cit.>. In this respect, a first-principle prediction of the ^7Li ANC 𝒞_α-t would be desirable and may resolve the tension between the 𝒞_α-^12 C values extracted from (^6Li,d) and (^7Li,t) transfer data. To illustrate the impact of the 𝒞_α-^12 C^J^π values determined in this work on the low-energy ^12C(α,γ)^16O S-factor, we perform a reduced R-matrix calculation[We also included the cascade transitions, resulting from γ-ray de-excitations from the excited states to the ground state of ^16O.] starting from the best fit of Ref. <cit.> and keeping only the ^16O states with excitation energies up to 10.36 MeV. At low energies, the S-factor from such reduced R-matrix calculation is close to the comprehensive fit of Ref. <cit.> (Fig. <ref>). As expected for a peripheral reaction, upon changing the α-^12C ANCs to the values determined in this work we obtain a reduced S-factor. At 300 keV, the S-factor is almost exactly proportional to (𝒞_α-^12 C^J^π)^2 and is reduced by 21% with respect to the original fit. It is worth noting that none of the S-factors are consistent with both data sets <cit.>, demonstrating the need to renormalize measurements in R-matrix fits and the lack of predictive power of such analyses. Moreover, it also indicates that these data alone do not sufficiently constrain the ^16O ANCs and the S-factor at stellar energies. The 21% reduction of the S-factor at stellar energies will increase the ratio of the ^12C to ^16O abundances. Quantifying the impact of our new evaluations of the 𝒞_α-^12 C ANCs on the other carbon burning scenarios at temperature 1-10 GK would require reevaluating the S-factor up to 6 MeV by means of a more complete R-matrix fit, similar to the one presented in Ref. <cit.>, which included multiple reactions channels and data sets and used a robust statistical framework to quantify the uncertainties. Conclusions and prospects: We have demonstrated how a recent first-principle prediction for 𝒞_α-d impacts S-factors of astrophysical interest, for both the s-process and helium burning nucleosynthesis. Using this new 𝒞_α-d, the values for the ^13C-α ANC extracted from (^6Li,d) data now agree with values extracted from other α-transfer probes. 
Our analysis further provides an explanation for the discrepancy between the two recent LUNA and JUNA evaluations of the ^13C(α,n)^16O S-factors and reaction rates, favoring the recent JUNA evaluation <cit.>. Since the other principal neutron source in the s-process, i.e., ^22Ne(α,n)^25Mg, was also evaluated using (^6Li,d) data, our analysis suggests that the neutron flux in the s-process is smaller than what is currently evaluated. The first-principle prediction of 𝒞_α-d also impacts the ^12C(α,γ)^16O reaction rate. We find that the S-factor at 300 keV is reduced by 21%, compared to Ref. <cit.>. This reduction, when propagated into a nucleosynthesis reaction network, will increase the abundance ratio of ^12C to ^16O and impact the abundances of heavier elements. This work advocates for a new R-matrix analysis, constrained with our new evaluation of the 𝒞_^12 C-α, of the ^12C(α,γ)^16O reaction rate over the all range of energies relevant for astrophysics. In addition, we have also pointed out some developments in reaction theory that should be conducted to improve our knowledge of any α-induced reaction rate of astrophysical interest. Because inaccuracies in the reaction model used to extract structure information from transfer data propagate to the astrophysical S-factor, it is crucial to push first-principle calculations, for which uncertainties are more straightforwardly quantifiable, to heavier systems <cit.>. In parallel, analysis of transfer data using few-body models could be improved to further constrain the S-factors: these methods should be generalized to the transfer to unbound states, without requiring any extrapolation techniques, and the treatment of the reaction dynamics formalism should be improved, i.e., by going beyond the one-step DWBA description. Finally, since uncertainties can be reduced by comparing the structure information extracted from various transfer probes, first-principle predictions of ^7Li and ^11B nuclei should be conducted and compared with experiments that provide stringent constraint on these ANCs, e.g., low-energy d(^7Li,t)^6Li measurements. Acknowledgments. C. H. would like to thank B. P. Kay for inviting her for a visit at Argonne National Laboratory, during which this work started. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under the FRIB Theory Alliance award no. DE-SC0013617, and under the Work Proposal no. SCW0498 (S.Q., K.K.). Prepared in part by LLNL under Contract No. DE-AC52-07NA27344. M.A. acknowledges the support of the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract number DE-AC02-06CH11357. Computing support for this work came from the LLNL institutional Computing Grand Challenge program. apsrev
http://arxiv.org/abs/2307.07372v1
20230714142654
Norm-variation of triple ergodic averages for commuting transformations
[ "Polona Durcik", "Lenka Slavíková", "Christoph Thiele" ]
math.CA
[ "math.CA", "math.DS", "Primary 42B20, Secondary 42B15, 37A30" ]
We prove an r-variation estimate, r>4, in the norm for ergodic averages with respect to three commuting transformations. It is not known whether such estimates hold for all r≥ 2 as in the analogous cases for one or two commuting transformations, or whether such estimates hold for any r<∞ for more than three commuting transformations. Norm-variation of triple ergodic averages for commuting transformations Polona Durcik, Lenka Slavíková, Christoph Thiele ===================================================================================== § INTRODUCTION We prove the following norm variation bound for three commuting transformations. For all r>4, there exists a constant C>0 such that the following holds. Let (X,ℱ,μ) be a σ-finite measure space, T_0,T_1,T_2: X→ X mutually commuting measure preserving transformations, and let J and n_0<n_1<⋯<n_J be positive integers. For any f_0,f_1∈ L^8(X) and f_2 ∈ L^4(X), each of respective norms one, we have the bound ∑_j=1^JM_n_j(f_0,f_1,f_2) - M_n_j-1(f_0,f_1,f_2)_L^2(X)^r ≤ C , where we have defined for almost every x∈ X M_n(f_0,f_1,f_2)(x) := 1/n∑_i=0^n-1 f_0(T_0^i x) f_1(T_1^i x) f_2(T_2^i x). Norm variation bounds with r≥ 2 for one transformation are proven in <cit.> and for two commuting transformations in <cit.>, following earlier work <cit.> in the discrete setting. Norm variation bounds with any r<∞ for more than three commuting transformations remain unknown. It is natural to conjecture norm variation bounds for r≥ 2 for any number of commuting transformations. The passage from two to three commuting transformations is a critical transition as present techniques very clearly fail to address the sharp variation threshold r≥ 2. The present paper can be seen as a proof of concept that for more commuting transformations one can still prove variation bounds if one is willing to negotiate the power r. Norm variation bounds for any r are strong quantitative forms of norm convergence. Qualitative norm convergence for three or more commuting transformations was proven by Tao in <cit.> by finitary methods. The case for two commuting transformations had been known before by ergodic theory. Ergodic theoretic proofs of Tao's result were given in <cit.>, <cit.>, and a generalization in particular to transformations generating a nilpotent group was proven in <cit.>. By a variant of the well known Calderón transference principle, Theorem <ref> follows from Theorem <ref> below. We do not elaborate on the transference principle in the present paper. It is standard and, in the case of two commuting transformations, is presented in <cit.>. It reduces quantitative convergence results to the understanding of convergence on individual orbits of the action of the group spanned by the commuting transformations. For all r>4, there exists a constant C>0 such that the following holds. For any positive integer J and positive real numbers t_0<t_1<⋯ < t_J, any f_0,f_1∈ L^8(^3) and f_2∈L^4(ℝ^3) with respective norms one, we have ∑_j=1^JM_t_j(f_0,f_1,f_2) - M_t_j-1(f_0,f_1,f_2)_L^2(^3)^r ≤ C where, with e_0,e_1,e_2 the standard unit vectors in ^3, we have defined for almost every x∈^3: M_t(f_0,f_1,f_2)(x) := 1/t∫_0^t f_0(x+se_0) f_1(x+se_1) f_2(x+se_2) ds. Only the choice of tuple of exponents (8,8,4) breaks the symmetry between the three functions in the above theorems. One therefore concludes the analogous estimates for permutations of these exponents. Interpolation gives further tuples of exponents, for example the symmetric tuple (6,6,6).
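As an illustrative aside, not needed in the sequel, we record the elementary estimate behind this normalization and the exponent arithmetic; the computation is only a sketch and introduces no new notation. By the triangle inequality, Hölder's inequality with 1/8+1/8+1/4=1/2, and the fact that the transformations T_k preserve the measure, M_n(f_0,f_1,f_2)_L^2(X)≤1/n∑_i=0^n-1 f_0(T_0^i ·) f_1(T_1^i ·) f_2(T_2^i ·)_L^2(X)≤f_0_L^8(X)f_1_L^8(X)f_2_L^4(X), so the averages under consideration have L^2 norm at most one; the same computation, with Minkowski's integral inequality, applies to the continuous averages M_t. Moreover, (1/6,1/6,1/6) is the arithmetic mean of the three permutations of (1/8,1/8,1/4), so the symmetric tuple (6,6,6) is among the tuples reachable by multilinear interpolation, with equal weights, between the three permuted versions of the above estimates.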
Theorem <ref> is proven in Section <ref> using the theory of singular Brascamp-Lieb forms. A singular Brascamp-Lieb datum D=(n,S,Π,(Π_s)_s∈ S) consists of an integer n≥ 1, a finite set S, and surjective linear maps Π and Π_s for s∈ S on the domain ^n. Together with some singular integral kernel K on the range of Π, the singular Brascamp-Lieb form Λ_D,K is defined as Λ_D,K((f_s)_s∈ S)=∫_^n K(Π x)∏_s∈ S f_s(Π_sx) dx , where the integral is defined in a principal value sense or, if the kernel has additional qualitative regularity as mostly the case in the present paper, in the Lebesgue integral sense. The form acts on a tuple of functions (f_s)_s∈ S, where for each s the function f_s is a locally integrable function on the range of Π_s. A singular Brascamp-Lieb inequality estimates this form by a constant times the product of Lebesgue norms ∏_s∈ Sf_s_p_s for some tuple of exponents p_s. A collection of singular Brascamp-Lieb inequalities needed in this paper is listed in Section <ref> and proved in Sections <ref>, <ref>, <ref>, <ref>, <ref>, <ref>. The logical order is that each proof may use propositions with higher index, two proofs use more standard estimates taken from <cit.> and <cit.>. The proof sections are independent of each other, their reading requires only memorizing the setup of Sections <ref> and <ref>. Singular Brascamp-Lieb inequalities with the kind of data appearing in this paper are studied in <cit.>, <cit.>, <cit.>, and <cit.> when K is a Calderón-Zygmund kernel. The difficulty in the present paper is that the kernels K do not satisfy uniform Calderón-Zygmund bounds but rather multi-parameter symbol estimates of a specific nature typical for variation norm bounds. Such multi-parameter singular Brascamp-Lieb forms appear in more basic form already in the case of two commuting transformations <cit.>. This difficulty is addressed by telescoping arguments in the variation sequence. A further difficulty for three versus two commuting transformations is a loss described by growth in the parameter J, for example in (<ref>), raising difficulties to carry the information of J carefully through the argument. With our present techniques, this loss is unavoidable and leads to the increase of the threshold for the variation parameter r. The idea to allow and utilize the powers of J appears similarly in the work on cancellation for the simplex Hilbert transform <cit.> and is also used in <cit.>, <cit.>. For further history on the ergodic means discussed here, we refer to the paper on two commuting transformations <cit.>. More recent developments in the wider area include some stronger pointwise almost everywhere convergence results as in the Furstenberg-Bergelson-Leibman conjecture, for example the bilinear not completely linear polynomial averages in <cit.> or the multi-parameter polynomial averages as in <cit.>. It is a longstanding open problem to prove pointwise almost everywhere results in our setting, even in the case of two commuting transformations. Acknowledgment. P. D. was partially supported by the NSF grant DMS-2154356. L. S. was supported by the Primus research programme PRIMUS/21/SCI/002 of Charles University. C. T. was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2047/1 – 390685813 as well as SFB 1060. P. D. and C. T. 
were also supported by the NSF grant DMS-1929284 while the authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the inspiring Harmonic Analysis and Convexity program. § A COLLECTION OF PROPOSITIONS ON SINGULAR BRASCAMP-LIEB FORMS This section contains a number of propositions stating cancellation estimates for singular Brascamp-Lieb forms for some data and some class of kernels and with symmetric tuples of test functions. The first four propositions and two corollaries share a common datum D_1, which, after suitable change of variables, arises directly out of the original problem in Theorem <ref>. Put coordinates x=(x_0,x_1,x_2,x_3^0,x_3^1) on ^5. Define D_1:=(5,S, Π,(Π_s)_s∈ S) with S={0,1,2}×{0,1}, with Π mapping ^5 to ^2 as Π(x)=(x_3^0-x_0-x_1-x_2, x_3^1-x_0-x_1-x_2), and with Π_s for s=(k,j) mapping ^5 to ^3 as Π_(k,j)(x)=(x_0,x_1,x_2)-x_ke_k+ x_3^j e_k. Each of the three following propositions will have a constant C, a parameter J and formulate a class of kernels K such that the singular Brascamp-Lieb estimate |Λ_D_1,K ((f_s)_s∈ S)|≤ CJ^1/2 holds for all tuples of real-valued Schwartz functions (f_s)_s∈ S such that f_(k,0)=f_(k,1) for each k∈{0,1,2} and f_(0,j)_8=f_(1,j)_8 =f_(2,j)_4 =1 for each j∈{0,1}. Define for any function ϕ on ^d the L^1 normalized scaling ϕ_(t)(x)=t^-dϕ(t^-1x). Define the Fourier transform ϕ of ϕ by integration against the kernel e^-2π i x ·ξ. The kernels in the next proposition satisfy standard two dimensional symbol estimates with bounds depending on the parameter k. They consist of pieces satisfying a positivity assumption. Such positivity assumption is used in the proof by adding further positive terms so as to achieve better behaviour on some frequency diagonal. The complexity of these kernels is bounded by J. Let λ=3/2. There exists a constant C>0 such that the following holds for all k ≤ 0. Let J be a positive integer and (k_j)_j=1^J a finite strictly monotone increasing sequence of integers. Let K =∑_j=1^J Φ_j, where for each 1≤ j≤ J we assume Φ_j is a real valued function on ^2, with symmetry Φ_j(u,v)=Φ_j(v,u) and positivity in the sense ∫_^2 f(u)f(v)Φ_j(u,v) dudv≥ 0 for all complex-valued f. We assume further supp (Φ_j)⊂ ([-2^-k_j+20,-2^-k_j-20]∪ [2^-k_j-20,2^-k_j+20])^2 and, for all (u,v)∈^2, |(Φ_j)_(2^-k_j)(u, v)| ≤ 2^λ k(1+2^k|u+v|)^-10 (1+|u-v|)^-10. Then estimate (<ref>) holds for any tuple as in (<ref>), (<ref>). We note that the particular value λ=3/2 is not essential for the proof of Proposition <ref>. Evidently, the analogous statement of the proposition becomes stronger for smaller values of λ. Our proof can be pushed to λ>1 at the expense of allowing the constant in (<ref>) to depend on λ. On the other hand, the upper bound λ<2 is needed to apply Proposition <ref> to prove Theorem <ref>. There are also constants 10 and 20 chosen in this proposition which need to be large enough but also need to relate to similar other constants in other propositions to follow. If, in the above proposition, each Φ_j is an elementary tensor of a suitable function ϕ_j with itself, then symmetry and positivity is automatic, and k is naturally chosen as 0. We formulate this as an immediate corollary. There exists a constant C>0 such that the following holds. Let J be a positive integer and (k_j)_j=1^J a finite strictly monotone increasing sequence of integers. 
Let K =∑_j=1^J ϕ_j⊗ϕ_j, where for each 1≤ j≤ J we assume ϕ_j is a real-valued function on with supp (ϕ_j)⊂ [-2^-k_j+20,-2^-k_j-20]∪ [2^-k_j-20,2^-k_j+20] and for all u∈, |(ϕ_j)_(2^-k_j)(u)| ≤ (1+|u|)^-20. Then estimate (<ref>) holds for any tuple as in (<ref>), (<ref>). We need the following technical notion of pairs in the next proposition. Let N=2^18. This large number is necessitated by a somewhat inefficient referral in the proof of Proposition <ref> to a theorem in <cit.>. A more hands-on approach should be able to make this number much more moderate. A c-pair is a pair (ϕ_0,ϕ_1) of two real valued integrable even functions satisfying the following assumptions. Their Fourier transforms ϕ_0,ϕ_1 map to [0,1], are supported on [-1,1] and constant 1 on [-2^-1, 2^-1], they satisfy (ϕ_0)^2+(1-ϕ_1)^2=1, and ϕ_0^(N+30)_∞ , ϕ_1^(N+30)_∞≤ c . Lemma <ref> below shows that there exists c such that a c-pair exists. When c is at most one million times the infimum of all positive numbers c' such that a c'-pair exists, then (ϕ_0,ϕ_1) is called a universal pair. A left window is a function ϕ such that there exists a function ψ such that (ϕ,ψ) is a universal pair. A right window is a function ϕ such that there exists a function ρ such that (ρ,ϕ) is a universal pair. Note that functions ϕ that are both left and right window may exist, but a notion of two sided windows needs caution as the corresponding functions ψ and ρ may not satisfy this notion. The kernels of the next proposition do not satisfy two dimensional symbol estimates, at least not uniformly in the choices of sequences k_j and l_j. They still consist of pieces with a positivity assumption and elementary tensor structure with only two different scales in it and have complexity controlled by J. There exists C>0 such that the following holds. Let J be a positive integer and (k_j)_j=1^J and (l_j)_j=1^J two finite sequences of integers that are interlaced in the sense that k_j+10< l_j for 1≤ j≤ J and l_j< k_j+1 for 1≤ j<J-1. Consider a kernel K=∑_j=1^J (ϕ_0,j-ϕ_1,j)⊗ (ϕ_0,j-ϕ_1,j), where, for each j, (ϕ_0,j)_(2^-k_j) is a left window and (ϕ_1,j)_(2^-l_j) is a right window. Then estimate (<ref>) holds for any tuple as in (<ref>), (<ref>). Using Corollary <ref>, we have the following Corollary of Proposition <ref>, The variant of Proposition <ref>, where the assumption k_j+10<l_j is replaced by the assumption k_j<l_j, holds. To see this corollary, we split the sequence into terms with k_j+10 ≥ l_j and k_j+10<l_j. The former terms are estimated with Corollary <ref>, while the latter are estimated with Proposition <ref>. In contrast to the last proposition, the kernel of the next proposition does not oscillate on the critical frequency diagonal ξ+η=0. The complexity still is controlled by J. We no longer have the positivity assumptions, but we do satisfy standard symbol estimates, with bounds depending on the parameter k. Let λ=3/2. There exists a constant C>0 such that the following holds for all k≤ 0. Let J be a positive integer and let (k_j)_j=1^J be a finite strictly increasing sequence of integers. Let (Φ_j)_j=1^J be a finite sequence of real valued functions on ^2. Assume that (Φ_j) ⊆{(ξ,η)∈^2 : 2^-k_j-30≤ |(ξ,η)| ≤ 2^-k_j+30}. Assume further that for all (u,v)∈^2, |(Φ_j)_(2^-k_j)(u,v)| ≤ 2^λ k(1+2^k|u+v|)^-4 (1+|u-v|)^-4 +(1+|u+v|)^-4 (1+|u-v|)^-4. Let K be defined by K=∑_j=1^J Φ_j and assume that K vanishes on the diagonal {(ξ,η)∈^2: ξ+η=0}. Then estimate (<ref>) holds for any tuple as in (<ref>), (<ref>). 
The kernel of the next proposition also vanishes on the critical diagonal. It does not satisfy standard two dimensional symbol estimates uniformly in k_j. It has no positivity assumption, but similarly to some of the positive kernels above it is a sum of J tensors with few scales in it. There exists a constant C>0 such that the following holds. Let J be a positive integer and (k_j)_j=0^J a finite increasing sequence of integers with k_j-1+10 ≤ k_j for 1≤ j ≤ J. For 1≤ j≤ J, let ϕ_0,j, ϕ_1,j, ϕ_2,j be functions such that (ϕ_0,j)_(2^-k_j-1) is a left window, while (ϕ_1,j)_(2^-k_j) and (ϕ_2,j)_(2^4-k_j) are right windows. Define K = ∑_j=1^J (ϕ_0,j-ϕ_2,j)⊗ϕ_1,j . Then estimate (<ref>) holds for any tuple as in (<ref>), (<ref>). The remaining propositions share a singular Brascamp-Lieb datum D_2. The datum D_2 arises as a reduction from D_1 after a Cauchy-Schwarz inequality. Put coordinates x=(x_0,x_1,x_2^0,x_3^0,x_2^1,x_3^1) on ^6. Define D_2:=(6,S, Π,(Π_s)_s∈ S) with S={0,1}×𝒞, where 𝒞 is the set of functions j:{0,1}→{0,1}, with Π mapping ^6 to ^3 as Π(x) = (x_2^0-x_0-x_1-x_3^0, x_2^1-x_0-x_1-x_3^0, x_3^1-x_3^0), and with Π_s for s=(k,j) mapping ^6 to ^3 as Π_(k,j)(x)= (x_k, x_2^j(0), x_3^j(1)). For this datum D_2 and a kernel K, we are interested in a loss free estimate |Λ_D_2,K ((f_s)_s∈ S)|≤ C for any tuple of real-valued Schwartz functions (f_s)_s∈ S with f_(k,j)=f_(k,j') for all k∈{0,1} and j,j'∈𝒞, and f_s_8=1 for all s∈ S. The next proposition is a variant of Proposition <ref>, adjusted to the datum D_2. The kernel has some positivity properties and pieces arising from suitable elementary tensor structure. The complexity J here is not relevant, as we obtain estimates independent of J. We write g for the Gaussian g(x)=e^-π |x|^2, typically in one dimension but occasionally in more than one dimension. We have g=g. We write h for the derivative of the Gaussian in one dimension, h(x)=-2π x g(x). Recall N=2^18. There exists a constant C>0 such that the following holds. Let α≥ 1. Let J be a positive integer and (k_j)_j=0^J a finite increasing sequence of integers with k_j-1+10 ≤ k_j for 1≤ j ≤ J. Let (m_j)_j=1^J be a sequence of real numbers with k_j-1≤ m_j≤ k_j for 1≤ j≤ J. For 0≤ j≤ J, let χ_j be a function such that (χ_j)_(2^2-k_j) is a left window and let ϕ_j be such that ϕ_j≥ 0 and (ϕ_j)^2=(χ_j-1)^2- (χ_j)^2. Let K(u,v,z)=α^-N∑_j=1^J ∫_g_(α 2^m_j)(u+p) g_(α 2^m_j)(v+p) ϕ_j(z+p)ϕ_j(p) dp. Then estimate (<ref>) holds for any tuple as in (<ref>), (<ref>). Proposition <ref> will be proven using the next two propositions. Both involve the datum D_2. Both exploit a vanishing of the function K on the critical space ξ+η=0. There is a constant C such that the following holds. Let J be a positive integer. For 1≤ i≤ 2, let (a_i,j)_j=1^J be increasing sequences of positive real numbers. For 1≤ j ≤ J, let ρ_j:^4→ be a continuous function satisfying ∫_^2 |ρ_j | (u_1+p,u_2+p,u_3+r,u_4+r) dp dr ≤ a_1,j^-1(1+a_1,j^-1|u_1-u_2|)^-2 a_2,j^-1(1+a_2,j^-1|u_3-u_4|)^-2 for every (u_1,u_2,u_3,u_4)∈^4. Let (c_j)_j=0^J be an increasing sequence of positive real numbers, well separated in that 2c_j-1≤ c_j for 1≤ j≤ J. Let χ be a left window. For 1≤ j≤ J let ϕ_j:→ be a continuous function, which exists due to the left window property of χ, satisfying ϕ_j≥ 0 and (ϕ_j)^2 = (χ_(c_j-1))^2 - (χ_(c_j))^2. Let K be defined by K(u,v,z)= ∑_j=1^J ∫_^3ϕ_j(p) ϕ_j(q) ρ_j(u+p+q+r,v+p+q+r,z+r,r) dpdqdr. Then estimate (<ref>) holds for any tuple as in (<ref>), (<ref>). 
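Before formulating the next proposition, we add an informal remark, a heuristic sketch rather than a statement used in the formal arguments, on how a vanishing hypothesis on the critical diagonal is typically exploited. If a continuously differentiable function F on ^2 vanishes on the diagonal ξ+η=0, then the fundamental theorem of calculus along the segment joining ((ξ-η)/2,-(ξ-η)/2) to (ξ,η) gives |F(ξ,η)| = |F(ξ,η)-F((ξ-η)/2,-(ξ-η)/2)| ≤ (|ξ+η|/2) sup_0≤ r≤ 1 |∂_(1,1)F((ξ-η)/2+r(ξ+η)/2, -(ξ-η)/2+r(ξ+η)/2)|, where ∂_(1,1)=∂_1+∂_2 denotes the derivative in the direction (1,1). The factor |ξ+η| gained near the diagonal is the mechanism exploited, in various forms, in the proofs below.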
The orthogonal complement V^⊥ of the subspace V={(ξ,η,τ, -(ξ+η+τ), -(ξ+η), -(ξ+η)): ξ,η, τ∈}, of ^6 can be parameterized as {(p+q+r,p+q+r,r,r,p,q): p,q,r∈}. As (<ref>) is an integral over V^⊥ of a function F in ^6, its Fourier transform is the restriction to V of the Fourier transform of F to that subspace. Hence, for some universal constant C, K(ξ,η,τ)= C∑_j=1^J ϕ_j(ξ+η)^2 ρ_j(ξ,η,τ,-ξ-η-τ). This expression shows the vanishing of K(ξ,η,τ) on the hyperplane ξ+η=0. Also in the following proposition, K vanishes on ξ+η. It is made up by a very specific part in the variables ξ,η and a rather general part in the variables τ and τ+ξ+η. There is a constant C such that the following holds. Let J be a positive integer and (a_j)_j=0^J, (b_j)_j=1^J be increasing sequences of positive real numbers. For 1≤ j≤ J let ϕ_j :^2→ be a continuous function satisfying |ϕ_j(u_1,u_2)| ≤ (b_j)^-2(1+b_j^-1|(u_1, u_2)|)^-4 . Let K be a kernel such that K(ξ,η,τ) = ∑_j=1^J ∫_a_j-1^a_j t^2(ξ+η)^2 g(tξ) g(tη) dt/t ϕ_j(τ,-ξ-η-τ). Then estimate (<ref>) holds for any tuple as in (<ref>), (<ref>). We remark on a symmetry in the datum D_2. We do a change of variables in the kernel using the linear map L(a,b,c)= (a+b-c, a-b,c). Define Π(x):=L∘Π(x)= (x_2^0+x_2^1-x_3^0-x_3^1-2(x_0+x_1), x_2^0-x_2^1, x_3^1-x_3^0). Define D_2 from D_2 by replacing Π by Π and choose K so that K∘ L=K. We obtain Λ_D_2,K((f_s)_s∈ S)=Λ_D_2,K((f_s)_s∈ S). The map Π has a symmetry under interchanging the last two entries at the same time as precomposing with the involution (x_0,x_1,x_2^0,x_3^0,x_2^1,x_3^1)↦ (x_0,x_1,-x_3^0,-x_2^0,-x_3^1,-x_2^1) This involution can be seen as acting on the tuple of functions f_s, and hence we have the following consequence for the associated form. Define K^*(a,b,c)=K(a,c,b). For j∈𝒞, define j^*∈𝒞 by j^*(l)=j(1-l) and define f^*_(k,j)(a,b,c)=f_(k,j^*)(a,-c,-b). Then Λ_D_2,K((f_s)_s∈ S) =Λ_D_2,K^*((f_s^*)_s∈ S). We finally introduce a further datum D_A, which is associated with a regular 3× 3 matrix A and has n=6. Let S be the set of functions S:{0,1,2}→{0,1}. We put coordinates x=(x_1^0,x_2^0,x_3^0,x_1^1,x_2^1,x_3^1) on ^6. We define D_A:=(6,S, Π,(Π_s)_s∈ S), where the projection Π:^6→^3 is given by Π(x)^T = (I,A) x^T, where I is the 3× 3 identity matrix. For s∈ S, Π_s: ^6 →^3 is given by Π_s(x)= (x_1^s(0), x_2^s(1), x_3^s(2)). Note that after the relabelling of the coordinates, this datum has the same components as the datum D_2 except for the choice of the projection Π. We have used transposes in (<ref>) as we usually write vectors as rows while the matrix equation (<ref>) expects columns. The datum D_A will be used the proofs of Propositions <ref>, <ref>, and <ref>. In the latter two cases we will only use it with A=-I. We conclude this section with the previously announced existence result. There exists a c>0 and a c-pair as defined near (<ref>). Let ψ :[0,∞) → be a smooth monotone decreasing function with ψ(x)=1/2 for x∈ [0, 5/6], ψ(x)=0 for x∈ [1, ∞). Let ρ:[0,∞) → be a smooth monotone increasing function with ρ(x)=0 for x∈ [0, 1/2], ρ(x)=3^1/2 for x∈ [4/6, ∞). There exists a smooth even function ϕ_0 on such that its Fourier transform is nonnegative and satisfies on [0,∞) (ϕ_0)^2= (4-ρ^2)ψ^2, because the right-hand side equals ψ^2 on [4/6,∞) and is bounded below by 1/4 on [0,5/6] and constant one on [0,1/2). 
There exists a smooth even function ϕ_1 on such that its Fourier transform is nonnegative and fulfills on the interval [0,∞) (1-ϕ_1)^2=1-(4 -ρ^2)ψ^2, because the right-hand side equals ρ^2/4 on [0,5/6] and is bounded below by 3/4 on [4/6,∞) and constant on [0,1/2]. The pair (ϕ_0,ϕ_1) then satisfies the assumptions for a c-window with c=max(ϕ_0^(N+30)_∞, ϕ_1^(N+30)_∞). This proves the lemma. We write A≲ B if there exists a constant C>0 such that |A|≤ C|B| uniformly over all values of parameters appearing in the expressions A and B. § PROOF OF THEOREM <REF> FROM PROPOSITION <REF> AND COROLLARIES <REF> AND <REF> This section follows the corresponding argument in <cit.> for two commuting transformations with minor modifications. We summarize and streamline the argument. Let J be given, without loss of generality we may assume J>2. Let also positive real numbers t_0<t_1⋯ < t_J be given. Let f_0,f_1,f_2 be real valued measurable functions on ^3, normalized as f_0_4 = f_1_8=f_2_8=1. We will prove a weak-type endpoint estimate at r=4, namely for any f_0∈ L^4(^3) and f_1,f_2∈L^8(ℝ^3) with respective norms one, ∑_j=1^JM_t_j(f_0,f_1,f_2) - M_t_j-1(f_0,f_1,f_2)_2^2 ≲ J^1/2. We call (<ref>) an endpoint estimate as it would follow from the hypothetical inequality (<ref>) with r=4 by the Cauchy-Schwarz inequality, and conversely (<ref>) implies (<ref>) for parameters r>4. Namely, (<ref>) allows by Chebyshev's inequality to estimate the number of λ-jumps of the norm by O(λ^-4), which then allows to deduce (<ref>) by a layer cake representation of the r-variation. Theorem <ref> will thus follow as soon as we prove (<ref>). We decompose the characteristic function 1_[0,1) into smoother functions. Let χ be a left window and define θ:=χ-χ_(2). Then θ is supported in [-1,-2^-2]∪ [2^-2,1] and, as detailed as in <cit.>, 1_[0,1) = 1_[0,1)∗χ + ∑_k=-∞^-11_[0,∞)∗θ_(2^k) -∑_k=-∞^-11_[1,∞)∗θ_(2^k) =: φ+ ∑_k=-∞^-1φ_0,k+ ∑_k=-∞^-1φ_1,k. For ϑ∈ L^1() we define in analogy with (<ref>) for x ∈^3 M_t^ϑ(f_0,f_1,f_2)(x) := ∫_ f_0(x+ue_0) f_1(x+ue_1) f_2(x+ue_2) ϑ_(t)(u) du. Using (<ref>) and the triangle inequality on the sum in k, it suffices to show in place of (<ref>) for every k≤ -1, ∑_j=1^JM^φ_t_j(f_0,f_1,f_2) - M^φ_t_j-1(f_0,f_1,f_2)_2^2 ≲ J^1/2 , ∑_j=1^J M^φ_0,k_t_j(f_0,f_1,f_2) - M^φ_0,k_t_j-1(f_0,f_1,f_2)_2^2 ≲ 2^2k J^1/2, ∑_j=1^J M^φ_1,k_t_j(f_0,f_1,f_2) - M^φ_1,k_t_j-1(f_0,f_1,f_2)_2^2 ≲ 2^γ k J^1/2, where γ =1/2. In fact, it will follow from our argument that inequality (<ref>) continues to hold with any γ<1, at the expense of allowing the constant in that inequality to depend on γ. The estimate (<ref>) is acceptable and the estimates (<ref>) and (<ref>) give a geometric series over k≤ -1 and are thus acceptable as well. We first prove (<ref>). We reduce further (<ref>) to the analogous estimate but with the bump function φ replaced by one whose Fourier transform is constant near the origin. Let c:=φ(0) and write φ = cχ + (φ - cχ) = cχ + ∑_l=-2^∞ (φ-cχ)∗θ_(2^l) =: cχ + ∑_l=-2^∞φ_2,l. It then suffices to show ∑_j=1^JM^χ_t_j(f_0,f_1,f_2) - M^χ_t_j-1(f_0,f_1,f_2)_2^2 ≲ J^1/2, ∑_j=1^JM^φ_2,l_t_j(f_0,f_1,f_2) - M^φ_2,l_t_j-1(f_0,f_1,f_2)_2^2 ≲ 2^-lJ^1/2. We first prove (<ref>). We split into long and short variation as in <cit.>. Enlarging the sequence t_j if necessary while at most doubling the number of terms and retaining at least a quarter of the left-hand side of (<ref>), we may assume that for each t_j there is a t_i which is an integer power of two with t_i≤ t_j<2t_i. 
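Before estimating the long and short variations, we digress briefly to justify the claim made at the beginning of this section, namely that the weak-type endpoint estimate at r=4 implies the r-variation bound for every r>4. What follows is a sketch of the standard argument; the quantities a_j and S_λ are ad hoc notation used only in this paragraph. Write a_j:=M_t_j(f_0,f_1,f_2) - M_t_j-1(f_0,f_1,f_2)_2 and, for λ>0, set S_λ:={j: a_j>λ}. The times t_j-1, t_j with j∈ S_λ, listed in increasing order, form a sequence with at most 2|S_λ| terms in which t_j-1 and t_j are adjacent for every j∈ S_λ. Applying the endpoint estimate, which holds for an arbitrary increasing sequence of times, to this subsequence and discarding the nonnegative contributions of the remaining increments gives λ^2 |S_λ| < ∑_j∈ S_λ a_j^2 ≲ |S_λ|^1/2, hence |S_λ| ≲λ^-4. Moreover a_j≤ 2, since M_t(f_0,f_1,f_2)_2≤f_0_4 f_1_8f_2_8= 1 by Minkowski's integral inequality and Hölder's inequality. For r>4, the layer cake formula then gives ∑_j=1^J a_j^r = r∫_0^2 λ^(r-1) |S_λ| dλ≲ r∫_0^2 λ^(r-5) dλ = r 2^(r-4)/(r-4), which is the desired bound with a constant depending only on r. With this recorded, we return to the proof of the endpoint estimate.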
Let (k_i)_i=0^I be the increasing sequence of all k_i such that the power 2^k_i occurs in the sequence (t_j)_j=1^J. We have I≤ J. It then suffices to show the short and long variation bounds ∑_i=0^I ∑_j: 2^k_i< t_j≤ 2^k_i+1M^χ_t_j(f_0,f_1,f_2) - M^χ_t_j-1(f_0,f_1,f_2)_2^2 ≲ J^1/2 , ∑_i=1^IM^χ_2^k_i(f_0,f_1,f_2) - M^χ_2^k_i-1(f_0,f_1,f_2)_2^2 ≲ J^1/2 . We first discuss the short variation (<ref>). We denote Tχ(s):= (sχ(s))', so that (Tχ)_(t)(s)=-t∂_t (χ_(t)(s)), and we will use T throughout the section. By the fundamental theorem of calculus and the Cauchy-Schwarz inequality, we have for x∈^3 and every 1≤ i ≤ I, ∑_j: 2^k_i< t_j≤ 2^k_i+1 |M^χ_t_j(f_0,f_1,f_2)(x)-M^χ_t_j-1(f_0,f_1,f_2)(x)|^2 ≤∫_1^2 ( M_2^k_it^Tχ(f_0,f_1,f_2)(x))^2 dt/t . It then suffices to show ∑_i=0^I∫_ℝ^3∫_1^2 ( M_2^k_it^Tχ(f_0,f_1,f_2)(x) )^2 dt/t dx ≲ J^1/2. Expanding the square and moving the integral in t outside, the left-hand side becomes ∫_1^2∑_i=0^I ∫_ℝ^5[∏_n=0^2 f_n(x+ue_n) f_n(x+ve_n) ] (Tχ)_(2^k_it) (u) (Tχ)_(2^k_it) (v) dx du dv dt/t. The expression (<ref>) takes the form ∫_1^2 Λ_D_1,K_t((f_s)_s∈ S) dt/t, where for s=(k,j)∈{0,1,2}×{0,1} and y=(y_0,y_1,y_2) we have set f_s(y)=f_k(y-(y_0+y_1+y_2-y_k)e_k ) and K_t(u,v) := ∑_i=0^I (Tχ)_(2^k_it)(u) (Tχ)_(2^k_it)(v). Indeed, writing x=(x_0,x_1,x_2) and changing variables u=x_3^0-x_0-x_1-x_2 v=x_3^1-x_0-x_1-x_2, we obtain with the projections Π_s of the datum D_1, f_(k,0)(Π_(k,0)(x,x_3^0, x_3^1))=f_k(x+ue_k) f_(k,1)(Π_(k,1)(x,x_3^0, x_3^1))=f_k(x+ve_k) It suffices to prove bounds uniformly for fixed t∈ [1,2] on the integrand of (<ref>). For this we apply Corollary <ref> with the sequence (k_i)_i=0^I and ϕ_i suitable multiples of (Tχ)_(2^k_it) and use supp(Tχ)⊂ [-1,-2^-1]∪[2^-1,1] , |Tχ (u)| ≲ (1+|u|)^-20. This proves (<ref>). Next, we prove the long variation bound (<ref>). Recalling the universal pair (χ,ϕ), by the triangle inequality, it suffices to show ∑_i=1^I M^χ_2^k_i-1(f_0,f_1,f_2)-M^ϕ_2^k_i(f_0,f_1,f_2) _2^2 ≲ J^1/2, ∑_i=1^IM^χ_2^k_i(f_0,f_1,f_2) - M^ϕ_2^k_i(f_0,f_1,f_2)_2^2 ≲ J^1/2 . We first prove (<ref>). We expand out the square of the L^2 norm to reduce matters to estimating ∑_i=1^I ∫_ℝ^5[∏_n=0^2 f_n(x+ue_n) f_n(x+ve_n) ] (χ_(2^k_i-1)-ϕ_(2^k_i))(u) (χ_(2^k_i-1)-ϕ_(2^k_i))(v) dx du dv. Performing the same change of variables in the Brascamp-Lieb datum as in (<ref>), we rewrite it as Λ_D_1,K((f_s)_s∈ S) with K(u,v)=∑_i=1^I (χ_(2^k_i-1)-ϕ_(2^k_i))(u) (χ_(2^k_i-1)-ϕ_(2^k_i))(v). We estimate this with Corollary <ref> of Propositions <ref> and <ref>, using that χ is a left window and ϕ is a right window, and after splitting the sum into even and odd indices j to assure spacing of the sequences k_j and l_j. This completes the discussion of (<ref>). Similarly, estimating (<ref>) reduces to estimating a form (<ref>) with kernel K(u,v)=∑_i=1^I (χ_(2^k_i)-ϕ_(2^k_i))(u) (χ_(2^k_i)-ϕ_(2^k_i))(v). This is done with Corollary <ref>. This completes the discussion of (<ref>) and thus the discussion of (<ref>). Next, we consider the decaying lacunary pieces near the origin (<ref>). We define φ_3,l(x):= 2^l(φ_2,l)_(2^-l)(x) and we replace t_j by 2^l t_j, using that the sequence t_j was arbitrary, to turn (<ref>) into ∑_j=1^JM^φ_3,l_t_j(f_0,f_1,f_2) - M^φ_3,l_t_j-1(f_0,f_1,f_2)_2^2 ≲ J^1/2. Analogously to our discussion of (<ref>), we pass to short and long variation. The short variation we estimate analogously using in place of (<ref>) and (<ref>) supp(Tφ_3,l)⊂ [-1,-2^-1]∪[2^-1,1] |Tφ_3,l(u)| ≲ (1+|u|)^-20, which follows because φ-cχ vanishes at the origin. 
This completes the estimate for the short variation. The long variation we expand similarly as (<ref>) above into Λ(f_0,f_1,f_2):= ∑_i=1^I ∫_ℝ^5[∏_n=0^2 f_n(x+ue_n) f_n(x+ve_n) ] ×((φ_3,l)_(2^k_i-1)-(φ_3,l)_(2^k_i))(u) ((φ_3,l)_(2^k_i-1)-(φ_3,l)_(2^k_i))(v) dx du dv. By the distributive law, (<ref>) is the difference of the two terms of the form ∑_i=1^I ∫_ℝ^5[∏_n=0^2 f_n(x+ue_n) f_n(x+ve_n) ] (φ_3,l)_(2^m_i)(u) ((φ_3,l)_(2^k_i-1)-(φ_3,l)_(2^k_i))(v) dx du dv, with m_i = k_i and with m_i = k_i-1, respectively. We write (<ref>) as ∑_i=1^I ∫_ℝ^3[∫_∏_n=0^2 f_n(x+ue_n) (φ_3,l)_(2^m_i)(u) du ] ×[∫_∏_n=0^2 f_n(x+ve_n) ((φ_3,l)_(2^k_i-1)-(φ_3,l)_(2^k_i))(v) dv] dx and apply the Cauchy-Schwarz inequality in x and in the summation. This gives Λ(f_0,f_1,f_2)≤Λ(f_0,f_1,f_2)^1/2Λ(f_0,f_1,f_2)^1/2 with Λ(f_0,f_1,f_2)= [ ∑_i=1^I ∫_ℝ^5[ ∏_n=0^2 f_n(x+ue_n) f_n(x+ve_n) ] (φ_3,l)_(2^m_i)(u)(φ_3,l)_(2^m_i)(v) dxdudv ]^1/2. By bootstrapping, it suffices to prove a bound on Λ(f_0,f_1,f_2) in place of Λ(f_0,f_1,f_2). This should be compared with the integrand in (<ref>) for fixed t. By the same change of variables as there, (<ref>) equals Λ_D_1,K((f_s)_s∈ S) with K(u,v) = ∑_i=1^I (φ_3,l)_(2^m_i)(u)(φ_3,l)_(2^m_i)(v). Applying Corollary <ref> of Proposition <ref> yields a bound for this term and finishes the proof of (<ref>). The assumptions of Corollary <ref> are satisfied, which can be verified similarly as inequalities (<ref>) and (<ref>) observed earlier. This completes the proof of the estimate (<ref>). Now we prove (<ref>). We write φ_0,k = 1_(-∞,0)*θ_(2^k) = 2^k θ_(2^k) , where θ:=1_(-∞,0)*θ is the primitive of θ. It has high order decay since θ has integral zero. By rescaling, it suffices to show ∑_j=1^JM^θ_t_j(f_0,f_1,f_2) - M^θ_t_j-1(f_0,f_1,f_2)_2^2 ≲ J^1/2. This now follows in the same way as (<ref>), using (θ) ⊂ [-1,-2^-2]∪ [2^-2,1] and high order decay of θ. This completes the proof of (<ref>). It remains to prove (<ref>). Define φ_4,k(u):= 2^kθ(u-2^-k). We have φ_1,k(u)=(2^kθ(u-2^-k))_(2^k) = (φ_4,k)_(2^k)(u) . By rescaling, it suffices to show 2^-γ k∑_j=1^JM^φ_4,k_t_j(f_0,f_1,f_2) - M^φ_4,k_t_j-1(f_0,f_1,f_2)_2^2 ≲ J^1/2. We split into long and short variation as in (<ref>). To estimate the short variation, we use the fundamental theorem of calculus and the Cauchy-Schwarz inequality, which bounds ∑_i=0^I ∑_j: 2^k_i< t_j ≤ 2^k_i+1 2^-γ kM^φ_4,k_t_j(f_0,f_1,f_2) - M^φ_4,k_t_j-1(f_0,f_1,f_2)_2^2 ≲[[ ∑_i=0^I 2^-(γ+1)k∫_ℝ^3∫_1^2 ( M_2^k_it^φ_4,k(f_0,f_1,f_2)(x) )^2 dt/t dx ] ×[ ∑_i=0^I 2^(1-γ)k∫_ℝ^3∫_1^2 ( M_2^k_it^Tφ_4,k(f_0,f_1,f_2)(x) )^2 dt/t dx ]]^1/2. We are going to estimate each factor in the square brackets as ≲ J^1/2. We begin with the first factor, that we expand as ∑_i=0^I ∫_ℝ^5[∏_n=0^2 f_n(x+ue_n) f_n(x+ve_n) ] ×[2^-(γ+1)k∫_1^2 (φ_4,k)_(2^k_it) (u) (φ_4,k)_(2^k_it) (v) dt/t] dx du dv. Similarly as (<ref>) and (<ref>), this takes the form Λ_D_1,K((f_s)_s∈ S) with K(u,v) = ∑_i=0^I [ ∫_1^2 2^-(γ+1)k (φ_4,k)_(t) (u) (φ_4,k)_(t) (v) dt/t]_(2^k_i) =: ∑_i=0^I Φ_(2^k_i)(u,v). We apply Proposition <ref> with λ=2-γ>1, using that Φ is symmetric and positive as a superposition of positive terms, and using (Φ) ⊆ ([-1,-2^-3]∪ [2^-3,1])^2, |Φ(u,v)| ≤ 2^-(γ+1)k∫_1^2 |φ_4,k(t^-1u)φ_4,k(t^-1v)| dt ≤ 2^(1-γ)k∫_1^2|θ(t^-1(u-t2^-k))θ(t^-1(v-t2^-k))| dt ≲ 2^(1-γ)k∫_1^2 (1+|u+v-t2^1-k|)^-10 (1+|u-v|)^-10 dt ≲ 2^(2-γ)k∫_2^1-k^2^2-k (1+|u+v-t|)^-10 (1+|u-v|)^-10 dt ≲ 2^(2-γ)k(1+2^k|u+v|)^-10 (1+|u-v|)^-10. 
Here we estimated the integral for |u+v|<2^3-k by the integral over ℝ and for |u+v|>2^3-k we estimated the integrand by its supremum norm. We used along the way decay estimates of θ that it inherits from the window χ. We turn to the second factor in (<ref>). We proceed as above, in place of (<ref>) we compute 2^(1-γ)k|∫_1^2 t∂_t((φ_4,k)_(t)(u))t∂_t((φ_4,k)_(t)(v)) dt/t| = 2^(3-γ)k|∫_1^2 t∂_t(t^-1θ(t^-1(u-t2^-k)))t∂_t(t^-1θ(t^-1(v-t2^-k))) dt/t |. Applying Leibniz and chain rules, most terms will be analogous to the above. However, when a derivative falls on t2^-k, we obtain a factor 2^-k. The worst term is the one where both derivatives fall on the t2^-k. Thus we get the estimate ≲ 2^(1-γ)k∫_1^2 (1+|u+v-t2^1-k|)^-10 (1+|u-v|)^-10 dt/t. As above, this is estimated by ≲ 2^(2-γ)k(1+2^k|u+v|)^-10 (1+|u-v|)^-10. To treat the long variation, we proceed as for (<ref>), where after a bootstrapping estimate we are led to estimate, analogously to (<ref>), Λ_D_1,K((f_s)_s∈ S) with K(u,v) = 2^-γ k∑_i=1^I(φ_4,k)_(2^m_i)(u)(φ_4,k)_(2^m_i)(v). Similarly as in (<ref>) we estimate 2^-γ k|φ_4,k(u)φ_4,k(v)| = 2^(2-γ)k |θ(u-2^-k)θ(v-2^-k)| ≲ 2^(2-γ)k(1+|u+v-2^1-k|)^-10 (1+|u-v|)^-10 ≲ 2^(2-γ)k(1+2^k|u+v|)^-10 (1+|u-v|)^-10. Applying Proposition <ref> again completes the proof of (<ref>). § PROOF OF PROPOSITION <REF> USING PROPOSITIONS <REF> AND <REF> Let λ=3/2. Let k ≤ 0, let J be a positive integer and (k_j)_j=1^J a strictly increasing sequence of integers. By splitting into a hundred subsequences, using the triangle inequality to separate these sequences, we may assume k_j+100≤ k_j+1 for 1≤ j<J. Let Φ_j for 1≤ j≤ J be as in Proposition <ref>. In particular, Φ_j(ξ,-ξ) is continuous and even in ξ by the symmetry assumption on the kernel Φ_j. Furthermore, we claim that Φ_j(ξ,-ξ) is positive for all ξ∈. To see this, first apply Plancherel to the positivity assumption (<ref>) in Proposition <ref> to conclude 0≤∫_^2f(-ξ)f(η)Φ_j(ξ,η) dξ dη for all Schwartz functions f. Now we see the claim by using testing functions f which approximate the Dirac delta at ξ. As Φ_j_1 has a universal bound, for a suitable universal constant c we have Φ_j(ξ,-ξ)≤ c(ϕ_0,j-ϕ_1,j)(ξ)^2 with even real functions ϕ_0,j and ϕ_1,j, such that (ϕ_0,j)_2^-k_j+25 is a left window and (ϕ_1,j)_2^-k_j-25 is a right window. Moreover, there exists a real even function ψ_j so that ψ_j(ξ)^2:=2c (ϕ_0,j-ϕ_1,j)(ξ)^2- Φ_j(ξ,-ξ). Namely, outside the support of ξ↦Φ_j(ξ,-ξ), the function ψ_j can be chosen to equal √(2c)(ϕ_0,j-ϕ_1,j), while on a neighborhood of this support, the function on the right-hand side is at least c and thus has a square root. The function (ψ_j)_(2^-k_j+25) has support in [-1,1]. To understand derivative bounds for this function, let F(ξ)=(Φ_j)_(2^-k_j)(ξ,-ξ). Then we have, for 0≤ a≤ 8, |F^(a)(ξ)|= |(-2π i)^a ∫_^2 (Φ_j)_(2^-k_j)(u,v) (u-v)^a e^-2π i ξ (u - v) du dv |≲ 2^(λ-1) k, by (<ref>). Thus, |((ψ_j)_(2^-k_j))^(a)| ≲ 1, as one can see outside the support of (Φ_j)_(2^-k_j) from bounds for derivatives of the windows and on the support using a lower bound on the right-hand side of (<ref>) and upper bounds on the derivative of the right-hand side of (<ref>). To show a bound on Λ_D_1,K((f_s)_s∈ S) with K=∑_j=1^JΦ_j, which is positive, it suffices to show a bound on Λ_D_1,K_0((f_s)_s∈ S) with K_0=∑_j=1^J Φ_j + ψ_j⊗ψ_j because the form associated with the datum D_1 and the difference K_0-K is positive as well. By Proposition <ref>, the form Λ_D_1,K_1((f_s)_s∈ S) is bounded, where K_1 =2c∑_j=1^J (ϕ_0,j-ϕ_1,j)⊗ (ϕ_0,j-ϕ_1,j).
Hence it suffices to prove a bound on Λ_D_1,K_3((f_s)_s∈ S), where K_3=K_0-K_1. This is done by an application of Proposition <ref>. Note that we have on the diagonal K_3(ξ,-ξ)= ∑_j=1^J Φ_j(ξ,-ξ) + ψ_j(ξ)^2 - 2c (ϕ_0,j-ϕ_1,j)(ξ)^2=0. We verify the remaining assumptions of Proposition <ref> for Ψ_j:=Φ_j + ψ_j⊗ψ_j- 2c (ϕ_0,j-ϕ_1,j)⊗ (ϕ_0,j-ϕ_1,j). We have (Φ_j) ⊆ ([-2^-k_j+20,-2^-k_j-20]∪ [2^-k_j-20,2^-k_j+20])^2, ((ϕ_0,j-ϕ_1,j)⊗ (ϕ_0,j-ϕ_1,j) ) ⊆ ([-2^-k_j+25,-2^-k_j-26]∪ [2^-k_j-26,2^-k_j+25])^2, (ψ_j⊗ψ_j) ⊆ ([-2^-k_j+25,-2^-k_j-26]∪ [2^-k_j-26,2^-k_j+25])^2. Thus, (Ψ_j) ⊆{(ξ,η)∈^2: 2^-k_j-30 < |(ξ,η)| ≤ 2^-k_j+30}. Note also that, using in particular (<ref>), |(Φ_j)_(2^-k_j)(u,v)| ≲ 2^λ k (1+2^k|u+v|)^-10 (1+|u-v|)^-10, |((ϕ_0,j- ϕ_1,j)⊗ (ϕ_0,j- ϕ_1,j))_(2^-k_j)(u,v)| ≲ (1+|u+v|)^-4 (1+|u-v|)^-4, |(ψ_j⊗ψ_j)_(2^-k_j)(u,v)| ≲ (1+|u+v|)^-4 (1+|u-v|)^-4. Hence, |(Ψ_j)_(2^-k_j)(u,v)| ≲ 2^λ k (1+2^k|u+v|)^-4 (1+|u-v|)^-4+ (1+|u+v|)^-4 (1+|u-v|)^-4. The final claim now follows from Proposition <ref>. § PROOF OF PROPOSITION <REF> USING PROPOSITIONS <REF> AND <REF> Let J be a positive integer and (k_j)_j=1^J and (l_j)_j=1^J two finite sequences of integers with k_j+10< l_j for 1≤ j≤ J and l_j< k_j+1 for 1≤ j<J-1. By splitting the sequence into subsequences of even and odd j if necessary, we may assume without loss of generality that l_j+10<k_j+1 for each 1≤ j<J. Assume a tuple (f_s)_s∈ S as in (<ref>) and (<ref>) is given. Assume we are given ϕ_0,j and ϕ_1,j for each j such that (ϕ_0,j)_(2^-k_j) is a left window and (ϕ_1,j)_(2^-l_j) is a right window. Pick corresponding functions ψ_0,j and ψ_1,j so that the rescaled functions give universal pairs, and hence (1-ϕ_1,j)^2+(ψ_0,j)^2=1, (ϕ_0,j)^2+(1-ψ_1,j)^2=1. Then (1-ψ_1,1)^2+∑_j=1^J (ϕ_0,j-ϕ_1,j)^2+ ∑_j=1^J-1 (ψ_0,j-ψ_1,j+1)^2+(ψ_0,J)^2 =1. To see this, note that at every point at most one of the functions ϕ_0,j, ϕ_1,j, 1≤ j≤ J is neither 0 nor 1, and the functions ψ_0,j, ψ_1,j are neither zero nor one precisely when the respective function ϕ_1,j, ϕ_0,j is not zero or one. Therefore, at any point at most one pair (ψ_0,j, ϕ_1,j) or (ψ_1,j, ϕ_0,j) takes values other than zero and one, and we can apply (<ref>) or (<ref>) respectively. As Λ_D_1,K((f_s)_s∈ S) in Proposition <ref> is positive, it suffices to estimate its sum with another positive term, and thus it suffices to estimate Λ_D_1,K_1((f_s)_s∈ S) with K_1(ξ,η) = (1-ψ_1,1)(ξ) (1-ψ_1,1)(η)+∑_j=1^J (ϕ_0,j-ϕ_1,j)(ξ)(ϕ_0,j-ϕ_1,j)(η) + ∑_j=1^J-1 (ψ_0,j-ψ_1,j+1)(ξ) (ψ_0,j-ψ_1,j+1)(η)+ (ψ_0,J)(ξ)(ψ_0,J)(η). This can be rewritten in a more compressed form K_1=∑_j=0^2J (φ_0,j-φ_1,j)⊗ (φ_0,j-φ_1,j) where φ_0,0=1, for 1≤ j≤ J φ_0,2j-1=ϕ_0,j, φ_0,2j=ψ_0,j, φ_1,2j-1=ϕ_1,j, φ_1,2j-2=ψ_1,j, and φ_1,2J=0. Define for 1≤ j≤ J m_2j-2=k_j and for 1≤ j≤ J m_2j-1=l_j. Observe that for each 1≤ j≤ 2J-1 we have (φ_0,j)_(2^-m_j-1) is a left window and (φ_1,j)_(2^-m_j) is a right window. In order to apply Proposition <ref>, we introduce for 0≤ j≤ 2J the functions φ_2,j=(φ_1,j)_(2^-4). Observe that (φ_2,j)_2^4-m_j is a right window whenever 0 ≤ j ≤ 2J-1. We write for K_1 -∑_j=0^2J(φ_0,j -φ_2,j)⊗φ_1,j +φ_1,j⊗ (φ_0,j -φ_2,j) -∑_j=0^2J(φ_2,j -φ_1,j)⊗φ_1,j +φ_1,j⊗ (φ_2,j -φ_1,j) +∑_j=0^2Jφ_0,j⊗φ_0,j -φ_1,j⊗φ_1,j. In (<ref>), the bound for the sum of these terms over 1≤ j≤ 2J-1 follows from Proposition <ref>, applied to the sequence (m_j)_j=0^2J-1 and the rescaled windows φ_0,j, φ_1,j, φ_2,j for 1≤ j≤ 2J-1. The term for j=2J in (<ref>) vanishes. 
To deal with the term for j=0 in (<ref>), we use φ_0,0=1 and rewrite this term as -φ_1,0(η) +φ_2,0(ξ) φ_1,0(η) +φ_1,0(ξ) - φ_1,0(ξ) φ_2,0(η). Denoting by f_k, k=0,1,2, the functions defined via (<ref>) and using the change of variables as in (<ref>) and (<ref>), we estimate |Λ_D_1, φ_1,0⊗δ((f_s)_s∈ S)| = | ∫_^4[ ∏_k=0^2 f_k(x+ue_k)f_k(x) ] φ_1,0(u) dx du | ≤φ_1,0_1 ≲ 1 where for a fixed u we used Hölder's inequality in x and δ denotes the Dirac delta at the origin. Similarly, | Λ_D_1, φ_1,0⊗φ_2,0((f_s)_s∈ S)| = | ∫_^5[ ∏_k=0^2 f_k(x+ue_k)f_k(x+ve_k) ] φ_1,0(u)φ_2,0(v) dxdudv| ≤φ_1,0_1 φ_2,0_1 ≲ 1. By symmetry, this bounds the form associated with the j=0 summand in (<ref>). It remains to estimate the form associated with K_2 where K_2 is the sum of (<ref>), (<ref>). As K_1 is constant 1 on the diagonal ξ+η=0 by (<ref>) and the stick terms (<ref>) vanish on this diagonal, the function K_2 is still constant one on this diagonal. We define K_3 by K_3:=K_2-1. It suffices to prove bounds for the form associated with K_3, because K_2-K_3 is the Dirac delta and |Λ_D_1,K_2-K_3((f_s)_s∈ S)|= |∫_^3∏_k=0^2 f_k^2(x) dx |≤ 1, where the functions f_k are as in (<ref>). We rewrite K_3 as -∑_j=0^2J(φ_2,j -φ_1,j)⊗φ_1,j +φ_1,j⊗ (φ_2,j -φ_1,j) +∑_j=1^2Jφ_0,j⊗φ_0,j -φ_1,j-1⊗φ_1,j-1, where we have reshuffled (<ref>) and used ϕ_0,0=1 and ϕ_1,2J=0. Bounds for the sum of (<ref>) and (<ref>) follow from Proposition <ref>. Indeed, for each 0≤ j ≤ 2J, ((φ_2,j -φ_1,j)⊗φ_1,j) ⊆ ([-2^-m_j+4,-2^-m_j-1]∪ [2^-m_j-1,2^-m_j+4])× [-2^-m_j,2^-m_j]. By symmetry, the j-th summand in (<ref>) is supported in {(ξ,η)∈^2: 2^-m_j-30 < |(ξ,η)| ≤ 2^-m_j+30} =: A. The j-th summand also satisfies a bound by |(φ_2,j -φ_1,j)⊗φ_1,j +φ_1,j⊗ (φ_2,j -φ_1,j)|_(2^-m_j) (u,v) ≲ (1+|u+v|)^-4 (1+|u-v|)^-4 due to the functions being windows. Similarly, for 0≤ j ≤ 2J-1 we have (φ_0,j+1⊗φ_0,j+1- φ_1,j⊗φ_1,j) ⊆ [-2^-m_j,2^-m_j]^2∖ [-2^-m_j-1,2^-m_j-1]^2 ⊆ A and the decay |φ_0,j+1⊗φ_0,j+1- φ_1,j⊗φ_1,j|_(2^-m_j)(u,v) ≲ (1+|u+v|)^-4 (1+|u-v|)^-4. Thus, bounds for Λ_D_1, K_3 follow from Proposition <ref>. § PROOF OF PROPOSITION <REF> USING LEMMA 3 IN Given a regular 3× 3 matrix A, let D_A be the datum defined in (<ref>). We recall the following lemma, which is a special instance of a more general result proved in <cit.>. For all 0<ε<1, there exists a constant C such that the following holds. Let A be a regular 3 × 3 matrix which differs from -I by at most one row and satisfies |detA| >ε and A_HS≤ε^-1, where A_HS stands for the Hilbert-Schmidt norm of A. With S as in the datum D_A, let (f_s)_s∈ S be a tuple of real-valued Schwartz functions such that f_s_8=1 for all s∈ S. Let i=1,2,3 and let K be the kernel satisfying K(Π x)= ∫_0^∞∫_^3 (∂_i∂_i+3 g)_(t)(x+((-Ap^T)^T,p)) dp dt/t. Then |Λ_D_A,K((f_s)_s∈ S)| ≤ C. Let λ=3/2. Let k≤ 0 be given. Let an integer J ≥ 1 and a strictly increasing sequence (k_j)_j=1^J of integers be given. Let (Φ_j)_j=1^J and K be given as in the proposition. Let (f_s)_s∈ S be given as in (<ref>) and (<ref>). Set f_k:=f_(k,0)=f_(k,1) for each k=0,1,2. Let θ: → be a function whose Fourier transform is supported in [-2,-1/2] ∪ [1/2,2] and whose derivatives up to order 8 are ≲ 1. Assume further that ∫_0^∞θ(rξ) dr/r=1 for all ξ≠ 0. We do the two parameter lacunary decomposition of K in directions ξ+η and ξ-η and collect these pieces into lacunary cones away from the line ξ+η=0 centered at the origin. In detail, we write K(ξ,η)=∫_0^∞K^(z) (ξ,η) dz/z with K^(z)(ξ,η)=∫_0^∞K(ξ,η) θ(t(ξ-η)) θ(z^-1t(ξ+η)) dt/t. 
We break the integral in (<ref>) into the integrals over the domains (0,1) and (1,∞) and do the estimates for these integrals separately. We begin with the case z∈ (0,1). Here we do an estimate for each z separately and show for all z< 1 that |Λ_D_1,K^(z)((f_s)_s∈ S)| ≲ z^(λ-1)^2/2λ J^1/2, which is an integrable upper bound with respect to the measure dz/z. Fix z∈ (0,1). Let g be the one-dimensional Gaussian and let h=g'. Set ω=(h)^-1θ. The function ω satisfies similar support and derivative estimates as θ since h and its derivatives are essentially constant on the support of θ. In addition, let ϕ be a function supported in the annulus 1/16≤ |(ξ,η)| ≤ 16 such that its derivatives up to order 8 are ≲ 1 and ϕ(ξ,η) g(ξ) g(η)=1 if 1/8 ≤ |(ξ,η)| ≤ 8. Then, for all ξ,η∈, θ(ξ-η) θ(z^-1(ξ+η)) = θ(ξ-η)ω (z^-1(ξ+η)) h(z^-1(ξ+η))ϕ(ξ,η) g(ξ) g(η). Note that this equality holds since the left-hand side is supported in the set where 1/8 ≤ |(ξ,η)| ≤ 8. Indeed, on the support of the left-hand side of (<ref>) we have |ξ+η| ≤ 2z ≤ 2 and 1/2 ≤ |ξ-η| ≤ 2. This yields (<ref>). For z∈ (0,1) and t>0 we define the function w^z,t via w^z,t(ξ,η)= K(t^-1(ξ,η))ϕ(ξ,η)θ(ξ-η) ω (z^-1(ξ+η)). Let Π be the projection associated with the datum D_1. Using the Fourier inversion formula and equations (<ref>), (<ref>) and (<ref>), we write K^(z)(Π x) as ∫_0^∞∫_^2w^z,t(t(ξ,η)) h(z^-1t(ξ+η)) g(tξ) g(tη) e^2π i( ξ (x_3^0-x_0-x_1-x_2)+η(x_3^1-x_0-x_1-x_2)) dξ dηdt/t. Since the Fourier transform of w^z,t is supported in the set where 1/8≤ |(ξ,η)| ≤ 8, we observe that w^z,t vanishes unless t is in the set M:= ⋃_j=1^J [2^k_j-33 , 2^k_j+33]. We may thus restrict the region of t-integration in (<ref>) to M. Further, we may interpret the inner integral in (<ref>) as the integral of the Fourier transform of the function (y_0,y_1,y_2,y_3,y_4) ↦ w^z,t_(t)(y_0+x_0+x_1,y_1+x_0+x_1)h_(z^-1t)(y_2+x_2) g_(t)(y_3+x_3^0) g_(t)(y_4+x_3^1) over the hyperplane {(-ξ,-η,-ξ-η,ξ,η): ξ∈,  η∈}. It is therefore up to universal multiplicative constant equal to the integral of the function itself over the orthogonal complement {(p+q-r,q-r,r,p+q,q): p, q, r∈}. The form Λ_D_1,K^(z)((f_s)_s∈ S) can then be rewritten as ∫_M ∫_^8[∏_s∈ S f_s(Π_s x)] w^z,t_(t)(x_0+x_1+p+q-r, x_0+x_1+q-r) × h_(z^-1t)(x_2+r) g_(t)(x_3^0+p+q) g_(t)(x_3^1+q) dx dp dq dr dt/t. We write the integral in x_2 as the innermost and use the Cauchy-Schwarz inequality in the remaining variables. This bounds (<ref>) by the geometric mean of ∫_M∫_^7[∏_i=0,1 |f_2(x_0,x_1,x_3^i)|^2 ] |w^z,t_(t)(x_0+x_1+p+q-r, x_0+x_1+q-r)| × g_(t)(x_3^0+p+q) g_(t)(x_3^1+q) dx_0 dx_1 dx_3^0 dx_3^1 dp dq dr dt/t and ∫_0^∞∫_^7[ ∫_ [ ∏_i=0,1f_0(x_3^i,x_1,x_2)f_1(x_0,x_3^i,x_2) ] h_(z^-1t)(x_2+r) dx_2]^2 × |w^z,t_(t)(x_0+x_1+p+q-r, x_0+x_1+q-r)| × g_(t)(x_3^0+p+q) g_(t)(x_3^1+q) dx_0 dx_1 dx_3^0 dx_3^1 dp dq dr dt/t. In order to bound (<ref>) and (<ref>), we prove a pointwise estimate for w^z,t. We first claim |w^z,t(u,v)| ≲ z^λ. To verify the claim, we observe that since K vanishes on the diagonal ξ+η=0, the function K_(t^-1)∗ϕ has the same property. Therefore |K_(t^-1)∗ϕ(ξ,η)| =|K_(t^-1)∗ϕ(ξ,η)-K_(t^-1)∗ϕ((ξ-η)/2,-(ξ-η)/2)| =|∫_^2 K_(t^-1)∗ϕ(u,v) e^-π i(ξ-η)(u-v) (e^-π i(ξ+η)(u+v)-1) du dv| ≲∫_^2 |K_(t^-1)∗ϕ(u,v)| min{|ξ+η||u+v|,1} du dv ≤ |ξ+η|^λ-1∫_^2 |K_(t^-1)∗ϕ(u,v)|   |u+v|^λ-1 du dv, as λ-1 ∈ (0,1). We observe that |K_(t^-1)∗ϕ(u,v)| ≲ 2^λ k(1+2^k|u+v|)^-4 (1+|u-v|)^-4 + (1+|u+v|)^-4 (1+|u-v|)^-4, thanks to the derivative estimates on ϕ, to the support properties of ϕ and Φ_j and to (<ref>). 
Therefore, ∫_^2 |K_(t^-1)∗ϕ(u,v)|  |u+v|^λ-1 du dv ≲ 2^k∫_^2 (1+2^k|u+v|)^λ-5 (1+|u-v|)^-4 du dv + ∫_^2 (1+|u+v|)^λ-5 (1+|u-v|)^-4 du dv ≲ 1. Combining this with (<ref>) and passing to w^z,t, we thus obtain |w^z,t(ξ,η)| ≲ z^λ-1. Estimating the Fourier inversion formula by L^1 → L^∞ bounds, inequality (<ref>) follows. We note that the right-hand side of (<ref>) has the desired decay as z tends to 0, however, it does not have a good behavior with respect to (u,v). We therefore derive a yet another estimate for w^z,t in which the right-hand side possesses merely L^1 scaling in z but decays sufficiently fast as |(u,v)| tends to infinity. We set F(u,v)=ω_(z^-1)((u+v)/2) θ((u-v)/2). By (<ref>), we have w^z,t=K_(t^-1)∗ϕ∗ F. Recall that the functions ω and θ are supported in [-2,2] and have derivatives up to order 8 bounded by ≲ 1. Using (<ref>), we therefore obtain |w^z,t(u,v)| ≲ 2^λ k(1+2^k|u+v|)^-4 (1+|u-v|)^-4 +z(1+z|u+v|)^-4 (1+|u-v|)^-4 if 2^k ≤ z and |w^z,t(u,v)| ≲ z(1+z|u+v|)^-4 (1+|u-v|)^-4 if z ≤ 2^k. Finally, we write |w^z,t|=|w^z,t|^λ-1/2λ |w^z,t|^λ+1/2λ and use the estimate (<ref>) for the first factor and the estimates (<ref>) and (<ref>) for the second factor. This yields the desired bounds |w^z,t(u,v)| ≲ z^(λ-1)^2/2λ [z(1+z|u+v|)^-2-2/λ+2^k(1+2^k|u+v|)^-2-2/λ] (1+|u-v|)^-2-2/λ if 2^k ≤ z, and |w^z,t(u,v)| ≲ z^(λ-1)^2/2λ z(1+z|u+v|)^-2-2/λ (1+|u-v|)^-2-2/λ if z ≤ 2^k. Having inequalities (<ref>) and (<ref>) at our disposal, we proceed to bound the term (<ref>). We observe that this term can be written as ∫_M∫_^5[∏_i=0,1 |f_2(x_0,x_1,x_3^i)|^2 ] × [|w^z,t| ∗ (g ⊗ g)]_(t)(x_3^0-x_0-x_1+r, x_3^1-x_0-x_1+r) dx_0 dx_1 dx_3^0 dx_3^1 dr dt/t. Applying the Cauchy-Schwarz inequality, we bound (<ref>) with v_z,t:=[|w^z,t| ∗ (g ⊗ g)]_(t) by ∫_M ∏_i=0,1[∫_^5|f_2(x_0,x_1,x_3^i)|^4 v_z,t(x_3^0-x_0-x_1+r,x_3^1-x_0-x_1+r) dx_0 dx_1 dx_3^0 dx_3^1 dr ]^1/2 dt/t. The product of the square roots of the integrals for i=0,1 equals f_2_4^4 v_z,t_1≲ z^(λ-1)^2/2λ. The last identity can be seen by integrating first in x_3^1-i and then in r to obtain the L^1 norm of v_z,t. What remains is then the L^4 norm of f_2 raised to the fourth power. Using that ∫_M dt/t≲ J, we deduce that (<ref>) is bounded by a multiple of z^(λ-1)^2/2λJ. We next focus on the term (<ref>). Using the estimates (<ref>) and (<ref>), bounding the form (<ref>) reduces to estimating ∫_0^∞∫_^7[ ∫_[ ∏_i=0,1 f_0(x_3^i,x_1,x_2) f_1(x_0,x_3^i,x_2) ] h_(z^-1t)(x_2+r) dx_2 ]^2 × t^-1γ (1+t^-1γ|x_0+x_1+p/2+q-r|)^-2-2/λ t^-1(1+t^-1|p|)^-2-2/λ × g_(t)(x_3^0+p+q) g_(t)(x_3^1+q) dx_0 dx_1 dx_3^0 dx_3^1 dp dq dr dt/t, where γ=z, or γ=2^k if 2^k ≤ z. We will prove a bound independent of z and k, which will bound (<ref>) by ≲ z^(λ-1)^2/2λ thanks to the extra factor z^(λ-1)^2/2λ in (<ref>) and (<ref>). We dominate t^-1γ (1+t^-1γ|x_0+x_1+p/2+q-r|)^-2-2/λ t^-1(1+t^-1|p|)^-2-2/λ ≲ t^-2γ (1+t^-1|(γ(x_0+x_1+p/2+q-r), 2p)|)^-2-2/λ ≲∫_2^∞ g_(αγ^-1t)(x_0+x_1+p/2+q-r) g_(α t)(2p) dα/α^1+2/λ. It thus suffices to estimate the form ∫_0^∞∫_^7[ ∫_[ ∏_i=0,1 f_0(x_3^i,x_1,x_2) f_1(x_0,x_3^i,x_2) ] h_(z^-1t)(x_2+r) dx_2 ]^2 × g_(αγ^-1t)(x_0+x_1+p/2+q-r) g_(α t)(2p) g_(t)(x_3^0+p+q) g_(t)(x_3^1+q) dx_0 dx_1 dx_3^0 dx_3^1 dp dq drdt/t with a bound ≲α and then integrate over α, using that 2/λ>1. We claim that ∫_ g_(αγ^-1t)(x_0+x_1+p/2+q-r) g_(α t)(2p) g_(t)(x_3^0+p+q) dp ≲ g_(2^1/2αγ^-1t)(x_0+x_1+q-r) g_(α t)(x_3^0+q). Indeed, we have g_(α t)(2p) ≲ e^-πγ^2 α^-2 t^-2 (p/2)^2 g_(2^-1/2α t)(p). 
The elementary inequality e^-2(a+b)^2 e^-2b^2≤ e^-a^2 yields g_(αγ^-1t)(x_0+x_1+p/2+q-r) e^-πγ^2 α^-2 t^-2 (p/2)^2≲ g_(2^1/2αγ^-1t)(x_0+x_1+q-r). Thus, the left-hand side of (<ref>) is bounded by g_(2^1/2αγ^-1t)(x_0+x_1+q-r) (g_(2^-1/2α t)∗ g_(t))(x_3^0+q) ≲ g_(2^1/2αγ^-1t)(x_0+x_1+q-r) g_(α t)(x_3^0+q), as desired. Expressing further g_(2^1/2αγ^-1t)(x_0+x_1+q-r) as a convolution of two Gaussians and using the evenness of the Gaussian, (<ref>) is bounded by α∫_0^∞∫_^7[ ∫_[ ∏_i=0,1 f_0(x_3^i,x_1,x_2) f_1(x_0,x_3^i,x_2) ] h_(z^-1t)(x_2+r) dx_2 ]^2 × g_(αγ^-1t)(x_0+p) g_(αγ^-1t)(x_1-p+q-r) g_(α t)(x_3^0+q) g_(α t)(x_3^1+q) dx_0 dx_1 dx_3^0 dx_3^1 dp dq dr dt/t. After renaming of variables, naming the variable x_2 that is twice an integration variable once as x_2^0 and once as x_2^1, then renaming the variables x_0,x_1,x_2^0,x_2^1,x_3^0,x_3^1 in this order as x_1^1,x_1^0,x_3^0,x_3^1,x_2^0,x_2^1, and finally introducing functions f_0(a,b,c)=f_0(b,a,c) and f_1=f_1, we write (<ref>) as α∫_0^∞∫_^7[ ∏_i=0,1∫_f_0(x_1^0, x_2^0,x_3^i) f_1(x_1^1,x_2^0,x_3^i) f_0(x_1^0,x_2^1,x_3^i) f_1(x_1^1,x_2^1,x_3^i) h_(z^-1t)(x_3^i+r) dx_3^i ] × g_(αγ^-1t)(x_1^0-p+q-r) g_(αγ^-1t)(x_1^1+p) g_(α t)(x_2^0+q) g_(α t)(x_2^1+q) dx_1^0 dx_1^1 dx_2^0 dx_2^1 dp dq dr dt/t. Let S and (Π_s)_s∈ S be as in the datum D_A. Introducing f_s=f_s(1) for s ∈ S, we may write the last display as α∫_0^∞∫_^9[ ∏_s∈ S f_s(Π_s x) ] g_(αγ^-1t)(x_1^0-p+q-r) g_(αγ^-1t)(x_1^1+p) × g_(α t)(x_2^0+q) g_(α t)(x_2^1+q) h_(z^-1t)(x_3^0+r) h_(z^-1t)(x_3^1+r) dx dp dq dr dt/t. Let V: ^3 →^3 be a mapping given by V(v_0,v_1,v_2) = (αγ^-1v_0, α v_1,z^-1v_2). We perform the change of variables with respect to this mapping for each of the triples (p,q,r), (x_1^0,x_2^0,x_3^0) and (x_1^1,x_2^1,x_3^1). After this transformation, the above form becomes α∫_0^∞∫_^9[∏_s∈ Sα^1/4γ^-1/8 z^-1/8 f_s(V Π_s x) ] g_(t)(x_1^0-p+γ q-γ z^-1α^-1r) g_(t)(x_1^1+p) × g_(t)(x_2^0+q) g_(t)(x_2^1+q) h_(t)(x_3^0+r) h_(t)(x_3^1+r) dx dp dq dr dt/t. This can be recognized as α multiple of Λ_D_A,K(( α^1/4γ^-1/8 z^-1/8 f_s∘ V)_s∈ S), where K has the form (<ref>) with i=3 and A= ( [ 1 -γ γ z^-1α^-1; 0 -1 0; 0 0 -1 ] ). Since 0<γ≤ z ≤ 1 ≤α, the matrix A satisfies the assumption (<ref>) with ε=5^-1/2. Observing further that the function α^1/4γ^-1/8 z^-1/8 f_s∘ V has the same L^8-norm as f_s, we deduce from Lemma <ref> that (<ref>) is bounded by ≲α. This yields the desired bound for (<ref>). Combining the estimates for (<ref>) and (<ref>), we obtain (<ref>). It remains to consider the part of the integral in (<ref>) where z∈ (1,∞). Let φ be the function defined via its Fourier transform by φ(ξ)=∫_1^∞θ(zξ) dz/z. Then we can write ∫_1^∞K^(z) (ξ,η) dz/z =∫_0^∞K(ξ,η) φ(t(ξ-η)) θ(t(ξ+η)) dt/t. Formally, this expression has the same form as (<ref>) when z=1, except that the function θ is at one occurrence replaced by φ. Due to this similarity, we will denote (<ref>) by K^(1)(ξ,η). Note that θ is supported in [-2,-1/2] ∪ [1/2,2], φ is supported in [-2,2] and the support properties of θ and φ ensure that φ(ξ-η) θ(ξ+η) is supported in the set where 1/8 ≤ |(ξ,η)| ≤ 8. We may therefore apply an argument analogous to the case z∈ (0,1), arriving at the estimate |Λ_D_1,K^(1)((f_s)_s∈ S)| ≲ J^1/2. Combining this with (<ref>) yields the conclusion of the proposition. § PROOF OF PROPOSITION <REF> USING PROPOSITIONS <REF> AND  <REF> Let (k_j)_j=0^J be a finite increasing sequence of integers with k_j-1+10 ≤ k_j. 
For 1≤ j≤ J, let ϕ_0,j, ϕ_1,j,ϕ_2,j be rescaled respective left or right windows as in the proposition, and define K as in (<ref>). Let (f_s)_s∈ S be a tuple of functions as in (<ref>) and (<ref>). Set f_k:=f_(k,0)=f_(k,1) for each k=0,1,2. Let (χ,ϕ) be a universal pair and define χ_j:=χ_(2^k_j-2) and ϕ_j:=ϕ_(2^k_j-2). Define ϕ_3,j:=χ_j-1-ϕ_j and consequently, (ϕ_3,j)^2 =(χ_j-1)^2- (χ_j)^2 . If (ξ,η) is in the support of (ϕ_0,j -ϕ_2,j)⊗ϕ_1,j, then 2^-k_j+3-2^-k_j≤ |ξ+η|≤ 2^-k_j-1+2^-k_j. In this range, χ_j-1(ξ+η) is constant 1 and χ_j(ξ+η) is constant zero. We can therefore introduce artificial factors ϕ_3,j in K as follows, K(ξ,η) = ∑_j=1^J (ϕ_0,j -ϕ_2,j)(ξ) ϕ_1,j(η) = ∑_j=1^J (ϕ_0,j -ϕ_2,j)(ξ) ϕ_1,j(η)ϕ_3,j(-ξ-η). Taking the Fourier transform, we obtain for some universal constant C, K(u,v)= ∫_^2∑_j=1^J (ϕ_0,j -ϕ_2,j)(ξ)e^2π i uξϕ_1,j(η)e^2π i vηϕ_3,j(-ξ-η) dξ dη =C ∑_j=1^J ∫_(ϕ_0,j -ϕ_2,j)(u+p) ϕ_1,j(v+p)ϕ_3,j(p) dp, where we used that the integral of a function in ^3 over the diagonal {(p,p,p): p∈} equals the integral of its Fourier transform over the orthogonal complement of the diagonal, suitably normalized. Therefore, with S and Π_s as in the datum D_1, and doing a variable transformation p→ x_2+p, Λ_D_1,K((f_s)_s∈ S) = ∑_j=1^J∫_^6[ ∏_s∈ Sf_s(Π_s x) ] × (ϕ_0,j -ϕ_2,j)(x_3^0-x_0-x_1+p) ϕ_1,j(x_3^1-x_0-x_1+p)ϕ_3,j (x_2+p) dx dp. We apply Fubini in (<ref>) to have the integral in x_2 as the innermost and then apply the Cauchy-Schwarz inequality in x_0,x_1,x_3^0,x_3^1, p, which bounds |Λ_D_1,K((f_s)_s∈ S)| up to a constant by the geometric mean of ∑_j=1^J ∫_^5[∏_i=0,1 |f_2(x_0,x_1,x_3^i)|^2 ] μ_j(x_3^0-x_0-x_1+p,x_3^1-x_0-x_1+p) dx_0 dx_1 dx_3^0 dx_3^1 dp and ∑_j=1^J ∫_^5[ ∫_[ ∏_i=0,1 f_0(x_3^i, x_1,x_2) f_1(x_0,x_3^i,x_2) ] ϕ_3,j (x_2+p) dx_2 ]^2 ×μ_j(x_3^0-x_0-x_1+p,x_3^1-x_0-x_1+p) dx_0 dx_1 dx_3^0 dx_3^1 dp , where we have introduced the weight μ_j defined by μ_j(u,v)=| ϕ_0,j -ϕ_2,j|(u) |ϕ_1,j|(v). We will estimate (<ref>) as ≲ J and (<ref>) as ≲ 1, thereby proving Proposition <ref>. We begin with (<ref>). Applying the Cauchy-Schwarz inequality in the remaining integration variables, we bound (<ref>) by ∑_j=1^J ∏_i=0,1[ ∫_^5|f_2(x_0,x_1,x_3^i)|^4 μ_j(x_3^0-x_0-x_1+p,x_3^1-x_0-x_1+p) dx_0 dx_1 dx_3^0 dx_3^1 dp ]^1/2 = ∑_j=1^Jf_2_4^4 μ_j_1≲ J. Here the identity is seen by integrating first in x_3^1-i then in p to obtain the L^1 norm of μ_j. It remains to estimate (<ref>). We use decay of μ_j thanks to control of derivatives of Fourier transform of windows and the superposition estimate (1+|(u,v)|)^-N-20≲∫_1^∞ g_(α)(u) g_(α)(v) dα/α^N+10, which we scale isotropically and unisotropically, to dominate μ_j(u,v)≲∫_1^∞ g_(α 2^k_j)(u) g_(α 2^k_j)(v) dα/α^N+10 +∫_1^∞ g_(α 2^k_j-1)(u) g_(α 2^k_j)(v) dα/α^N+10. By superposition of positive terms, it suffices to estimate as ≲α^N the variant of (<ref>) with μ_j(u,v) replaced by g_(α2^l_j)(u)g_(α2^k_j)(v) and for each of the sequences l_j=k_j and l_j=k_j-1. Define the sequence of real numbers (m_j)_j=1^J by 2^2m_j+2^2m_j=2^2k_j+ 2^2l_j. Note that k_j-1≤ m_j≤ k_j, because l_j≤ k_j. Adding and subtracting terms, it suffices to estimate as ≲α^N the variants of (<ref>) with μ_j(u,v) replaced by g_(α 2^m_j)(u)g_(α 2^m_j)(v) and by (ν_j)_(α)(u,v), where ν_j(u,v):= g_( 2^l_j)(u)g_( 2^k_j)(v)- g_( 2^m_j)(u)g_( 2^m_j)(v). We begin with (<ref>). We need to estimate ∑_j=1^J ∫_^5[ ∫_[ ∏_i=0,1f_0(x_3^i, x_1,x_2) f_1(x_0,x_3^i,x_2) ] ϕ_3,j (x_2+p) dx_2 ]^2 × g_(α 2^m_j)(x_3^0-x_0-x_1+p)g_(α 2^m_j)(x_3^1-x_0-x_1+p) dx_0 dx_1 dx_3^0 dx_3^1 dp . 
A renaming of variables, naming the variable x_2 that is twice an integration variable once as x_2^0 and once as x_2^1, then renaming the variables x_0,x_1,x_2^0,x_2^1,x_3^0,x_3^1 in this order as x_1,x_0,x_3^0,x_3^1,x_2^0,x_2^1, and finally introducing functions f_0(a,b,c)=f_0(b,a,c) and f_1=f_1, we write (<ref>) as ∑_j=1^J ∫_^5[ ∏_i=0^1∫_f_0(x_0, x_2^0,x_3^i) f_1(x_1,x_2^0,x_3^i) f_0(x_0,x_2^1,x_3^i) f_1(x_1,x_2^1,x_3^i) ϕ_3,j (x_3^i+p) dx_3^i ] × g_(α 2^m_j)(x_2^0-x_0-x_1+p)g_(α 2^m_j)(x_2^1-x_0-x_1+p) dx_0 dx_1 dx_2^0 dx_2^1 dp. Introducing for the datum D_2 the tuple f_(k,j)=f_k for k=0,1 and j∈𝒞, we may write (<ref>) as Λ_D_2,K_1((f_s)_s∈ S), with K_1(u,v,z)= ∑_j=1^J ∫_g_(α 2^m_j)(u+p) g_(α 2^m_j)(v+p) ϕ_3,j(z+p)ϕ_3,j(p) dp . Proposition <ref> implies Λ_D_2,K_1((f_s)_s∈ S)≲α^N. It remains to estimate the term with (<ref>). We may assume l_j=k_j-1, because (<ref>) vanishes in the case k_j=l_j. With similar transformations as for term (<ref>), we write the form associated with (<ref>) as Λ_D_2,K_2((f_s)_s∈ S) with K_2(u,v,z)= ∑_j=1^J ∫_(ν_j)_(α)(u+p,v+p) ϕ_3,j(z+p)ϕ_3,j(p) dp . We decompose ν_j=∑_n∈ν_j,n, where ν_j,0(ξ,η)= ν_j(ξ,η) ((χ_(2^k_j-1))^2-(χ_(2^k_j))^2)(ξ+η), and for n<0, ν_j,n(ξ,η)= ν_j(ξ,η) ((χ_(2^k_j-1+n))^2-(χ_(2^k_j-1+n+1))^2)(ξ+η) and for n>0, ν_j,n(ξ,η)= ν_j(ξ,η) ((χ_(2^k_j+n-1))^2-(χ_(2^k_j+n))^2)(ξ+η). We split K_2=∑_n∈ K_2,n accordingly and estimate for each n Λ_D_2,K_2,n((f_s)_s∈ S)≲ 2^-|n|. Upon summing over n, we obtain the desired bound for (<ref>). We begin with n=0. We have, similarly as in (<ref>), for some universal constant C, K_2(u,v,z)=C∑_j=1^J ∫_^3(ν_j)_(α)(ξ,η)e^2π i (uξ+vη)ϕ_3,j(τ)e^2π i zτϕ_3,j(-τ-ξ-η) dξ dη dτ and thus K_2,0(ξ,η,τ)=C∑_j=1^J ((χ_(α 2^k_j-1))^2-(χ_(α 2^k_j))^2)(ξ+η)(ν_j)_(α)(ξ,η)ϕ_3,j(τ)ϕ_3,j(-τ-ξ-η). Preparing to apply Proposition <ref>, we note that K_2,0 is of the form (<ref>) with ρ_j defined by ρ_j(u_1,u_2,u_3,u_4):= (ν_j)_(α)(u_1,u_2)ϕ_3,j(u_3)ϕ_3,j(u_4), as can be seen from the Fourier transform side (<ref>). We do not attempt to show that ρ_j itself satisfies the assumptions of Proposition <ref>, but we split into six pieces by the distributive law, splitting ν_j into two pieces as in its definition (<ref>) and each ϕ_3,j into two as in its definition (<ref>). A typical piece is g_(α 2^l_j)(u_1) g_(α 2^k_j)(u_2) χ_j-1(u_3)ϕ_j(u_4), which satisfies the assumptions of Proposition <ref>, because ∫_^2 g_(α 2^l_j)(u_1+p) g_(α 2^k_j)(u_2+p) χ_j-1(u_3+r)ϕ_j(u_4+r) dpdr ≲ (g*g)_(α 2^1+k_j)(u_1-u_2) 2^-k_j(1+2^-k_j|u_3-u_4|)^-2. This along with similar estimates for the other eleven pieces completes the bound for Λ_D_2,K_2,0((f_s)_s∈ S) by Proposition <ref>. We turn to n>0. We introduce artificial factors that are constant 1 where relevant, using that the sequence k_j is well separated, and write K_2,n(ξ,η,τ) = C∑_j=1^J ((χ_(α 2^k_j-1+n+1))^2-(χ_(α 2^k_j+n+1))^2)(ξ+η) × ((χ_(α 2^k_j+n-1))^2-(χ_(α 2^k_j+n))^2)(ξ+η) (ν_j)_(α)(ξ,η)ϕ_3,j(τ)ϕ_3,j(-τ-ξ-η) . This kernel is of the form (<ref>) with ρ_j(ξ,η,τ, σ)= (ν_j,n)_(α)(ξ,η)ϕ_3,j(τ)ϕ_3,j(σ), with ν_j,n defined in (<ref>). We break both functions ϕ_3,j into pieces as above. All pieces are done similarly, we discuss a typical piece of ρ_j given by ϱ_j(ξ,η,τ, σ)= (ν_j,n)_(α)(ξ,η)χ_j-1(τ)ϕ_j(σ) . Using that χ_j and ϕ_j are even, we have ∫_χ_j-1(u_3+r)ϕ_j(u_4+r) dr=(χ_j-1*ϕ_j)(u_3-u_4)≲ 2^-k_j(1+2^-k_j|u_3-u_4|)^-2. With Lemma <ref> below, we obtain ∫_^2|ϱ_j|(u_1+p,u_2+p,u_3+r,u_4+r) dpdr ≲ 2^-nα^-1 2^- k_j(1+α^-1 2^- k_j |u_1-u_2|)^-2 2^-k_j(1+2^-k_j|u_3-u_4|)^-2. 
Proposition <ref> gives Λ_D_2,K_2,n((f_s)_s∈ S)≲ 2^-n, as desired. We have for every 1≤ j≤ J and every x,y∈ the estimate |ν_j,n(x,y)|≲ 2^-n 2^- k_j(1+2^- k_j|x-y|)^-42^-n- k_j(1+2^-n- k_j |x+y|)^-4. Scaling by a factor 2^k_j allows us to assume k_j=0 and -1≤ m_j≤ 0 and l_j≤ 0. We fix j and omit the index j. We thus have to prove |ν_n(x,y)|≲ 2^-2n (1+|x-y|)^-4(1+2^-n |x+y|)^-4, where ν_n(ξ,η)= ((χ_( 2^-1))^2-(χ)^2)(2^n(ξ+η))ν(ξ,η) with ν(ξ,η)= g(2^lξ) g(η)- g(2^mξ)g(2^mη). We claim that for 0≤α≤ 4 and 0≤β≤ 4 and |ξ+η|≤ 1 |∂_(1,-1)^α∂_(1,1)^βν(ξ,η)| ≲ |ξ+η|^max{1-β,0} (1+|ξ-η|)^-2. For β>0, this follows by deriving (<ref>) and using the decay of Gaussians and their derivatives for those Gaussians whose argument contains m or k, because -1≤ m≤ 0 and k=0. Here we also use the fact that whenever |ξ+η|≤ 1 and |ξ-η|≥ 1, then the three quantitities |ξ-η|, |ξ| and |η| are comparable. We next estimate the term with β=0. By the choice of m, the function ν vanishes on the diagonal ξ+η=0, and thus the same property holds also for ∂_(1,-1)^αν. Therefore, |∂_(1,-1)^αν(ξ,η)| = |∂_(1,-1)^αν(ξ,η) -∂_(1,-1)^αν((ξ-η)/2,-(ξ-η)/2)| ≲ |ξ+η| |∫_0^1 ∂_(1,-1)^α∂_(1,1)ν((ξ-η)/2+r(ξ+η)/2, -(ξ-η)/2+r(ξ+η)/2) dr| ≲ |ξ+η|sup_0≤ r≤ 1 | ∂_(1,-1)^α∂_(1,1)ν((ξ-η)/2+r(ξ+η)/2, -(ξ-η)/2+r(ξ+η)/2)| ≲ |ξ+η| (1+|ξ-η|)^-2. Turning to ν_n as in (<ref>), using that χ_( 2^-1)^2-χ^2 is supported in [-2,2], we obtain by differentiating |ν_n(ξ,η)| ≲ 2^-n 1_|2^n(ξ+η)| <1 (1+|ξ-η|)^-2, |∂_(1,-1)^4 ν_n(ξ,η)| ≲ 2^-n 1_|2^n(ξ+η)| <1 (1+|ξ-η|)^-2, |∂_(1,1)^4 ν_n(ξ,η)| ≲ 2^3n 1_|2^n(ξ+η)| <1 (1+|ξ-η|)^-2, |∂_(1,-1)^4∂_(1,1)^4 ν_n(ξ,η)| ≲ 2^3n 1_|2^n(ξ+η)| <1 (1+|ξ-η|)^-2. Hence, estimating the Fourier inversion formula crudely by L^1→ L^∞ bounds, |ν_n(x,y)|≲ 2^-2n, |x-y|^4 |ν_n(x,y)|≲ 2^-2n, |x+y|^4 |ν_n(x,y)|≲ 2^2n, |x-y|^4 |x+y|^4 |ν_n(x,y)|≲ 2^2n. We can summarize these findings into (<ref>), as can be seen by splitting into four cases depending on whether 2^n≤ |x+y| or 2^n> |x+y| and depending on whether 1≤ |x-y| or 1> |x-y|. This proves the lemma. We finally turn to n<0. As in the previous case, we introduce an artificial factor and write K_2,n(ξ,η,τ) = C∑_j=1^J ((χ_(α 2^k_j-1+n-1))^2-(χ_(α 2^k_j+n-1))^2)(ξ+η) × ((χ_(α 2^k_j-1+n))^2-(χ_(α 2^k_j-1+n+1))^2)(ξ+η) (ν_j)_(α)(ξ,η)ϕ_3,j(τ)ϕ_3,j(-τ-ξ-η). This kernel is of the form (<ref>) with ρ_j(ξ,η,τ, σ)= (ν_j,n)_(α)(ξ,η)ϕ_3,j(τ)ϕ_3,j(σ) with ν_j,n as in (<ref>). We break both functions ϕ_3,j into pieces as above. All pieces are done similarly, we discuss a typical piece of ρ_j given by ϱ_j(ξ,η,τ, σ)= (ν_j,n)_(α)(ξ,η)χ_j-1(τ)ϕ_j(σ) . With Lemma <ref>, we obtain ∫_^2|ϱ_j|(u_1+p,u_2+p,u_3+r,u_4+r) dpdr ≲ 2^nα^-1 2^- k_j(1+α^-1 2^- k_j |u_1-u_2|)^-42^-k_j(1+2^-k_j|u_3-u_4|)^-2. Proposition <ref> gives Λ_D_2,K_2,n((f_s)_s∈ S)≲ 2^n, as desired. We have for every 1≤ j≤ J and every x,y∈ the estimate |ν_j,n(x,y)|≲ 2^n 2^- k_j-1(1+2^- k_j-1|x|)^-42^- k_j(1+2^- k_j |y|)^-4 We split the function ν_j(ξ,η)= g(2^l_jξ) g(2^k_jη)- g(2^m_jξ)g(2^m_jη) into its two summands and consider the summands separately. Consider the term g(2^l_jξ) g(2^k_jη). Scaling by the factor 2^l_j in ξ and 2^k_j in η reduces the matter to proving |μ_n(x,y)| ≲ 2^n(1+|x|)^-4 (1+|y|)^-4 where μ_n(ξ,η)= (χ^2-χ_(2)^2)(2^nξ+2^n+lη)g(ξ) g(η) with l ≤ 0. On the support of the function (χ^2-χ_(2)^2)(2^nξ+2^n+lη), we have |∂_ξ^α∂_η^β g(ξ) g(η)|≲ 2^n (1+|ξ|)^-4 (1+|η|)^-4 for all 0≤α,β≤ 4. By the Leibniz rule, analogous bounds hold for μ_n. The function μ_n then satisfies the bound (<ref>). 
This is the desired estimate for the term g(2^l_jξ) g(2^k_jη). To estimate the term g(2^m_jξ) g(2^m_jη), we rescale by 2^m_j in both variables and claim |μ_n(x,y)| ≲ 2^n 2^5l(1+|x|)^-4 (1+|y|)^-4 where μ_n(ξ,η)= (χ^2-χ_(2)^2)(2^n+l(ξ+η))g(ξ) g(η) and l=k_j-1-m_j ≤ 0. This follows similarly as before, using the decay of the Gaussians. As 2^5l(1+|x|)^-4≲ 2^-l(1+2^-l|x|)^-4, this completes the proof of the lemma. § PROOF OF PROPOSITION <REF> USING PROPOSITIONS <REF>, <REF>, AND THEOREM 1.1 IN Let α≥ 1. Let J be a positive integer and (k_j)_j=0^J a finite increasing sequence of integers with k_j-1+10 ≤ k_j for 1≤ j ≤ J, let (m_j)_j=1^J be a sequence of real numbers with k_j-1≤ m_j≤ k_j. For 0≤ j≤ J, let χ_j be a function such that (χ_j)_(2^2-k_j) is a left window and let ϕ_j be as in the statement of the proposition, i.e. (ϕ_j)^2=(χ_j-1)^2- (χ_j)^2. Let a tuple (f_s)_s∈ S be given as in (<ref>), (<ref>) and write f_(0,j)=f_0, f_(1,j)=f_1 for any j ∈𝒞. Taking the Fourier transform, the kernel K of the proposition reads as K(ξ,η,τ) =α^-N∑_j=1^J g_(α 2^m_j) (ξ) g_(α 2^m_j)(η) ϕ_j(τ) ϕ_j(-ξ-η-τ). Define the kernel K_1 by K_1(ξ,η,τ) :=α^-N∑_j=1^J g_(α 2^m_j)(ξ)g_(α 2^m_j)(η) (χ_j-1(τ)χ_j-1(-τ-ξ-η) - χ_j(τ)χ_j(-τ-ξ-η) ). Therefore, on the critical space ξ+η=0, the kernels are equal, i.e., for all ξ,τ we have K(ξ,-ξ,τ) = K_1(ξ,-ξ,τ). By the triangle inequality, it suffices to estimate Λ_D_2,K-K_1 and Λ_D_2,K_1. We begin with the latter. Since α≥ 1, we observe that it in fact suffices to prove the (stronger) bound |Λ_D_2,α^NK_1((f_s)_s∈ S)| ≲ 1. Define the kernel K_2 by K_2(ξ,η,τ) := ∑_j=1^J (g_(α 2^m_j-1)(ξ)g_(α 2^m_j-1)(η) - g_(α 2^m_j)(ξ)g_(α 2^m_j)(η)) χ_j-1(τ)χ_j-1(-τ-ξ-η) and define σ_j(ξ,η,τ):= g_(α 2^m_j)(ξ)g_(α 2^m_j)(η) χ_j(τ)χ_j(-τ-ξ-η). Here, we formally set m_0=k_0. By telescoping, we have α^NK_1+K_2 = σ_0 - σ_J. For each j, Λ_D_2,σ_j((f_s)_s∈ S) equals ∫_^7[∏_s∈ S f_s(Π_s x) ] g_(α 2^m_j) (x_2^0-x_0-x_1+p) g_(α 2^m_j)(x_2^1-x_0-x_1+p) χ_j (x_3^0+p)χ_j (x_3^1+p) dxdp, where x=(x_0,x_1,x_2^0,x_2^1,x_3^0,x_3^1). This can be estimated using a classical Brascamp-Lieb inequality as |Λ_D_2,σ_j((f_s)_s∈ S)| ≲g_(α 2^m_j)^2_1 χ_j_1^2∏_s∈ Sf_s_8≲ 1. One can verify this Brascamp-Lieb inequality by interpolation between estimates that put one of the functions f_s in L^1 and all others in L^∞. The estimate of Λ_D_2,α^NK_1 is thus reduced to an estimate of Λ_D_2,K_2, which we now proceed to do. We use the fundamental theorem of calculus to split up a difference of Gaussians with parameters a,b as g(aξ)g(aη) - g(bξ)g(bη) = ∫_a^b -t∂_t ( g(tξ) g(tη)) dt/t = 2π∫_a^b t^2(ξ^2+η^2) g(tξ) g(tη) dt/t = 2π∫_a^b t^2(ξ+η)^2 g(tξ) g(tη) dt/t - 4π∫_a^b t^2ξη g(tξ) g(tη) dt/t . Using this splitting, in place of Λ_D_2,K_2 we may estimate Λ_D_2,K_3 and Λ_D_2,K_4 with K_3(ξ,η,τ):= ∑_j=1^J ∫_α 2^m_j-1^α 2^m_j t^2(ξ+η)^2 g(tξ) g(tη) dt/t χ_j-1(τ)χ_j-1(-τ-ξ-η) and, using h=g' and that h(ξ) is a constant multiple of ξg(ξ), K_4(ξ,η,τ):= ∑_j=1^J ∫_α 2^m_j-1^α 2^m_jh (tξ) h(tη) dt/t χ_j-1(τ)χ_j-1(-τ-ξ-η). Proposition <ref> gives |Λ_D_2,K_3((f_s)_s∈ S)| ≲ 1 . We turn to Λ_D_2,K_4, which we write on the spatial side as ∑_j=1^J ∫_α 2^m_j-1^α 2^m_j∫_^5[ ∫_[ ∏_i=0,1 f_0(x_0, x_2^i,x_3) f_1(x_1,x_2^i,x_3) ] h_(t) (x_3+p) dx_3 ]^2 ×χ_j-1(x_2^0-x_0-x_1+p) χ_j-1(x_2^1-x_0-x_1+p) dx_0 dx_1 dx_2^0 dx_2^1 dp dt/t. Using positivity of the square in this expression, we may dominate |χ_j-1(u)χ_j-1(v)|≤∫_1^∞ g_(β2^m_j-1)(u)g_(β2^m_j-1)(v) β^-N dβ/β. 
Then it suffices to estimate for fixed β≥ 1 the form Λ_D_2,K_5 where K_5(ξ,η,τ) := ∑_j=1^J ∫_α 2^m_j-1^α 2^m_jh_(t)(ξ)h_(t)(η) dt/t g_(β 2^m_j-1)(τ)g_(β 2^m_j-1)(-τ-ξ-η) . We introduce a new kernel K_6 (ξ,η,τ) = ∑_j=1^Jg_(α 2^m_j)(ξ)g_(α 2^m_j)(η) ∫_β 2^m_j-1^β 2^m_jh_(t)(τ)h_(t)(-τ-ξ-η) dt/t. The kernel K_6 is symmetric to K_5 under the symmetry (<ref>). We note that, for some M, which is even in all variables and symmetric under switching the first two variables or switching the second two variables, K_5(x,y,z)=∫_ M(x+p,y+p,z+p,p) dp. With K_5 as defined near (<ref>), we have K_5(x,y,z)=∫_ M( x+y+z/2+p,x-y+z/2+p,z+p,p) dp =∫_ M(-p,-y-p,-x-z+y/2-p,-x+z+y/2-p) dp, where we obtained the last identity by the substitution of p by -p-x+y+z/2. For K_5^* as defined near (<ref>), we obtain K_5^*(x,y,z)=∫_ M(-p,-z-p,-x-y+z/2-p,-x+y+z/2-p) dp. Using that M is an even function and that it is invariant under interchanging the first two entries or the second two entries, we obtain K_5^*(x,y,z)=∫_ M(z+p,p,x+y+z/2+p,x-y+z/2+p) dp. Inverting the tilde operation, we identify the kernel K_6^*(x,y,z)=∫_ M(z+p,p,x+p,y+p) dp. Hence, the star symmetry acts on M by interchanging the first two variables with the second two variables in M. As Λ_D_2,K_5((f_s)_s∈ S) is positive by the above construction, it follows by symmetry that Λ_D_2,K_6 is positive as well and it suffices to estimate the sum Λ_D_2,K_5+K_6. We reverse the arguments leading from K_2 to K_4, with a Gaussian in place of χ_j-1, and apply these arguments both to K_5 and symmetrically to K_6. In place of Λ_D_2,K_3, we obtain the corresponding forms Λ_D_2,K_7 and Λ_D_2,K_8 with K_7(ξ,η,τ):= ∑_j=1^J ∫_α 2^m_j-1^α 2^m_j t^2(ξ+η)^2 g(tξ) g(tη) dt/t g_(β 2^m_j-1)(τ)g_(β 2^m_j-1)(-τ-ξ-η), K_8(ξ,η,τ):= ∑_j=1^J g_(α 2^m_j)(ξ)g_(α 2^m_j)(η)∫_β 2^m_j-1^β 2^m_j t^2(ξ+η)^2 g(tτ) g(t(-τ-ξ-η)) dt/t. Note that to arrive at K_8, in place of symmetry arguments, we may also use in place of (<ref>) the identity (ξ+η)^2+2τ(τ+ξ+η) =τ^2+(τ+ξ+η)^2. The forms Λ_D_2,K_7 and symmetrically Λ_D_2,K_8 are estimated analogously to Λ_D_2,K_3 using Proposition <ref>. Having thus reverted the above steps and having arrived at the analogue of Λ_D_2,K_2, we have reduced the bound of Λ_D_2,K_5+K_6 to a bound on Λ_D_2,K_9 with K_9(ξ,η,τ) = ∑_j=1^J [g_(α 2^m_j-1)(ξ)g_(α 2^m_j-1)(η) - g_(α 2^m_j)(ξ)g_(α 2^m_j)(η)] g_(β 2^m_j-1)(τ)g_(β 2^m_j-1)(-τ-ξ-η) + ∑_j=1^J g_(α 2^m_j)(ξ)g_(α 2^m_j)(η) ×[ g_(β 2^m_j-1)(τ)g_(β 2^m_j-1)(-τ-ξ-η) -g_(β 2^m_j)(τ)g_(β 2^m_j)(-τ-ξ-η) ] =g_(α 2^m_0)(ξ)g_(α 2^m_0)(η)g_(β 2^m_0)(τ)g_(β 2^m_0)(-τ-ξ-η) -g_(α 2^m_J)(ξ)g_(α 2^m_J)(η)g_(β 2^m_J)(τ)g_(β 2^m_J)(-τ-ξ-η) , where in the last identity we have telescoped the sum. We then obtain |Λ_D_2,K_9((f_s)_s∈ S)| ≲ 1 by a standard Brascamp-Lieb inequality analogously to the bound (<ref>). This completes the bound for Λ_D_2,K_1. It remains to estimate Λ_D_2,K-K_1. We have (K-K_1)(ξ,η,τ) = α^-N∑_j=1^J (g_(α 2^m_j) (ξ) g_(α 2^m_j)(η) ψ_j(τ, -τ-ξ-η) with ψ_j=ϕ_j⊗ϕ_j -(χ_j-1⊗χ_j-1 - χ_j⊗χ_j ). Define ϑ_1,j = ϕ_j - χ_j-1 ϑ_2,j = χ_j-1 - (χ_j)_(2^-4) ϱ_j =ψ_j - ϑ_1,j⊗ϑ_2,j - ϑ_2,j⊗ϑ_1,j K_10(ξ,η,τ) = ∑_j=1^J (g_(α 2^m_j) (ξ) g_(α 2^m_j)(η) ϑ_1,j(τ)ϑ_2,j(-τ-ξ-η) K_11(ξ,η,τ) = ∑_j=1^J (g_(α 2^m_j) (ξ) g_(α 2^m_j)(η) ϑ_2,j(τ)ϑ_1,j(-τ-ξ-η) K_12(ξ,η,τ) = α^-N∑_j=1^J (g_(α 2^m_j) (ξ) g_(α 2^m_j)(η) ϱ_j(τ, -τ-ξ-η) By the triangle inequality, it remains to estimate Λ_D_2,K_10, Λ_D_2,K_11, Λ_D_2,K_12 separately. We begin with Λ_D_2,K_10. Recall that (χ_j)_(2^2-k_j) is a left window. 
If (τ, -τ-ξ-η) is in the support of ϑ_1,j⊗ϑ_2,j, then |τ| ≤ 2^-k_j+2, 2^-k_j+5≤ |τ+ξ+η| ≤ 2^-k_j-1+2, 2^-k_j+4<|ξ+η|<2^-k_j-1+3. Defining ϑ_3,j: = (ϕ_j)_(2^-2) we have K_10(ξ,η,τ) = ∑_j=1^J g_(α 2^m_j) (ξ) g_(α 2^m_j)(η)ϑ_1,j(τ) ϑ_2,j(-τ-ξ-η) ϑ_3,j(ξ+η)^2 because the additional factor involving ϑ_3,j is constant one on the support of the original summand in the definition of K_10. The bound |Λ_D_2,K_10((f_s)_s∈ S)| ≲ 1 then follows from Proposition <ref> applied with ρ_j:=g_(α 2^m_j)⊗g_(α 2^m_j)⊗ϑ_1,j⊗ϑ_2,j. The form Λ_D_2,K_11 is estimated analoguously to the form Λ_D_2,K_10. It remains to estimate Λ_D_2,K_12. This form is a more standard singular Brascamp-Lieb form with a kernel associated with a Hörmander-Mikhlin multiplier and we will apply Theorem 1.1 in <cit.>, which was the reason to set N=2^18. That theorem will give |Λ_D_2,K_12((f_s)_s∈ S)| ≲ 1 provided |∂^γK_12(ξ,η,τ)| ≲ |(ξ,η,τ)|^-|γ| for all multi-indices γ of order 0≤ |γ|≤ N. The assumption of that theorem that Π_sΠ^T is regular for the present datum D_2 is satisfied. It thus remains to show (<ref>). By definition of ψ_j and ϑ_1,j, we obtain ψ_j=χ_j-1⊗ϑ_1,j + ϑ_1,j⊗χ_j-1+ ϑ_1,j⊗ϑ_1,j + χ_j⊗χ_j . Using further the definition of ϱ_j and ϑ_2,j, we obtain ϱ_j=(χ_j)_(2^-4)⊗ϑ_1,j + ϑ_1,j⊗ (χ_j)_(2^-4)+ ϑ_1,j⊗ϑ_1,j + χ_j⊗χ_j. Note that ϑ_1,j vanishes outside [-2^-k_j+2,2^-k_j+2]. Hence ϱ_j is supported on the ball of radius 2^10-k_j around the origin. In addition, ϑ_1,j coincides with -1 on [-2^-k_j+1,2^-k_j+1]. Using that (χ_j)_(2^2-k_j) is a left window, we then see that the Fourier transform of the first two terms on the right-hand side of (<ref>) is equal to -1 on [-2^-k_j+1,2^-k_j+1]^2 while the Fourier transform of the last two terms coincides with 1 on the same set. Therefore, ϱ_j vanishes inside the ball of radius 2^-k_j around the origin. The support properties of ϱ_j together with the estimates |ϱ_j| ≲ 1 and g ≲ 1 yield that |K_12| ≲ 1. Assume next that β is a multi-index with 1≤ |β|≤ N. Then ρ_j satisfies symbol estimates adapted to the ball of radius 2^11-k_j around the origin, namely |∂^βϱ_j(τ,σ)| ≲ 2^k_j|β| 1_|(τ,σ)| ≤ 2^11-k_j. Now assume first |ξ-η|≤ |(ξ+η,τ)|. Then, using that all derivatives of g up to order N are ≲ 1, and using that |m_j-k_j|≤ 1 and α≥ 1, |∂^βK_12(ξ,η,τ)|≲α^-N∑_j=1^J(α 2^k_j)^|β| 1_|(τ,τ+ξ+η)|≤ 2^11-k_j. Using further that α≥ 1 and |β| ≤ N we estimate the last display by ≲ |(τ,ξ+η)|^-|β|≲ |(τ,ξ,η)|^-|β|, where in the last inequality we have used |ξ-η|≤ |(τ, ξ+η)|. Now assume to the contrary that |ξ-η|≥ |(ξ+η,τ)|. Then we use that |∂^β g(ξ)|≲ e^-|ξ| for all |β|≤ N. Then |∂^βK_12(ξ,η,τ)|≲α^-N∑_j=1^J(α 2^k_j)^|β| e^-α 2^k_j|ξ-η|≲ |ξ-η|^-|β| ∑_j=1^J(2^k_j|ξ-η|)^|β| e^ -2^k_j|ξ-η| ≲ |ξ-η|^-|β|∑_n∈ 2^n|β|e^-2^n≲ |(ξ,η,τ)|^-|β|. § PROOF OF PROPOSITIONS <REF> AND <REF> The proofs of these propositions have some similarities, so we put them into one section and do the second proof analogously to the first. §.§ Proof of Proposition <ref> For 1≤ i≤ 2 let (a_i,j)_j=1^J be increasing sequences of positive real numbers, we choose a_i,0>0 so that (a_i,j)_j=0^J is still increasing. For 1≤ j ≤ J let ρ_j:^4→ be a continuous function satisfying (<ref>), and pick a further such function ρ_0:^4→. Let (c_j)_j=0^J be a well separated increasing sequence of positive numbers. Let χ be a left window, and let ϕ_j be a function on which for 1≤ j ≤ J satisfy ϕ_j≥ 0 and (ϕ_j)^2 = (χ_(c_j-1))^2 - (χ_(c_j))^2. Let K be defined by (<ref>) and let a tuple (f_s)_s∈ S be given as in (<ref>), (<ref>). 
The integrand of the integral expressing Λ_D_2,K((f_s)_s∈ S) factors into functions depending on x_0 and functions depending on x_1. We write the integrals in x_0 and x_1 innermost and separate these. With p_0:=p and p_1:=q we obtain Λ_D_2,K((f_s)_s∈ S)= ∑_j=1^J ∫_^7[∏_i=0,1∫_[ ∏_q∈𝒞 f_(i,q)(Π_(i,q)x) ] ϕ_j(x_i+p_i) dx_i] ×ρ_j(x_2^0+p_0+p_1+r,x_2^1+p_0+p_1+r,x_3^0+r,x_3^1+r) dx_2^0 dx_2^1 dx_3^0 dx_3^1 dp_0 dp_1 dr . Applying the Cauchy-Schwarz inequality in the seven exterior variables bounds the last display by the geometric mean of two forms, parameterized by i=0,1, which with the change of variables p_1-i→ p_1-i-p_i-r we write as ∑_j=1^J ∫_^7[ ∫_[ ∏_q∈𝒞 f_(i,q)(Π_(i,q)x) ] ϕ_j(x_i+p_i) dx_i]^2 × |ρ_j|(x_2^0+p_1-i,x_2^1+p_1-i,x_3^0+r,x_3^1+r) dx_2^0dx_2^1dx_3^0dx_3^1dp_0dp_1dr . Fix i and write f for f_(i,j), which thanks to (<ref>) does not depend on j. Using the decay (<ref>) for ρ_j, we dominate ∫_^2 |ρ_j | (x_2^0+p_1-i,x_2^1+p_1-i,x_3^0+r,x_3^1+r) dp_1-i dr ≲∫_1^∞∫_1^∞ (g*g)_(α a_1,j) (x_2^0-x_2^1) (g*g)_(β a_2,j)(x_3^0-x_3^1) dα/α^2dβ/β^2. It suffices to consider fixed α and β, and prove uniform bounds in α and β for (<ref>) with (<ref>) replaced by (g*g)_(α a_1,j) (x_2^0-x_2^1) (g*g)_(β a_2,j)(x_3^0-x_3^1) . Modifying the sequences a_i,j if necessary, we may assume α=β=1. Expanding the square in (<ref>) and integrating in p_i, our task becomes to show ∑_j=1^J ∫_^6[ ∏_s∈ S f (Π_sx)] (ϕ_j*ϕ_j)(x_1^0-x_1^1) (g * g)_a_1,j(x_2^0-x_2^1) (g * g)_ a_2,j(x_3^0-x_3^1) dx ≲ 1, where S and (Π_s)_s∈ S are as in the datum D_-I, which is the datum defined in (<ref>) in the case A=-I. Define the kernels K_1 := ∑_j=1^J ((χ*χ)_(c_j-1) - (χ*χ)_(c_j))⊗ (g* g)_( a_1,j)⊗ (g * g)_( a_2,j), K_2:=∑_j=1^J (χ*χ)_(c_j-1)⊗ ((g* g)_( a_1,j-1) - (g* g)_( a_1,j))⊗ (g * g)_( a_2,j), K_3 := ∑_j=1^J (χ*χ)_(c_j-1)⊗ (g* g)_( a_1,j-1)⊗ ((g * g)_( a_2,j-1) - (g * g)_( a_2,j)), and for 0≤ j ≤ J also σ_j = (χ*χ)_(c_j)⊗ (g* g)_( a_1,j)⊗ (g * g)_( a_2,j). We have the telescoping identity K_1+K_2+K_3 = σ_0 - σ_J. The form (<ref>) to be estimated becomes Λ_D_-I, K_1((f)_s∈ S). For each 0≤ j ≤ J one has by a standard Brascamp-Lieb inequality |Λ_D_-I, σ_j((f)_s∈ S)| ≲ 1. It then suffices to estimate the forms associated with K_2 and K_3 instead. By symmetry, we will only elaborate on Λ_D_-I, K_2((f)_s∈ S). Next we would like to dominate |(χ*χ)_(c_j-1)| in these two forms by superposition of Gaussians in such a way that the cancellation is preserved. To do that, we will use the identity (g*g)_(a_1,j-1) - (g*g)_(a_1,j) = -1/π∫_a_1,j-1^a_1,j (h*h)_(t) dt/t, which follows by taking the Fourier transform of the identity g(aξ)^2 - g(bξ)^2 =-∫_a^b ∂_t g(tξ)^2 dt=1/π∫_a^b (2π tξ g(tξ))^2 dt/t =-1/π∫_a^b (h(tξ))^2 dt/t for any a,b>0. Using further that h is odd and thus - h*h(x-y) = ∫_ h(x+p)h(y+p) dp , we obtain Λ_D_-I, K_2((f)_s∈ S) = 1/π∑_j=1^J ∫_a_1,j-1^a_1,j∫_^5[∏_i=0,1∫_[ ∏_s(1)=i f (Π_sx) ] h_(t)(x_2^i+p) dx_2^i] × (χ*χ)_(c_j-1)(x_1^0-x_1^1) (g*g)_(a_2,j)(x_3^0-x_3^1) dx_1^0dx_1^1dx_3^0dx_3^1dp dt/t. The product over i=0,1 has two identical factors and thus is nonnegative. We may therefore estimate the last display by dominating |(χ*χ)_(c_j-1)|≲∫_1^∞ (g*g)_(β c_j-1)β^-5 dβ . It suffices to prove bounds of (<ref>) with (χ*χ)_(c_j-1) replaced by (g*g)_(β c_j-1) uniformly in β. Fix β. By changing c_j if necessary, we may assume β=1. 
Define again kernels K_4 := ∑_j=1^J ((g * g)_( c_j-1) - (g * g)_( c_j))⊗ (g* g)_( a_1,j)⊗ (g * g)_( a_2,j), K_5:=∑_j=1^J (g * g)_( c_j-1)⊗ ((g* g)_( a_1,j-1) - (g* g)_( a_1,j))⊗ (g * g)_( a_2,j), K_6 := ∑_j=1^J (g * g)_( c_j-1)⊗ (g* g)_( a_1,j-1)⊗ ((g * g)_( a_2,j-1) - (g * g)_( a_2,j)). Similarly as near (<ref>), Λ_D_-I, K_4((f)_s∈ S)+Λ_D_-I, K_5((f)_s∈ S)+Λ_D_-I, K_6((f)_s∈ S) telescopes into a form that is ≲ 1 by a standard Brascamp-Lieb inequality. We have seen above that Λ_D_-I, K_5((f)_s∈ S) is positive. By symmetric arguments, the other summands in (<ref>) are also positive. Hence each summand is ≲ 1. This completes the proof of Proposition <ref>. §.§ Proof of Proposition <ref> Let a positive integer J be given as well as increasing sequences of positive real numbers (a_j)_j=0^J, (b_j)_j=1^J. Pick b_0>0 so that (b_j)_j=0^J is an increasing sequence. For 1≤ j≤ J let ϕ_j be given as in (<ref>). Let K be defined by (<ref>). Let a tuple (f_s)_s∈ S be given as in (<ref>), (<ref>). We write t^2(ξ+η)^2 g(tξ)g(tη) = t^2(ξ+η)^2g(2^-3/2t(ξ+η))^2g(2^-1t(ξ-η)) g(2^-1/2tξ)g(2^-1/2tη) = -h(2^-3/2t(ξ+η))^2ρ(2^-3/2t(ξ,η)). with ρ(2^-3/2(ξ,η)):=2/π^2g(2^-1(ξ-η)) g(2^-1/2ξ)g(2^-1/2η). Hence passing to the spatial side as near (<ref>), replacing the arbitrary sequence a_j by 2^-3/2a_j to avoid the cumbersome factors 2^-3/2, K(u,v,z)= ∑_j=1^J ∫_^3∫_a_j-1^a_j h_(t)(p) h_(t)(q) ρ_(t)(u+p+q+r,v+p+q+r) dt/tϕ_j(z+r,r) dpdqdr. We thus have analoguously to (<ref>) Λ_D_2,K((f_s)_s∈ S)= ∑_j=1^J ∫_^7∫_a_j-1^a_j[∏_i=0,1∫_[ ∏_q∈𝒞 f_(i,q)(Π_(i,q)x) ] h_(t)(x_i+p_i) dx_i] ×ρ_(t)(x_2^0+p_0+p_1+r,x_2^1+p_0+p_1+r) ϕ_j(x_3^0+r,x_3^1+r) dt/t dx_2^0 dx_2^1 dx_3^0 dx_3^1 dp_0 dp_1 dr . Applying the Cauchy-Schwarz inequality as in (<ref>), we need to estimate for i=0,1 ∑_j=1^J ∫_^7∫_a_j-1^a_j[ ∫_[ ∏_q∈𝒞 f_(i,q)(Π_(i,q)x) ] h_(t)(x_i+p_i) dx_i]^2 × |ρ_(t)|(x_2^0+p_1-i,x_2^1+p_1-i)|ϕ_j|(x_3^0+r,x_3^1+r) dt/t dx_2^0dx_2^1dx_3^0dx_3^1dp_0dp_1dr . Thanks to the square, the above integrand is positive and we dominate |ρ_(t)|≲ g_(4t)⊗ g_(4t) and |ϕ_j|≲∫_1^∞ g_(β b_j)⊗ g_(β b_j)β^-3 dβ. It suffices to prove bounds with g_(β b_j)⊗ g_(β b_j) in place of |ϕ_j| uniformly in β. Fix β, we may assume β=1 by modifying the otherwise arbitrary sequence b_j. Performing the analogous steps as leading to (<ref>) we end up having to estimate Λ_D_-I, K_1((f)_s∈ S), where now K_1 := -∑_j=1^J ∫_a_j-1^a_j ( h*h)_(t)⊗ ( g* g)_(4 t) dt/t⊗ ( g* g)_(b_j) . Define K_2 := -∑_j=1^J ∫_a_j-1^a_j ( g*g)_( t)⊗ ( h* h)_(4 t) dt/t⊗ ( g * g)_(b_j) , K_3 := -∑_j=1^J (g*g)_(a_j-1)⊗ (g *g )_(4 a_j-1)⊗∫_b_j-1^b_j (h*h)_(t) dt/t, and for 0≤ j ≤ J also σ_j = (g*g)_(a_j)⊗ (g*g)_(4 a_j)⊗ (g*g)_(b_j) . Then we have the telescoping identity K_1 + K_2 + K_3 = π (σ_0 - σ_J). Indeed, this follows with (<ref>), which gives K_1 + K_2 = π∑_j=1^J ∫_a_j-1^a_j -t∂_t((g*g)_( t)⊗ (g* g)_(4 t) ) dt/t⊗ (g * g)_(b_j) = π∑_j=1^J ((g*g)_(a_j-1)⊗ (g* g)_(4 a_j-1) - (g*g)_ (a_j)⊗ (g* g)_(4a_j) ) ⊗ (g * g)_(b_j), and K_3 = π∑_j=1^J (g*g)_ (a_j-1)⊗ (g*g)_(4 a_j-1)⊗ ((g*g)_(b_j-1) - (g*g)_(b_j)). By the identity (<ref>), Λ_D_-I, K_1((f)_s∈ S) + Λ_D_-I, K_2((f)_s∈ S) + Λ_D_-I, K_3((f)_s∈ S) ≲ 1. All quantities on the left-hand side are non-negative. For Λ_D_-I, K_1((f)_s∈ S), this can be seen as it resulted after an application of the Cauchy-Schwarz inequality, while for Λ_D_-I, K_2((f)_s∈ S) and Λ_D_-I, K_3((f)_s∈ S) it follows by symmetry. This gives the desired upper bound Λ_D_-I, K_1((f)_s∈ S) ≲ 1. plain
http://arxiv.org/abs/2307.05751v1
20230709165255
Exponential Resummation of QCD at finite chemical potential
[ "Sabarnya Mitra" ]
hep-ph
[ "hep-ph", "hep-lat", "nucl-ex", "nucl-th" ]
http://arxiv.org/abs/2307.04358v1
20230710060523
False Sense of Security: Leveraging XAI to Analyze the Reasoning and True Performance of Context-less DGA Classifiers
[ "Arthur Drichel", "Ulrike Meyer" ]
cs.CR
[ "cs.CR", "cs.LG" ]
False Sense of Security: Leveraging XAI to Analyze the Reasoning and True Performance of Context-less DGA Classifiers
Arthur Drichel ([email protected]), RWTH Aachen University
Ulrike Meyer ([email protected]), RWTH Aachen University
The problem of revealing botnet activity through Domain Generation Algorithm (DGA) detection seems to be solved, considering that available deep learning classifiers achieve accuracies of over 99.9%. However, these classifiers provide a false sense of security as they are heavily biased and allow for trivial detection bypass. In this work, we leverage explainable artificial intelligence (XAI) methods to analyze the reasoning of deep learning classifiers and to systematically reveal such biases. We show that eliminating these biases from DGA classifiers considerably deteriorates their performance. Nevertheless, we are able to design a context-aware detection system that is free of the identified biases and maintains the detection rate of state-of-the-art deep learning classifiers. In this context, we propose a visual analysis system that helps to better understand a classifier's reasoning, thereby increasing trust in and transparency of detection methods and facilitating decision-making.
Copyright held by the owner/author(s) 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version was published in The 26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID ’23), https://doi.org/10.1145/3607199.3607231
§ INTRODUCTION In recent years, deep learning has been increasingly used as a building block for security systems incorporating classifiers that achieve high accuracies in various classification tasks. The advantage of deep learning classifiers is that they often outperform classical machine learning approaches, can be trained in an end-to-end fashion, and automatically learn to extract relevant features for classification. Therefore, less effort is often expended in creating such classifiers, since they seem to achieve high accuracies out-of-the-box and do not require the integration of domain knowledge as would be required to create feature-based or rule-based classifiers. This black-box nature of deep learning classifiers is particularly dangerous in the security domain, as the classifiers operate in an adversarial environment where an attacker actively aims to avoid detection. Since it is unclear what a classifier has learned, not only is its operation opaque, leading to trust issues, but it is also unclear whether the training data might have influenced a classifier in a way that an attacker could easily bypass the classification. Related work <cit.> has identified and summarized common pitfalls when using machine learning in computer security, including pitfalls that make it easier for an attacker to evade detection.
These pitfalls range from sampling bias, where the data used does not adequately represent the true data distribution, over inaccurate ground-truth labels, to incorporating spurious correlations, where artifacts unrelated to the classification problem provide shortcuts for distinguishing classes. To uncover potential classification biases introduced by these pitfalls, related work suggests using explainability techniques for machine learning. However, it remains unclear which strategy is appropriate to mitigate identified problems. In this work, we systematically apply explainability techniques to the use-case of Domain Generation Algorithm (DGA) detection to reveal a variety of biases in state-of-the-art deep learning classifiers. We then evaluate the loss in classification performance induced by the elimination of these biases from the classifiers and propose a classification system that is free of the identified biases. We focus on DGA detection because for this use-case a plethora of research exists, the state-of-the-art classifiers that achieve accuracies up to 99.9% are open source, and domains generated by different DGAs are publicly available in bulk through open source intelligence (OSINT) feeds such as DGArchive <cit.>. This allows us to replicate the results of related work before performing a critical analysis of automatic feature extraction. To this end, we first conduct an extensive evaluation of a variety of different explainability techniques including recent developments. Then, we demonstrate how these methods can be used to debug and improve the understanding of state-of-the-art classifiers. In this context, we identify features and classification biases and show how this knowledge can be exploited to evade detection with ease. To address these issues, we propose a classification system free of the identified biases combined with a visualization system that supports analysts in Security Operation Centers (SOCs), increases transparency and confidence in detection methods, and facilitates decision-making. Finally, as a secondary contribution, we use the knowledge gained from our study to improve the state-of-the-art deep learning as well as feature-based approaches for DGA multiclass classification in terms of classification performance and efficiency. Overall, we thus provide a systematic approach to expose biases and analyze the reasoning of deep learning classifiers for DGA detection. While some of these biases may seem obvious and easily avoidable, they are present even in DGA detection approaches proposed at leading security conferences (e.g., <cit.>). Moreover, these biases are rooted on subtle flaws that are rife in security research and affect many other use-cases as well <cit.>. Thus, with this work we aim to raise awareness of potential pitfalls in state-of-the-art classifiers that allow bypassing detection, and provide helpful guidance in conducting a similar analysis also for different use-cases. While features and biases are highly domain specific, the generation of explanations is completely independent of the underlying classification task. Hence, the fundamental idea of leveraging XAI to improve machine learning classifiers is applicable to a variety of different use-cases (e.g., phishing detection, malware detection, vulnerability discovery, or general network intrusion detection). § PRELIMINARIES The self-learned features of a deep learning classifier and thus potential biases in its classification decision are mostly use-case dependent. 
It is thus fundamental to understand the specifics of the classification task at hand, including the data used by state-of-the-art classifiers and the data preprocessing applied. §.§ Domain Generation Algorithm Detection Domain Generation Algorithms (DGAs) are used by malware infected devices to contact the botnet master's command and control (C2) server for updates or instructions (e.g., the target IP for a distributed denial-of-service (DDoS) attack). DGAs are pseudo-random algorithms which generate a large amount of domain names that the bots query one by one. The advantage of this approach over using fixed IP addresses or fixed domain names is that it creates an asymmetric situation where the botnet master only needs to register one domain, but the defenders have to block all generated domains. The botnet master knows the seed and the generation scheme and can thus register a DGA-generated domain in advance. When the bots query this domain, they get the valid C2 server's address, while all other queries result in non-existent domain (NXD) responses. §.§ State-of-the-Art Classifiers To combat DGAs, binary detection approaches have been proposed in the past, capable of distinguishing benign domains from DGA-generated domains with high probability and low false-positive rates (e.g., <cit.>). Going a step further, multiclass classifiers have been proposed that can not only separate benign domains from DGA-generated domains, but are also able to associate malicious domains with the DGA that generated them, allowing for the identification and targeted remediation of malware families (e.g., <cit.>). In general these approaches can be divided into two groups: context-less (e.g., <cit.>) and context-aware (e.g., <cit.>) approaches. Context-less approaches work exclusively with information that can be extracted from a single domain name, while context-aware approaches use additional information, such as statistical data from the monitored network, to further improve detection performance. Previous studies (e.g., <cit.>) have shown that context-less approaches achieve similar or even higher performance while requiring less resources and being less intrusive than context aware approaches. Furthermore, the machine learning classifiers can additionally be divided into feature-based classifiers such as support vector machines (SVMs) or random forests (RFs) (e.g., <cit.>), and feature-less (deep learning-based) classifiers such as recurrent (RNNs), convolutional (CNNs), or residual neural networks (ResNets) (e.g., <cit.>). Previous studies (e.g., <cit.>) have shown that feature-less approaches achieve superior classification performance. The currently best deep learning-based classifier for binary and multiclass classification is ResNet <cit.>. Hence, we analyze the reasoning of this particular classifier in detail. In addition, we use the insights gained from our analysis to identify missing features in EXPLAIN <cit.>, currently the most powerful feature-based multiclass classifier, and seek to bring its classification performance up to the state-of-the-art level. In the following, we briefly introduce both classifier types. Detailed information on the implementations of each classifier can be found in <cit.>. §.§.§ ResNet Drichel et al. <cit.> proposed ResNet-based models for DGA binary and multiclass classification. The classifiers are constructed from residual blocks containing skip connections between convolutional layers to counteract the vanishing gradient problem. 
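To make this building block concrete, the following is a minimal sketch of a character-level residual block in a Keras-style notation; the vocabulary size, kernel width, and the pooling/output layers are illustrative assumptions and not the exact architecture of the original models.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=128, kernel_size=4):
    """One residual block: two 1D convolutions plus a skip connection.
    Filter count and kernel size are assumptions for illustration."""
    shortcut = x
    y = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    if shortcut.shape[-1] != filters:
        # Match channel dimensions of the skip path if necessary.
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])  # skip connection counteracting vanishing gradients
    return layers.Activation("relu")(y)

# Character-level input: integer-encoded domain of fixed length (253 is the maximum domain length).
inputs = layers.Input(shape=(253,), dtype="int32")
x = layers.Embedding(input_dim=41, output_dim=128)(inputs)  # vocabulary size is an assumption
x = residual_block(x, filters=128)
x = layers.GlobalMaxPooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary benign/DGA output for illustration
model = tf.keras.Model(inputs, outputs)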
B-ResNet, the proposed binary classifier, uses only one residual block with 128 filters per convolutional layer while M-ResNet, the multiclass classifier, is more complex and composed of eleven residual blocks with 256 filters. §.§.§ EXPLAIN The authors of EXPLAIN <cit.> proposed several variants of their feature-based and context-less DGA multiclass classifier. The best performing model is a one-vs.-rest variant of a RF that extracts 76 features for each domain name to be classified, which can be categorized into 51 linguistic, 19 statistical and 6 structural features. §.§ Data To train machine learning classifiers for DGA classification, domain names labeled with the DGA that generated them are widely available in OSINT feeds such as DGArchive <cit.>. Benign training data can either be obtained by monitoring real networks or generated artificially based on public top sites rankings such as Tranco <cit.>. The problem with artificial data is that it may not accurately reflect real network traffic and thus may introduce bias and lead to misleading results. Further, the domain names included in public top sites rankings are on the resolving side of the DNS traffic because they are registered. Since most DGA-generated domains are not registered, additional bias may be introduced when they are paired with registered benign domain names for training. Due to these reasons, several approaches (e.g., <cit.>) focus on the classification of non-resolving DNS traffic (NX-traffic). Moreover, the focus on NX-traffic offers a number of other advantages: First, NX-traffic is easier to monitor because its volume is an order of magnitude smaller than the volume of full DNS traffic. Monitoring NX-traffic still allows us to detect malware-infected machines before they are instructed to participate in malicious actions, as DGAs can usually be detected in NX-traffic long before they resolve a registered domain for their C2 server. Second, NXDs are less privacy-sensitive compared to resolving domain names, as they generally do not contain user-generated domains, with the exception of typo domains. Although, NXDs may still contain sensitive information about an organization as a whole, the classification of NX-traffic seems better suited to a Classification-as-a-Service (CaaS) setting. Finally, it has been shown that classifiers trained on NX-traffic are more robust against certain adversarial attacks compared to classifiers trained on resolving traffic <cit.>. In this work, we follow the suggestions of related works and focus on the classification of NX-traffic. In the following, we briefly describe our data sources. §.§.§ DGArchive We use the OSINT feed of DGArchive <cit.> to obtain DGA-labeled domains. At the time of writing the feed contains approximately 123 million unique samples generated by 106 different DGAs. §.§.§ University Network We extract benign-labeled domain names from traffic recordings of the central DNS resolver of the campus network of RWTH Aachen University. This network includes several academic and administrative networks, dormitory networks, and the network of the affiliated university hospital. We selected a one-month recording of NXDs from mid-October 2017 until mid-November 2017 containing approximately 35 million unique NXDs for our evaluation. We deliberately chose an older NX-traffic recording because in our study we also want to evaluate whether a classifier learns time-dependent artifacts of a specific network or whether it generalizes well to new environments and is time-robust. 
We filter all NXDs from this data source using DGArchive to remove potentially malicious domains. Although the data may still contain mislabeled samples, the only way to avoid this problem is to use artificial data which may not accurately reflect real network traffic and thus may introduce additional bias. §.§.§ Company Network A second source for benign-labeled data are recordings of several central DNS resolvers of Siemens AG. Data obtained from this source is very diverse as the DNS resolvers cover the regions of Asia, Europe, and the USA. From the company, we obtain a one-month recording of benign NXDs from April 2019 containing approximately 311 million unfiltered NXDs. Benign data from this source is only used for the final real-world evaluation study, which is free of experimental biases, to assess whether a classifier contains any biases with respect to the network data on which it was trained and whether a classifier is time-robust. We again filter all NXDs from this data source using DGArchive to clean the data as much as possible. §.§.§ Ethical Considerations Our institution does not yet have an ethics review board that could have approved this study. However, we ensured that we do not record or use any personally identifiable information (PII) or quasi-identifiers. When recording traffic from the university and company network, we only observe NX-traffic and store the queried domain names, omitting all other information including IP addresses that could be used as pseudonyms to correlate domain names queried by the same host. Thereby, we only obtain a list of domain names that occurred within the recording period, with no relation to users within the network. Additionally, we focus on NX-traffic because NXDs are less privacy-sensitive compared to resolving domain names, as they generally do not contain user-generated domains, with the exception of typo domains. Although the NXDs may still contain sensitive information about an organization as a whole (e.g., they could indicate possible business relationships between different companies), it is questionable to what extent and with what accuracy such information can be recovered, if at all possible. §.§ Preprocessing It is important to understand the applied domain name preprocessing as this step can introduce significant classification biases. The works (e.g., <cit.>) that operate on single NXDs for classification make the data used unique and filter all benign samples against OSINT feeds to remove potentially contained malicious domains before training and testing a classifier. Other than that, they do not apply any filtering to the benign-labeled data used, since it is captured from real-world networks. The argument for this decision is that this feeds the classifier with the queries that occur naturally in a network, and does not bias the classification performance in any direction since no filtering is applied. While the feature-based classifiers (e.g., <cit.>) start extracting predefined features from this data, the deep learning-based approaches (e.g., <cit.>) have to convert the domain names into a numerical representation in order to be able to feed them to a neural network. Most works (e.g., <cit.>) follow a similar approach, which mainly differs in the maximum acceptable length of a domain. First, all characters are converted to lowercase (which is an uncritical operation as the DNS operates case-insensitive) and every character is mapped to a unique integer. 
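A minimal sketch of this mapping is shown below, together with the left zero-padding discussed in the next paragraph; the character vocabulary, the index order, and the handling of unexpected characters are assumptions that differ between implementations.

# Minimal preprocessing sketch; vocabulary and index assignment are assumptions.
VALID_CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-._"
CHAR_TO_INT = {c: i + 1 for i, c in enumerate(VALID_CHARS)}  # 0 is reserved for padding

def encode_domain(domain: str, max_length: int = 253) -> list[int]:
    """Lowercase, map each character to an integer, and left-pad with zeros to a fixed length."""
    ids = [CHAR_TO_INT.get(c, 0) for c in domain.lower()][-max_length:]  # unknown chars -> 0 (simplification)
    return [0] * (max_length - len(ids)) + ids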
Additionally, the input is padded with zeros from the left side. The authors of the ResNet classifier <cit.> propose padding to the maximum domain length of 253 characters in order to be able to perform training and classification on every possible NXD while using batch learning. In this work, we follow these suggestions of related work on preprocessing. § EVALUATION OVERVIEW In this section, we describe our evaluation methodology, explain the decisions underlying the dataset generation process, and perform a result reproduction study of the classifiers from related work to verify our evaluation setup. §.§ Datasets & Methodology We create two disjoint datasets, one to train and test a set of state-of-the-art models (DSmod), and one to analyze different explainability methods and investigate biases (DSex). For each DGA in DGArchive, we randomly select 20,000 samples. If less than 20,000 samples are available per DGA, we select all samples. Then we split the samples for each DGA equally between the two datasets. For two DGAs, only five samples are available in the OSINT feed. We constrain that at least four samples are available for training classifiers within DSmod. Thus, for two DGAs (Dnsbenchmark and Randomloader), only one sample is contained in DSex.[We intentionally include underrepresented classes because the inclusion of a few training samples per class allows a classifier to detect various underrepresented DGAs with high probability that would otherwise be missed. At the same time, this does not affect a classifier's ability to recognize well-represented classes <cit.>.] Thereby, we are able to perform a four-fold cross validation stratified over all included classes using DSmod, resulting in four different classifiers being trained and tested. Finally, we select the same number of benign samples as we selected malicious samples, resulting in balanced datasets. In binary classification experiments, we use all benign samples and use the same label for all malicious domains, regardless of which DGA generated a domain. In multiclass classification experiments, we limit the amount of benign samples to 10,000 in order to have a more evenly distributed amount of samples between the various classes. Here we assign a separate label for each DGA. In total, DSmod and DSex each contain approximately 1.2 million domains derived from 107 different classes. We train all four classifiers in the four-fold cross validation with DSmod using early stopping with a patience of five epochs to avoid overfitting. These classifiers are then used to analyze different explainability methods and investigate biases using samples from DSex. This methodology allows us to conduct a study to reproduce the results of related work (using DSmod) as it replicates the classification setting used by the state of the art. In addition, we can evaluate four classifiers and 20 explainability methods on the same unseen data (DSex) and can assess whether the classifiers converge to similar local optima and whether the explainability methods provide stable results between different models. However, this methodology introduces spatial and temporal experimental biases <cit.>. Spatial bias arises from using an unrealistic ratio of benign to malicious samples in the test data. For the DGA detection use-case, most queried domains within a network are benign. This significant class imbalance can lead to base-rate fallacy <cit.> where evaluation metrics such as true-positive rate (TPR) and false-positive-rate (FPR) are misleading. 
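To illustrate the base-rate fallacy with purely hypothetical numbers: if only one in 1,000 NXDs observed in a network is DGA-generated and a classifier achieves a TPR of 0.999 at an FPR of 0.001, then among 1,001 queried NXDs one expects 0.999 true positives but also 1,000 · 0.001 = 1 false positive, i.e., a precision of 0.999/(0.999 + 1) ≈ 0.5. Despite seemingly excellent error rates, roughly every second alert would be false; real networks can be even more imbalanced than this assumed ratio.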
Temporal bias is introduced by temporally inconsistent evaluations which integrate future knowledge about testing samples into the training phase. In the state-of-the-art classification setting, temporal bias is introduced in two ways: First, four-fold cross validation does not ensure that all training samples are strictly temporally precedent to the testing ones. Second, the benign and malicious samples in the datasets are not from the same time window (one-month real-world benign data compared to several years of DGArchive data). Thus, we conduct an additional evaluation under real-world conditions where we mitigate all experimental biases in Section <ref>. To this end, we make use of our second source for real-world data, the company network. In this context, we also assess whether classifiers generalize between different networks and are time-robust. §.§ State-of-the-Art Results Reproduction Before conducting the actual explainability study, we reproduce the results of related work to validate our evaluation setup. We use the same evaluation metrics as in the original papers: accuracy (ACC), true-positive rate (TPR), and false-positive rate (FPR) for the binary experiments, and f1-score, precision, and recall (which is equal to TPR) for the multiclass experiments. As suggested in <cit.>, we use macro-averaging to calculate the overall evaluation metrics because the available samples vary widely per DGA class. This way we do not skew the overall score towards well-represented classes. We present the averaged results of the four-fold cross validation in Table <ref>. The upper part of the table shows the results of the binary evaluation, the lower part those of the multiclass evaluation. By comparing these results with the values reported in the original papers, we can confirm that we were able to reproduce the results, as we arrive at very similar values. The last row of the table shows the results for an adapted model of M-ResNet aimed at making it more explainable. Recently, Bohle et al. <cit.> proposed a so-called B-Cos transform which, when interchanged with linear transforms of neural networks, increases the networks' explainability by promoting the alignment of weight-input during training. The alignment pressure on the weights ensures that the model computations align with task-relevant features and therefore become explainable. Since interchanging the linear transforms of the ResNet model with B-Cos transforms could introduce a trade-off between classification performance and explanatory fidelity, we also evaluate this model using DSmod and present the results in the last row of Table <ref>. Indeed, this modification slightly sacrifices model performance in favor of a more explainable model compared to the M-ResNet baseline. § EXPLAINABILITY METHODS As a secondary contribution to the critical analysis of automatic feature extraction for DGA detection, we conduct a comparative evaluation of different explainability methods. In this section, we briefly introduce explainability techniques for machine learning and present the results of the comparative evaluation. The exhaustive evaluation can be found in Appendix <ref>. In general, explainability methods can be divided into two categories: white-box approaches, which are model-specific and use knowledge, e.g, about the internal architecture and model weights of a neural network, and black-box approaches that are model-agnostic. 
In this work, we focus on white-box approaches as they have been proven to produce better results compared to black-box approaches <cit.>. The general idea of white-box approaches to deriving local explanations for input samples is to compute the gradients from the output back to the input. Thereby, for an input sample , a neural network N, and a prediction , a relevance vector is derived which describes the relevance of each dimension of x for the predicted label y. Thus, in terms of context-less DGA classification, an explainability method determines the relevance of each character in the context of its position for the assignment of an individual domain name to a particular class. When evaluating the explainability methods, we focus on the explanations generated for the predictions of a multiclass classifier because, unlike a binary classifier, it has a variety of other prediction possibilities in addition to distinguishing between benign and malicious. In this work, we make use of the iNNvestigate library <cit.> which implements many explainability methods and provides a common interface to evaluate 19 white-box approaches including Layer-wise Relevance Propagation (LRP) <cit.> using 12 different rules. In addition, we also evaluate explanations generated by the recently proposed B-Cos network adjustment <cit.>. Similarly to Warnecke et al. <cit.>, we evaluate the explainability methods based on four metrics: fidelity, sparsity, stability, and efficiency. Since we only evaluate white-box methods that compute relevance vectors directly from the weights of a neural network, all explainability methods are complete in that they are able to compute non-degenerate explanations for every possible input. In contrast to <cit.>, we evaluate a total of 20 white-box explainability approaches (compared to the three evaluated by Warnecke et al.) and extend the fidelity and stability metrics to be more suitable for analyzing DGA classifiers. Based on the four metrics, we select the top five techniques (b-cos, deeptaylor, integratedgradients, lrp.alpha2beta1, and lrp.zplus) for our bias investigation study in the next section. § INTERPRETING THE EXPLANATIONS Having decided on explainability methods, we can now examine the reasoning of the deep learning classifiers. To this end, we use the classifiers trained during the four-fold cross validation on DSmod to predict all samples of DSex, and then use all selected explainability methods to compute explanations. Subsequently, for each method and class, we use DBSCAN <cit.> to cluster the relevance vectors and group similar explanations together. Finally, we manually review the clusters to identify potential features of the deep learning classifiers. For each domain name and relevance vector, we visualize the importance of each character through heatmaps. We encode positive contributions to the predicted label as green colors and negative contributions as red colors. An example of the clustering and visualization of the relevance vectors generated by lrp.zplus for the Banjori DGA is shown in Fig. <ref>.[ Note that relevance vectors are not direct characteristics of individual inputs, but rather of the model that processes those inputs. By clustering the relevance vectors, we can still find clusters similar to those in Fig. <ref>, but in this case it might be more appropriate to first compute clusters based on other features such as n-gram embeddings. 
However, it is unclear what other features should be used to calculate such clusters (which brings us back to manual feature engineering) since, e.g., n-gram embeddings would not be useful for hex-based DGAs. ] In the following we present our findings from this study. We use the explainability methods to identify potential biases and then conduct various experiments to quantify the impact on classification. While some of these biases may seem obvious and easily avoidable, they are present even in DGA detection approaches proposed at leading security conferences (e.g., <cit.>). Moreover, these biases are rooted on subtle flaws that are rife in security research and affect many other use-cases as well <cit.>. §.§ Revealing Biases In this work, we mainly focus on the classification biases between the benign and the malicious class since the most severe danger in misclassification is that DGA-domains are wrongly labeled as benign. If a certain proportion of samples is incorrectly assigned to a DGA by a multiclass classifier, this has less impact because the domains are still detected as malicious. The main incentive for an adversary would be to exploit biases to force a detection system to classify DGA-domains as benign, allowing communication with botnets. Therefore, we consider the threat model, which attempts to mask domains as if they were generated by another DGA, to be less reasonable. In total, we identified five biases present in current state-of-the-art classifiers that provide a false sense of security, as they can be easily exploited to evade detection.[While we analyzed the ResNet-based classifier in detail, we verified that the identified biases are also exploitable in the LSTM-based <cit.> and the CNN-based classifier <cit.>.] Moreover, biases inherent in a classifier can affect the classifier's ability to detect yet unknown DGAs. §.§.§ Length Bias Across all explainability methods and across many clusters, dots included in a domain name are often calculated as particularly important for the classification. We reckon that the dots themselves are not important in isolation, but that the deep learning classifiers infer the features of domain length and number of subdomains from it. To assess the importance of this feature, we conduct the following experiment: First, we chose the Qadars DGA as it generates domains of a fixed length and is correctly attributed by M-ResNet most of the time (f1-score of 0.99400). In detail, all domains generated by Qadars match the following regular expression (regex): , i.e., Qadars generates domains with a fixed length of 12, using only the characters a-z and 0-9, and finally adds a dot and one of four possible top-level domains (TLDs). Then, we adapt the reimplementation of Qadars[<https://github.com/baderj/domaingenerationalgorithms>] to generate domains of all possible lengths. Note that each domain name identifier can be a maximum of 63 characters long before it must be separated by a dot, and the full domain name can be a maximum of 253 characters long. For each possible length and for each known seed (six in total), we generate at most 100 different domains, resulting in a dataset size of around 147,000 unique samples. For each sample, we always fill in the highest level subdomain with characters before adding a dot. Finally, we feed the generated domains into the M-ResNet classifier and observe the percentage of classifications assigned to Qadars, any other DGA, and the benign class depending on the domain length. In Fig. 
<ref>, we display the results of this experiment. The percentage of classifications assigned to Qadars increases with domain length, peaking at the original domain length of 12, and then falls abruptly from there. As the domain length increases, the percentage increases slightly because the classifier has more information to derive the correct prediction. Most of the time, however, the classifier assigns the samples to different DGA classes. The percentage of benign classifications increases rapidly from the length of 69, 133, and 197. This is because at these lengths additional subdomains must be included to form a valid domain. The more dots, the more benign classifications. Sometimes even more than 50% of all classifications are assigned to the benign class. After the dots are inserted, the benign classifications decrease with increasing domain length as more information generated by the DGA is available for prediction. Investigating the sample length distribution of the classifiers' training set illustrates the problem that with increasing length, more domains are classified as benign. In Fig. <ref>, we display two box plots of the domain length distribution for the benign and malicious classes. The maximum domain length of a DGA-labeled sample within the training set is 59. Thus, it is very likely that a classifier learns to assign a sample to the benign class with greater probability if it exceeds 59 in length. Fortunately, this is not the only feature on which classification depends. Since the domain length depends on the number of dots/subdomains, we examine this bias below. §.§.§ Number of Dots/Subdomains Bias As seen in the previous section, the number of dots/subdomains has a significant impact on the classification. Looking at the number of dots contained in the training set separately for the benign and malicious classes, we can see that the benign class contains significantly more dots. The average number of dots is 7.12, the median is 5, and the maximum is 35. In comparison, the average for the malicious class is 1.08, the median is 1 and the maximum is 2. In fact, only 19 DGAs generate domains with more than one dot and only two DGAs (Beebone and Madmax) have dots past their effective second-level domain (e2LD). We refer to e2LD here because some DGAs use dynamic DNS services or public suffixes, which should not be counted as their generated second-level domain. §.§.§ www. Bias In connection to the number of dots/subdomains bias we observed during our manual review of the relevance vector clusters for the benign class, that over all explainability methods, clusters have formed which highlight the importance of the “www.” prefix. Examining the distribution of domains with the prefix “www.” within the training set, we find that the benign class contains 3,382 (0.00288%) samples, while the malicious class contains only 183 (0.00016%) samples. To assess the impact of this bias, we perform the following experiment: We take the four binary classifiers of the four-fold cross validation and all the malicious samples that the classifiers have correctly classified (true-positives). Then we prepend the “www.” prefix to all true-positives and reevaluate the models on these samples. On average over all folds, 434,916 (74.23%) out of 585,907 true-positives became false-negatives, while only 150,991 were still correctly classified. 
This shows that there is a huge bias regarding this prefix and malware authors could exploit this issue by simply prepending “www.” to their generated domains in order to evade detection of state-of-the-art classifiers. Although, only a small fraction of all samples have the “www.” prefix, it can introduce bias into classification if the feature is sufficiently discriminatory. §.§.§ Top-Level Domain Bias Through our study, across all explainability methods and across multiple classes, we encountered multiple occurrences of clusters that, in combination with other features, highly value the top-level domain (TLD) as a significant feature. To assess the impact of this feature, we make use of out-of-distribution (OOD) testing, as it was identified to be one of the most effective ways to reveal biases <cit.>. To this end, we perform a leave-one-group-out evaluation. In detail, similarly to the four-fold cross validation, we train a classifier for every fold on the respective fold's training data of DSmod, except that we omit all samples of a particular class. Then, we use the four trained classifiers to predict all samples of the left out class contained in DSex. As an example, we present the results obtained on the Mirai DGA leave-one-group-out evaluation. All samples generated by Mirai use one of these three TLDs: online, support, and tech. In each fold all Mirai samples that use the online and tech TLD are predicted to be malicious while all samples with the support TLD are labeled as benign. It seems that this is because the classifier tends to classify samples with never-seen TLDs into the benign class. Omitting all Mirai samples from training has the effect of removing all samples that use the support TLD from the entire training set. Although there appears to be enough information within the second-level domain to correctly assign a sample to the malicious class (as 100% of all online TLD samples are correctly assigned), the classifier is biased due to the unknown TLD to attribute the samples to the benign class. Similar pictures emerge also for a variety of other DGAs. Examination of the TLD distribution within the training set supports this statement. There are 413 distinct TLDs in the benign data, of which 274 are unique to benign samples. In comparison, there are only 258 different TLDs within the malicious labeled data, of which 115 are uniquely used by malicious samples. On the other hand, all samples with the tech TLD were also correctly labeled as malicious although this TLD was completely removed from the training data. Since all support TLD samples are misclassified and all samples use the same generation algorithm, it is unlikely that the information within the second-level domain was discriminatory enough for the tech TLD samples. Analyzing the calculated relevance vectors for these samples revealed that the classification is significantly influenced by the “ch” suffix of the tech TLD. Looking at the ch TLD distribution within the training data it becomes apparent why this is the case: there are 2063 ch TLDs within the malicious samples and only 51 within the benign samples. This bias investigation delivers two results: First, state-of-the-art classifiers heavily depend on the TLD, resulting in the fact that a malware author could simply change the TLD used to evade detection. Second, it might be useful to encode the TLD as a one-hot encoded vector before inputting it to a classifier since it is rather a categorical feature. 
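One possible form of such a categorical encoding is sketched below; the toy TLD vocabulary, the dedicated bucket for unseen TLDs, and the naive suffix splitting (which ignores multi-label public suffixes such as co.uk) are assumptions for illustration only.

def one_hot_tld(domain: str, known_tlds: list[str]) -> list[int]:
    """Encode the TLD as a one-hot vector with an extra 'unknown' slot for unseen TLDs."""
    tld = domain.lower().rstrip(".").rsplit(".", 1)[-1]  # naive split; ignores multi-label suffixes
    vec = [0] * (len(known_tlds) + 1)
    vec[known_tlds.index(tld) if tld in known_tlds else len(known_tlds)] = 1
    return vec

# Example with a toy vocabulary (assumption):
one_hot_tld("example.support", ["com", "net", "online", "tech"])  # -> [0, 0, 0, 0, 1]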
In the case of the Mirai evaluation, this was a stroke of luck for the defender site. However, since the TLD can be freely chosen, an attacker could exploit this knowledge to evade detection. §.§.§ Validity/Diversity Bias During our study, we encountered several large benign clusters that contain domains that are invalid and therefore would not resolve (e.g. due to an invalid or missing TLD). In fact, 7.64% of all benign samples within the training set are invalid, while all malicious samples are valid. An attacker has no incentive in generating invalid samples, as they would be useless for establishing connections between bots and their C2 server. Thus, a classifier most likely learns the shortcut to distinguish domains based on their validity. Although this is not a true bias, since invalid domains cannot be resolved and therefore assigned to the benign class, it does have an impact on the reported FPR of state-of-the-art classifiers as invalid samples are probably easier to classify. While there is nothing wrong in calculating the FPR for the detection system which pre-filters invalid domains to the benign class, here the classifiers real true-negative rate (TNR) is artificially inflated. Furthermore, including invalid samples in the training sets carries the additional risk of the classifier focusing on useless information and prevents the classifier from learning more complex features that might be useful in separating valid benign samples from malicious ones. In addition, we found several benign clusters specific to the network in which the data was collected (e.g., domains including the official e2LD of the university). Training and evaluating classifiers on this data could lead to misleadingly high results, as the classifiers may have only learned to separate network-specific domains from malicious ones, but they do not generalize between different networks. § MITIGATING BIASES Now that we have identified several biases, we present strategies to mitigate them. In addition, in various experiments, we measure the cost in terms of loss in classification performance for avoiding biases, since biases are nothing more than features that appear in the training data. For instance, biases such as the TLD are perfectly valid signals for the classifier to learn based on the underlying data distribution, since such features can be used to some extent to distinguish between benign and malicious samples. However, this is not desirable for features that can be easily modified by an attacker, as they can be exploited (e.g. by exchanging the TLD) to evade detection. Finally, in a real-world study, we measure the true classification performance of DGA classifiers that are free of the identified biases, and evaluate whether a classifier generalizes to different networks and is time-robust. In other words, here we evaluate whether a classifier is free from biases that might be introduced by artifacts in specific networks and at certain times. §.§ Mitigation Strategies In the following, we address the individual biases and suggest how to mitigate them. §.§.§ Number of Dots/Subdomains, www., and TLD Biases As demonstrated in the previous section, these biases can be easily exploited by an attacker to evade detection. Adding the “www.” prefix to malicious domains converted around 75% of true-positives into false-negatives, while selecting a TLD that was never seen by a classifier during training allows for complete bypass of detection. 
Since the botmaster's authority over a domain starts with the e2LD and all other subdomains as well as the TLD can be freely selected, we suggest to perform the classification exclusively on the e2LD and to omit all other information. Note that this does not open up any new attack vector, but may remove valuable features that could be used for classification, resulting in a decrease in overall classification performance. Hence, in Section <ref>, we measure the trade-off between bias-reduced classification and performance. §.§.§ Validity/Diversity Bias Since invalid samples can be pre-filtered and assigned to the benign class, we choose to only train a classifier on valid domains, allowing the classifier to focus on task-relevant features. As a result, the FPR of the classifier reported by us is likely to be larger than that reported by related work, since the classifier does not encounter easily classifiable invalid samples during testing. Further, to mitigate the problem that a classifier only learns to separate network-specific domains from malicious ones, we focus on diverse data by training on unique e2LDs. In doing so, we aim to train classifiers that generalize well between different networks. Focusing solely on unique e2LDs has the effect that the underlying sample distribution changes fundamentally. Training using this data will again increase the classifier's FPR since a e2LD occurs only once, either in the training or test set. In contrast, in the state-of-the-art classification setting, a large proportion of unique domains with the same e2LD occur, which may be network-specific, such as domains that contain the university's official e2LD. Once the classifier learns of a benign e2LD, samples with the same e2LD can be easily assigned to the benign class. §.§.§ Length Bias Focusing exclusively on valid and diverse e2LD already significantly equalizes the length distribution between benign and malicious samples and almost mitigates the bias. In Fig. <ref>, we show two box plots of the unique and valid e2LD length distributions for the benign class and malicious samples. In comparison to the sample length distributions in the state-of-the-art classification setting (cf. Fig. <ref>), the e2LD length distributions are much more similar. Unfortunately, thereby the length bias cannot be fully mitigated. The classifier will probably still tend to classify longer samples towards the benign class. However, as we saw during the length bias experiment, longer samples contain more information that helps the classifier make the correct decision. Thus, for an adversary, increasing the domain length is more of a trade-off between exploiting length bias and providing too much information to the classifier. Note, reducing the domain length of input samples to mitigate this bias is not a viable option, as this opens up a new attack vector where an attacker can hide features that would have sorted a domain into the malicious class. On the other hand, it is possible to generate additional artificial domains by adapting publicly available reimplementations of DGAs (similar to the length bias experiment) to balance the length distributions and thus mitigate the bias completely. However, this may require oversampling of benign data and care must be taken to ensure that this does not affect classification performance on clean data. Since the focus on valid and diverse e2LD almost evens out the distributions, we decided against it. 
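One way to implement the e2LD reduction described above is via the public suffix list, for example through the tldextract package; treating "has a known public suffix" as the validity test is our simplification for this sketch and not necessarily the exact rule applied in the evaluation.

import tldextract

def to_e2ld(fqdn):
    ext = tldextract.extract(fqdn.lower().strip("."))
    if not ext.suffix or not ext.domain:   # no known public suffix -> treat as invalid
        return None
    return ext.domain                      # e2LD only: subdomains and TLD are dropped

def valid_diverse_e2lds(fqdns):
    """Map FQDNs to e2LDs, drop invalid ones, and keep one sample per unique e2LD."""
    seen, out = set(), []
    for fqdn in fqdns:
        e2ld = to_e2ld(fqdn)
        if e2ld is not None and e2ld not in seen:
            seen.add(e2ld)
            out.append(e2ld)
    return out

# valid_diverse_e2lds(["www.example.co.uk", "a.b.example.co.uk", "nxd12345"]) -> ["example"]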
§.§ Bias Mitigation Experiments In the following, we measure the cost in terms of loss in classification performance for avoiding biases. We expect classification performance to deteriorate because biases are nothing more than features based on the underlying distribution of the training data. All experiments are similar to the four-fold cross validation performed in Section <ref>, except that here we focus on diverse data. To this end, we first map all fully qualified domain names (FQDNs) to their e2LDs. We then randomly sample the e2LDs and then select exactly one sample per unique e2LD for each evaluation scenario. For binary and multiclass classification, we examine four scenarios each: classification on valid and diverse FQDNs, on FQDNs without TLDs (no TLDs), on FQDNs without subdomains (e2LDs + TLDs), and exclusively on e2LDs. In the upper part of Table <ref>, we present the results for the binary setting while the lower part of the table displays the results for the multiclass setting. For convenience we also show the performance of the classifiers in the state-of-the-art classification setting from Section <ref>. As suspected, when only valid and diverse samples are used, the performance of the binary classifier is significantly worse, especially with respect to the FPR. Removing the TLDs from the FQDNs has less of an impact on performance than removing all subdomains after the e2LD. However, in both scenarios the loss in performance is tremendous, increasing the FPR to about 7.1% - 7.6%. Classification solely on the e2LD delivers the worst results reaching a 89.1% TPR @ 10.5% FPR for the decision threshold of 0.5. Examining the individual TPRs for each DGA, we find that the rate drops significantly for some DGAs, while for others it remains high, even reaching 100%. Although the average TPR drops significantly compared to the state-of-the-art setting, we expect that most DGAs could still be detected as they query multiple domains before finally resolving a registered domain. Provided that a decision is not made on the basis of a single query. Only the DGAs Redyms and Ud3 would be completely missed as for these DGAs the TPRs are zero over all four folds. In the multiclass setting, classification performance is not affected as much when trained on valid and diverse FQDNs. This is because focusing on these samples mainly affects the benign class and a few DGA classes that have a small sample size and generate FQDNs that map to the same e2LD (e.g., they generate domains with the same e2LD but with different TLDs). However, most DGAs are not affected by this. In contrast to the binary setting, here the TLDs are more relevant for classification than the subdomains after the e2LD. If only the e2LDs are used for classification, the performance deteriorates drastically (mainly because of the missing TLDs). Removing all subdomains after the e2LD affects only two DGAs: Beebone and Madmax. However, when the subdomains are removed, there is still enough information in their domain names to classify them correctly most of the time. Beebone's f1-score drops slightly from 97.7% to 95.7%, and Madmax's from 74.9% to 60.2%. In summary, the TLD is vital for the multiclass classification. In the binary setting, classifying exclusively e2LD is as bias-free as possible but the achieved performance does not seem to be acceptable. 
However, the effective TPR@FPR operation point of a detection system that pre-filters invalid samples and classifies all input samples regardless of the uniqueness of their e2LD can still be acceptable. In the next section, we get to the bottom of this question. §.§ Real-World Study In this section, we perform a real-world study to assess the true performance of bias-reduced DGA binary classification. In this context, we evaluate whether the classifiers generalize between different networks and are time-robust. Simultaneously, we enforce that the evaluation is free of experimental biases. In the following, we refer to classifiers that mitigate the identified biases as bias-reduced classifiers. To this end, we train a classifier using the real-world benign e2LDs from the university network recorded from mid-October 2017 to mid-November 2017, as well as DGArchive data that was available until the end of the recording period. In detail, DGArchive contains approximately 53 million unique domains generated by 85 different DGAs up to this point in time. Training a classifier using a dataset which is similar to DSmod, but with the constraint that the malicious samples are from the same time window as the benign samples, mitigates one of the two experimental temporal biases included in the state-of-the-art classification setting. To mitigate the second experimental temporal bias, that requires that all training samples are strictly temporally precedent to the testing ones, we evaluate the classifier on approximately 311 million benign e2LDs captured in the company network in April 2019 (cf. Section <ref>) and DGA-domains from DGArchive that were generated by DGAs in April 2019. Within April 2019, 46 DGAs (four of which were unknown at the time of the training) generated approximately 1.2 million domains. In this way, we eliminate the experimental temporal biases, and can guarantee that the benign samples come from different networks and that the time interval between the occurrence of the training and the test samples is about 17 months. To eliminate the experimental spatial bias, it is required to approximate the true ratio of benign to malicious samples in the test data. Since the true sample distribution is unknown, we conduct two experiments to estimate the true detection performance of bias-reduced DGA binary classification. First, we evaluate the classifier using all 311 million benign e2LDs and gradually increase the amount of included malicious test samples generated in April 2019 from 1% to 100% for each DGA. Thereby, the ratios between the domains generated by the different DGAs follow the true distribution. In the following, we report the obtained results of the classifier that first checks whether a sample is invalid. If it is invalid, the sample is ignored. Otherwise, it is evaluated by the classifier. In Fig. <ref>, we display the TPRs for fixed FPRs between [0.001,0.008] for the bias-reduced classifier depending on the contamination of the test set (i.e., the relative amount of included malicious test samples from April 2019). The achieved TPRs are nearly stable for all fixed FPRs, showing that no base-rate fallacy is measurable within these ratios of benign to malicious samples. We argue this is because the benign data heavily overshadows the malicious data even when we include 100% of all DGA-domains from April 2019. 
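The fixed-FPR operating points reported here can be read off a ROC curve as in the following generic scikit-learn sketch; it is not the evaluation code used in the study.

import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fixed_fprs(y_true, scores, target_fprs=(0.001, 0.002, 0.004, 0.008)):
    fpr, tpr, _ = roc_curve(y_true, scores)
    # roc_curve returns FPR in increasing order, so linear interpolation is safe
    return {target: float(np.interp(target, fpr, tpr)) for target in target_fprs}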
In this experiment, the relative percentage of malicious samples varies between 0.00362% and 0.35998%, which means that in the worst case, 99.64002% of the test data is still from the benign class. As it is unclear, how many DGAs are present in a real-world network, we additional conduct a second experiment to estimate the worst-case classification performance. Here, for each DGA, we evaluate the classifier using all malicious samples generated in April 2019 of that particular DGA and all 311 million benign e2LDs. In total, we thus evaluate the classifier using 46 test sets, since there are 46 DGAs that generate at least one domain in April 2019. On average the bias-reduced classifier achieves a TPR of 0.85735 at a FPR of 0.00506 for the decision threshold of 0.5. In Fig. <ref>, we display the receiver operating characteristic (ROC) curve averaged over all evaluation runs for the FPR range of [0,0.01]. In addition, we also show the ROC curves for the best-detected DGA (Dyre) and the worst-detected DGA (Nymaim2). We argue that the classifier is remarkable time-robust and generalizes well to different networks. The temporal and spatial changes in data distribution have increased the FPR compared to the state-of-the-art setting at the decision threshold of 0.5. However, this was to be expected as the distribution of benign samples naturally varies between networks, at least to some degree. Moreover, the classifier is able to achieve a slightly lower TPR as the bias-reduced e2LD classifiers from the previous section. Surprisingly, for three of the four DGAs that were unknown at the time of training (Ccleaner, Tinynuke, Wd), the bias-reduced classifier is able to correctly classify 100% of all generated samples. Only the Nymaim2 DGA is detected worse with a TPR of 14.84%, which is the main reason for the slightly lower average TPR compared to the bias-reduced e2LD classifiers from the previous section.[ We additionally evaluated the four e2LD classifiers from the previous section against the 311 million benign NXDs and all DGA-domains from DSex (which are completely disjoint with the training samples) to evaluate the performance using all 106 known DGAs. Thereby, we arrive at very similar results. We present the corresponding ROC curves in Appendix <ref>. Note that this of course reintroduces experimental temporal bias. ] At a fixed FPR of 0.008 the bias-reduced classifier achieves a TPR of about 89%. In practice, it might be advantageous to set the threshold to a lower fixed FPR value. Setting the FPR at 0.001 to 0.002 would still allow an approximate detection rate of about 67% to 78%. However, how useful this is depends on what is done with the classification results. Context-less DGA detection was never intended for single-domain based decision-making. This evaluation assessed the true performance of bias-reduced DGA classifiers and demonstrated the limits of what is possible without contextual information. § BIAS-REDUCED DGA CLASSIFICATION In this section, we use the insights gained from the bias mitigation and the real-world study to propose a classification system that (1) is as bias-free as possible and (2) does not miss entire DGA families. Further, we propose an approach to improve visualization support to increase trust in and transparency of detection methods and facilitate decision-making. §.§ Bias-reduced DGA Classification System As previous evaluations have shown, bias can be easily exploited to evade detection. Focusing exclusively on e2LD helps mitigate most identified biases. 
However, this causes the classifier to lose the ability to recognize specific DGA families as a whole. In the case of multiclass classification, we have seen that the classification relies heavily on information outside of the e2LD to correctly assign domains of multiple classes. In the following, we present a detection system that counteracts these issues. In Fig. <ref>, we visualize the system's architecture. In the first step, the detection system evaluates whether the entered NXD is invalid or not. If it is invalid, it is ignored, otherwise the input sample is passed to the binary classification step. Here, two classifiers work in parallel: a bias-reduced classifier that classifies the e2LD of the input sample, and a full classifier that uses the FQDN. This classification step can lead to four possible outcomes: First, both classifiers agree on the benign label, so the detection system also outputs benign. Second, the bias-reduced classifier outputs malicious while the full classifier predicts benign. This is an indication that an attacker might try to exploit biases to evade detection. Third, the bias-reduced classifier predicts benign and the full classifier malicious. This suggests that the features outside the e2LD may be indispensable to detect the DGAs that the bias-reduced classifier would miss. And fourth, both classifiers agree on the malicious label indicating that the input sample is very likely DGA-generated. Regardless of the results, the input sample can be passed to a multiclass classifier trained on FQDNs to associate the sample with the DGA that most likely generated it. Finally, we propose to pass the input sample associated with the classification results to a visualization system to understand the classifier's reasoning and to support the decision-making process. Using this detection system, we achieve bias-reduced DGA detection and do not miss entire DGA families. §.§ Visualization Support The proposed detection system gets the most out of context-less and bias-reduced DGA classification. In order to facilitate decision-making and to better understand the reasoning of a classifier we propose a visualization system. In this work, we demonstrated the limits of context-less classification and showed that decision-making based on the classification result of a single query is practically insufficient. To make a decision based on multiple classification results, the minimum information required is the mapping between the host and the queried domains. While this information may not be available to a CaaS provider, the network operator that uses the service most likely has this knowledge. In the following, we only use this additional knowledge to facilitate the work of SOC analysts. Fig. <ref> shows the different views of the proposed visualization system based on mock data. Two main view groups summarize the classification results: the global and the local views. Both contain the queried domain names, in which the relevance of each character to the prediction is highlighted using a heatmap. In this example, we used integratedgradients to compute the relevance vectors for the predictions of the multiclass model. However, any other explainability method can be chosen. In addition, we display the total amount of times the domain was queried as well as the classification results from the bias-reduced, full binary, and multiclass classifier. 
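The character-level heatmaps in these views can be rendered in many ways; the matplotlib sketch below is only meant to illustrate the idea, since the actual system is a web dashboard and the plotting details here are our own choices.

import numpy as np
import matplotlib.pyplot as plt

def plot_char_relevance(domain, relevance, ax=None):
    """Color each character of a domain by its relevance to the prediction."""
    rel = np.asarray(relevance, dtype=float).reshape(1, -1)
    ax = ax or plt.gca()
    bound = np.abs(rel).max() or 1.0
    ax.imshow(rel, cmap="coolwarm", aspect="auto", vmin=-bound, vmax=bound)
    ax.set_xticks(range(len(domain)))
    ax.set_xticklabels(list(domain))
    ax.set_yticks([])
    return ax

# plot_char_relevance("wfkxpzbd.online", np.random.uniform(-1, 1, 15)); plt.show()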
The global view summarizes all classification results for the entire network and allows finding multiple hosts infected with the same malware. The local view summarizes the results for a single host and allows targeted analysis of all queries performed by that host. Local views can be accessed through the Recent Classification Results by Client view, which displays the total and relative number of domains classified as benign or malicious per host. From both, the global and the local view, it is possible to analyze how often and which hosts queried a particular domain. Additionally, for each domain, it is possible to analyze the clusters in which the relevance vector falls and to extract a simple regex that fits all samples within the cluster. In this way, it may be possible to identify multiple hosts infected with the same malware. § ADDITIONAL UTILIZATION OF THE KNOWLEDGE GAINED As a secondary contribution, we use the knowledge gained in the previous evaluations to improve the state-of-the-art deep learning and feature-based multiclass classifiers in terms of classification performance and efficiency. In this section, we therefore take a step back from improving the generalization of classifiers by removing classification biases and briefly turn our attention to improving the performance and efficiency of the classifiers themselves. §.§ Improving M-ResNet In this work, we mainly improved the binary classifier B-ResNet by mitigating identified biases. Now we also take a closer look at the multiclass classifier M-ResNet. In Section <ref>, we noted that the classifier does not use the TLD as a standalone feature, but also derives additional features from the character distribution. Since the TLD can be freely chosen by the adversary and the TLD is more of a categorical feature, we adapt the M-ResNet model to classify a domain by using the one-hot encoded vector representation of the TLD instead of the character-wise encoding. Thereby, we aim to improve classification performance by allowing the classifier to focus on the more important part of the FQDN. Furthermore, this has the effect that other implicit features, such as domain length, are no longer affected by the chosen TLD. We evaluated this model using a four-fold cross validation on DSmod but could not measure any significant improvement. As could be seen in the relevance vector cluster analysis, the original model appears to have a large enough capacity to learn the correct extraction of the TLD from the characters. Furthermore, the characters within the TLD do not appear to significantly affect the multiclass classifier. Since overparameterization has been associated with a higher susceptibility to learning spurious correlations <cit.>, we attempt to iteratively reduce the complexity of the adapted model. As a result, we were able to successfully remove the last four residual blocks and reduce the number of trainable parameters by 35.5% without affecting classification performance (f1-score of 0.78691). Thereby, we additionally improved the model's carbon footprint and reduced the required time for training and inference. §.§ Improving EXPLAIN Now we try to improve the feature-based multiclass classifier EXPLAIN by using knowledge extracted by explainability methods applied on M-ResNet. To this end, we cluster relevance vectors for samples which are correctly classified by M-ResNet but incorrectly by EXPLAIN, targeting the identification of features that are missing in EXPLAIN. 
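A sketch of this disagreement analysis is shown below; k-means is used purely for illustration, and the array layout, names, and number of clusters are assumptions rather than details taken from the study.

import numpy as np
from sklearn.cluster import KMeans

def cluster_disagreements(y_true, pred_resnet, pred_explain, relevance_vectors, n_clusters=10):
    """Cluster relevance vectors of samples that M-ResNet classifies correctly but EXPLAIN does not."""
    y_true, pred_resnet, pred_explain = map(np.asarray, (y_true, pred_resnet, pred_explain))
    mask = (pred_resnet == y_true) & (pred_explain != y_true)
    vectors = np.asarray(relevance_vectors)[mask]
    if len(vectors) < n_clusters:        # too few disagreement samples to cluster
        return mask, None
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    return mask, labels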
We attribute the performance difference between both classifiers to four findings: (1) ResNet seems to handle imbalanced data and class weighting better, (2) for some DGAs, M-ResNet is simply better at guessing, (3) M-ResNet is able to learn complex features through a series of non-linear transformations that are not easily understood by a human, and (4) both classifier converge to different local optima and thus tend to assign similar samples to either one or the other class. §.§.§ Imbalanced Data Investigating the relevance vector clusters for the Redyms DGA, it is immediately apparent that for M-ResNet, the “-” character is useful for the correct classification. Although, the feature that counts the “-” character is defined in EXPLAIN's source code, it was not selected during the feature selection process. We reckon, that this is because the feature is only important for a few classes but other features are important for a much higher number of classes which resulted in lower importance score during the feature selection process. This problem could be the reason why several classes are recognized worse by EXPLAIN, and suggest that M-ResNet might be better with imbalanced data and class weighting in general. In contrast to EXPLAIN's feature selection step, we assume that M-ResNet does not completely remove self-learned features, but fine-tunes the importance by adjusting the weights. Adding the “-”-feature to EXPLAINs feature set improves the f1-score for the Redyms DGA by 53.15% and brings the detection rate to a level similar to that of M-ResNet. §.§.§ Random Guessing EXPLAIN mostly confuses the samples of Ud4 with Dmsniff. Analysis of all samples from both classes revealed that both DGAs generate 100% identical domains, so they are most likely the same DGA. Upon inquiry to DGArchive this was confirmed and in the future the feed of Ud4 will be discontinued. Here, M-ResNet is just better at guessing (by an f1-score of 16.48%). §.§.§ Complex Features We cannot exclude the possibility that M-ResNet is able to learn complex features through a series of non-linear transformations that are not easily understood by a human. For instance, related work <cit.> suggests that the ResNet classifier may be able to distinguish, at least to some degree, between underlying pseudo-random number generators. To improve EXPLAIN, we adapt the features related to randomness tests and add all of them to the final feature set. In detail, we adapt the 14 randomness tests from  <cit.> to include the final p-values used for the decision of whether a certain randomness test is passed instead of only the result of the test. Reevaluating the model with all additional features, we could measure a small improvement of 0.783% in f1-score. §.§.§ Different Optima Most other DGAs that are confused by EXPLAIN generate similar domains, and often all domains match the same regexes. EXPLAIN is significantly better (> 10% in f1-score) than M-ResNet in four DGAs, whereas M-ResNet is also significantly better in four other DGAs. We reckon that both models converge to different local optima and thus tend to assign similar samples to either one or the other class. §.§.§ Overall Results We were able to improve EXPLAIN from an f1-score of 0.76733 to 0.77516 by adding additional features to EXPLAINs feature set, bringing it closer to the performance of deep learning classifiers such as M-ResNet. § OTHER RELATED WORK We already discussed related work on DGA detection in Section <ref>. 
Consequently, we focus here on related work on explainability and bias learning prevention. For the DGA detection use-case, there are only a few works that partially address the explainability of detection systems. Drichel et al. <cit.> proposed the multiclass classifier EXPLAIN as a feature-based alternative to deep learning-based classifiers. While feature-based approaches often seem inherently explainable, it is often not easy to interpret their predictions. For instance, EXPLAIN's predictions are based on the majority vote of 360 decision trees with a maximum depth of 43 and a random mixture of 76 features that include several statistical features that are difficult for a human to analyze. The authors of <cit.> also adopt a feature-based RF classifier based on the EXPOSURE system <cit.> and mainly use SHAP <cit.> to derive explanations. However, their approach relies heavily on extensive tracking of DNS traffic and is unable to derive explanations in the multiclass classification setting. None of these works investigate biases inherent in detection methods. To the best of our knowledge, this is the first work to critically analyze the features used, focusing on their limitations and unintended consequences for the DGA use-case. In addition, related work <cit.> has identified several general measures to mitigate bias learning that can also be applied here. Changing the loss function <cit.> and adding regularization terms <cit.> can force a classifier to learn more complex features instead of focusing on simple biases. Also, the learning rate of the optimizer can be adjusted to make the classifier learn either simpler or more complex features <cit.>. Somewhat related is the issue of adversarial attacks and the robustness of classifiers. Here, semantic gaps in the data create blind spots in classifiers which make them susceptible to small input perturbations that lead to misclassifications. Adversarial training can be used to prevent such classification shortcuts <cit.>. In context of DGA detection, several works deal with this topic . § CONCLUSION In this work, we showed how XAI methods can be used to debug, improve understanding, and enhance state-of-the-art DGA classifiers. To this end, we performed a comparative evaluation of different explainability methods and used the best ones to explain the predictions of the deep learning classifiers. Thereby, we identified biases present in state-of-the-art classifiers that can be easily exploited by an adversary to bypass detection. To solve these issues we proposed a bias-reduced classification system that mitigates the biases, achieves state-of-the-art detection performance, generalizes well between different networks, and is time-robust. In this context, we measured the true performance of state-of-the-art DGA classifiers, showed the limits of context-less DGA binary classification, and proposed a visualization system that facilitates decision-making and helps to understand the reasoning of deep learning classifiers. Finally, we used the knowledge gained from our study to improve the state-of-the-art deep learning as well as feature-based approaches for DGA multiclass classification. In future work, the usefulness of the visualization system needs to be evaluated, preferably in an operational environment. A promising future research direction is the combination of context-less and context-aware systems to further enhance detection and decision-making. 
§ AVAILABILITY We make the source code of the machine learning models publicly available[<https://gitlab.com/rwth-itsec/explainability-analyzed-dga-models>] to encourage replication studies and facilitate future work. The authors would like to thank Daniel Plohmann, Simon Ofner, and the Cyber Analysis & Defense department of Fraunhofer FKIE for granting us access to DGArchive as well as Siemens AG and Jens Hektor from the IT Center of RWTH Aachen University for providing NXD data. ACM-Reference-Format § EVALUATING EXPLAINABILITY METHODS We evaluate the explainability methods using four metrics: fidelity, sparsity, stability, and efficiency following <cit.>. Since we only evaluate white-box methods that compute relevance vectors directly from the weights of a neural network, all explainability methods are complete in that they are able to compute non-degenerate explanations for every possible input. To evaluate the explainability methods we use the four classifiers trained on DSmod during our results reproduction study and predict all samples from DSex. For each metric, we average the results across all classifiers. §.§ Fidelity The first evaluation criterion is fidelity, which measures how faithfully important features contribute to a particular prediction. We adopt the Descriptive Accuracy (DA) metric from <cit.>, which measures for a given input sample x how removing the k-most relevant features change the original neural network's prediction . The idea behind this metric is that as relevant features are removed, accuracy should decrease as the classifier has less information to make the correct prediction. The better an explanation, the faster the accuracy decreases as the removed features capture more context of the predictions. Thus, explainability methods that show a more rapid decline in DA when removing key features provide better explanations than explainability methods with a more gradual decrease. In context-less DGA classification, removing an input feature corresponds to removing a character from a domain. Here, we consider two scenarios: (1) removing a character and thus reducing the total domain length, and (2) replacing a character with the padding symbol and thereby retaining the original domain length. Both approaches have drawbacks: removing a character can have a greater impact on accuracy because it also affects the implicit feature of domain length. On the other hand, preserving the domain length by replacing the character with the padding symbol may confuse a classifier, as the classifier was never faced with such samples during training. Hence, we calculate the average DA for both scenarios and on all samples of DSex for k∈[1,10]. To derive a single score, we compute the Area Under the Curve (AUC). The smaller the score, the better the explanations. Results: In Table <ref>, we show the results for this criterion. For further evaluation we choose integratedgradients as it scores best when removing the top k-features and b-cos as it achieves the best score in the second scenario. In addition, we also select lrp.zplus since it obtains the best scores when replacing features on the unmodified M-ResNet model. §.§ Sparsity An explanation is only meaningful if only a limited number of features are selected as the explanation result to make it understandable for a human analyst. To measure the sparsity of an explanation, we follow the Mass Around Zero (MAZ) criterion proposed in <cit.>. 
First, for every sample, we calculate the relevance vector r = (r_0,...,r_n), normalize the absolute entries of r to the range [0,1], and fit it to a half-normalized histogram h. Then, we calculate the MAZ by . Finally, we compute the AUC to derive a single score. Sparse explanations have a steep increase in MAZ around zero and are flat around one because only few features are marked as relevant. Conversely, explanations with many relevant features have a smaller slope close to zero. Therefore, the higher the AUC score, the sparse the explanations. Results: In the third column of Table <ref>, we show the results for this criterion. We select lrp.alpha2beta1 for further evaluation as it shows the best sparsity for explanations. However, high sparsity is only useful if the most relevant features are correctly determined. Therefore, we also investigate Sparsity * (1-Fidelity) and display the results in the fourth column. Depending on the fidelity, integratedgradients shows the most sparse explanations. §.§ Stability An explainability method is stable if it provides the same explanation for a given input over multiple runs. Since we only evaluate white-box approaches which calculate the relevance vector deterministically, all methods are stable. However, here we still want to evaluate the stability of the explainability methods over different model weights, i.e., whether the explainability methods calculate similar explanations via different model weights. Assuming that all models converge to similar local optima, it is conceivable that they learn the same features that are similarly relevant to predictions of specific classes. Note that this need not be the case as there may be multiple highly predictive features for a single class. However, we believe this is an important criterion, as it is beneficial when deriving explanations in an operational environment that the security analyst is presented with similar explanations for the same classes after a model update, e.g., after the inclusion of a newly emerged malware family, as before the model update. Otherwise, the new explanations would confuse rather than help the analyst. The standard deviation of the f1-score across the four folds is low at 0.00552, which may indicate that the classifiers are converging to similar local optima. To evaluate this criterion, we first compute the average of the standard deviation values (std) for each entry of a relevance vector across all folds for all domains. Then, we average these values to derive a single score, with smaller values corresponding to more similar explanations across different model weights. Results: The fifth column of Table <ref> shows the results for this criterion. The two methods which achieve the best results by far are deeptaylor and lrp.zplus. Both methods also achieve high fidelity scores (deeptaylor is second best in the feature remove setting and lrp.zplus is best on the unmodified M-ResNet model in the feature replace setting), which may indicate that the models learn the same most predictive features for the same classes. On the other hand, integratedgradients achieves the best fidelity score in the feature remove setting and only performs moderately well in terms of stability. This could be due to the fact, that in contrast to the other two methods, integratedgradients shows a significantly higher sparsity, which could indicate that there may be multiple highly predictive feature combinations for the same classes. 
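For completeness, the stability score described above can be computed directly from the stacked relevance vectors; the array layout below (folds × domains × positions) is an assumption for this sketch.

import numpy as np

def stability_score(relevance_by_fold):
    """relevance_by_fold: array of shape (n_folds, n_domains, n_positions)."""
    rel = np.asarray(relevance_by_fold, dtype=float)
    per_entry_std = rel.std(axis=0)      # std across folds for every domain position
    return float(per_entry_std.mean())   # lower = more similar explanations across model weights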
We add deeptaylor to the list of methods to be evaluated further. However, the results of this criterion should be treated with caution, as they depend heavily on what a model has learned. Since we use the same models for all explainability methods, this criterion still allows us to compare explainability methods in terms of whether they provide similar explanations through different model weights. §.§ Efficiency We follow the definition of efficiency in <cit.>, which states that a method is efficient if it does not delay the typical workflow of an expert. To evaluate this criterion, we measured and averaged the times to compute the explanations during the previous experiments. Results: In the last column of Table <ref> we display the average time in seconds for computing a single explanation for a prediction. All methods are sufficiently fast that we do not select any method based on this criterion. B-cos, integratedgradients, and smoothgrad are around on order of magnitude slower than the other approaches. For B-cos this is the case as the current implementation does not support batch calculations to derive explanations. For integratedgradients and smoothgrad this is because we had to reduce the batch size of 2,000 samples to 200 due to higher RAM requirements of the algorithms. Nevertheless, even without batch calculations all methods are sufficiently fast and would not delay the workflow of an expert. §.§ Comparison of Explainability Methods We briefly document our findings of using different explainability methods during our evaluations: While lrp.alpha2beta1 often provides very sparse explanations, it occasionally seems to fail, sometimes just flagging features that argue against the prediction even though the classifier is very confident. We cannot justify the loss of performance caused by the required adjustment to the state-of-the-art M-ResNet model for the explanations generated by b-cos, since the explanations are not significantly different from the other methods. The three best performing explainability methods through our study are deeptaylor, integratedgradients, and lrp.zplus. All three can be used to explain the predictions of deep learning classifiers for the DGA classification use-case. However, integratedgradients seems to provide sparser explanations compared to the other two methods. § ADDITIONAL ROC CURVES OF THE REAL-WORLD STUDY
http://arxiv.org/abs/2307.05974v1
20230712074252
Contrastive Learning for Conversion Rate Prediction
[ "Wentao Ouyang", "Rui Dong", "Xiuwu Zhang", "Chaofeng Guo", "Jinmei Luo", "Xiangzheng Liu", "Yanlong Du" ]
cs.IR
[ "cs.IR", "cs.LG" ]
Alibaba Group [email protected] Alibaba Group [email protected] Alibaba Group [email protected] Alibaba Group [email protected] Alibaba Group [email protected] Alibaba Group [email protected] Alibaba Group [email protected] Conversion rate (CVR) prediction plays an important role in advertising systems. Recently, supervised deep neural network-based models have shown promising performance in CVR prediction. However, they are data hungry and require an enormous amount of training data. In online advertising systems, although there are millions to billions of ads, users tend to click only a small set of them and to convert on an even smaller set. This data sparsity issue restricts the power of these deep models. In this paper, we propose the Contrastive Learning for CVR prediction (CL4CVR) framework. It associates the supervised CVR prediction task with a contrastive learning task, which can learn better data representations exploiting abundant unlabeled data and improve the CVR prediction performance. To tailor the contrastive learning task to the CVR prediction problem, we propose embedding masking (EM), rather than feature masking, to create two views of augmented samples. We also propose a false negative elimination (FNE) component to eliminate samples with the same feature as the anchor sample, to account for the natural property in user behavior data. We further propose a supervised positive inclusion (SPI) component to include additional positive samples for each anchor sample, in order to make full use of sparse but precious user conversion events. Experimental results on two real-world conversion datasets demonstrate the superior performance of CL4CVR. The source code is available at https://github.com/DongRuiHust/CL4CVR. [500]Information systems Online advertising 2023 2023 acmlicensed[SIGIR '23]Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information RetrievalJuly 23–27, 2023Taipei, Taiwan Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23), July 23–27, 2023, Taipei, Taiwan 15.00 10.1145/3539618.3591968 978-1-4503-9408-6/23/07 printacmref=true Contrastive Learning for Conversion Rate Prediction Yanlong Du August 12, 2023 =================================================== § INTRODUCTION Conversion rate (CVR) prediction <cit.> is an essential task in online advertising systems. The predicted CVR impacts both the ad ranking strategy and the ad charging model <cit.>. Recently, deep neural network-based models have achieved promising performance in CVR prediction <cit.>. However, deep models are data hungry and require an enormous amount of training data. In online advertising systems, although there may be millions to billions of ads, users tend to click only a small set of them and to convert on an even smaller set. This data sparsity issue restricts the prediction power of these deep models. Contrastive learning (CL) <cit.> offers a new way to conquer the data sparsity issue via unlabeled data. The idea is to impose different transformations on the original data and to obtain two augmented views for each sample. It then pulls views of the same sample close in the latent space and pushes views of different samples apart in order to learn discriminative and generalizable representations. In this paper, we propose the Contrastive Learning for CVR prediction (CL4CVR) framework, which associates the supervised CVR prediction task with a CL task. 
The CL task can learn better data representations and improve the CVR prediction performance. The way to create different data augmentations highly impacts the performance of CL. In recommender systems, most data augmentation methods are ID-based sequence or graph approaches <cit.>, which do not apply to CVR prediction which is a feature-rich problem. The most relevant work is <cit.>, which proposes feature masking for item recommendation. The aim is that two differently masked views, each containing part of item features, can still well represent the same item. However, feature masking does not work well for CVR prediction, because the input features are diverse, which relate to the user, the item, the context and the interaction, rather than only the item. We cannot make a good CVR prediction if we only know the target user but not the target item (which is masked). To tailor the CL task to the CVR prediction problem, we propose embedding masking (EM), rather than feature masking, to generate two views of augmented samples. In this way, each augmented view contains all the features, except that some embedding dimensions are masked. The CL loss will force the learned embeddings to be more representative. We also propose a false negative elimination (FNE) component to account for the natural property in user behavior data. We further propose a supervised positive inclusion (SPI) component to make full use of sparse but precious user conversion events. Experimental results show that the proposed EM, FNE and SPI strategies all improve the CVR prediction performance. In summary, the main contributions of this paper are * We propose the CL4CVR framework, which leverages a contrastive learning task to learn better data representations and to improve the CVR prediction performance. * We propose embedding masking for data augmentation that is tailored to feature-rich CVR prediction. * We propose a false negative elimination component and a supervised positive inclusion component to further improve the contrastive learning performance. -2pt § MODEL DESIGN We propose the CL4CVR framework that combines contrastive learning (CL) with supervised learning (SL) to improve the performance of CVR prediction. The structure is shown in Fig. <ref>(a). §.§ Problem Formulation In typical advertising systems, user actions follow an impression → click → conversion path. Denote the input feature vector as 𝐱, which contains multiple fields such as user ID, gender, age group, ad ID, ad title, ad industry, city, OS, etc. If a click event occurs, the click label is y=1, otherwise, y=0. If a conversion event occurs, the conversion label is z=1, otherwise, z=0. The (post-click) CVR prediction problem is to estimate the probability ẑ = p(z=1|y=1, 𝐱). §.§ Supervised Prediction Model Our focus in this paper is on the design of the CL task, and we use existing CVR prediction model as the SL task. In particular, we use ESMM <cit.> as the supervised prediction model because of its popularity and versatility. More sophisticated models such as ESM^2 <cit.>, GMCM <cit.> and HM^3 <cit.> require additional post-click behaviors (e.g., favorite, add to cart and read reviews), which are not always available in different advertising systems. Fig. <ref>(b) shows the structure of ESMM. It has a shared embedding layer, a CTR tower and a CVR tower. Assume there are N samples in a mini-batch. 
Denote the predicted CTR as ŷ and the predicted CVR as ẑ, the supervised loss is defined as L_pred = 1/N∑_n=1^N l(ŷ_n, y_n) + 1/N∑_n=1^N l(ŷ_n ẑ_n, y_n z_n), where l(ŷ_n, y_n) = - y_n log (ŷ_n) - (1-y_n) log (1 - ŷ_n). §.§ Embedding Masking We now turn our attention to the CL task. Data augmentation is an important step that highly impacts the CL performance. In <cit.>, the authors propose to create two views of each original sample by feature masking for item recommendation (Fig. <ref>(a)). The aim is that two differently masked views, each containing part of item features, can still well represent the same item. However, feature masking does not work well in CVR prediction, because the input features are diverse, rather than only about the item. We cannot decide whether a user would like to convert on an ad if the ad features are masked. Therefore, we propose embedding masking (EM) in this paper, which is illustrated in Fig. <ref>(b). In EM, we apply two different element-wise masks on the concatenated long embedding vector 𝐞 rather than on the raw features 𝐱. Assume there are F features and the embedding dimension for each feature is K. Then a feature mask has dimension F, but an embedding mask has dimension FK. By EM, each masked view contains all (rather than part of) the features, except that some random embedding dimensions are masked. The aim is that the remaining embedding dimensions can still well represent the whole sample and the CL loss will force the learned embeddings to be more representative. We denote the two augmented embedding vectors of the same sample as 𝐞̃_i and 𝐞̃_j, which form a positive pair. §.§ Encoder and Traditional Contrastive Loss We map the two views 𝐞̃_i and 𝐞̃_j of the same sample to two high-level representation vectors 𝐡_i and 𝐡_j through the same encoder f. That is, 𝐡_i = f(𝐞̃_i) and 𝐡_j = f(𝐞̃_j). For simplicity, we use an MLP as the encoder, which contains several fully connected (FC) layers with the ReLU activation <cit.> except the last layer (Fig. <ref>(c)). Given N original samples in a mini-batch, there are 2N augmented samples. Given an anchor sample 𝐞̃_i, the authors in <cit.> treat the other augmented sample 𝐞̃_j of the same original sample as the positive and treat other augmented samples as negatives. We illustrate it in Fig. <ref>(a). The traditional contrastive loss <cit.> is L_0 = - 1/2N∑_i=1^2Nlogexp(s(𝐡_i, 𝐡_j)/τ)/∑_k ≠ iexp(s(𝐡_i, 𝐡_k)/τ), where s(𝐡_i, 𝐡_j) is the cosine similarity function and τ is a tunable temperature hyper-parameter. This loss function aims to learn robust data representations such that similar samples are close to each other and random samples are pushed away in the latent space. §.§ False Negative Elimination In advertising systems, it is common that an ad is shown to a user multiple times at different time epochs. Because user behaviors naturally contain uncertainty, it is possible that the user clicks the ad a_1 times and converts a_2 times (a_2 ≤ a_1). This results in a_1 click samples with the same features but possibly different conversion labels. When such samples are included for CL, contradiction happens. It is because augmented samples corresponding to original samples with different indices will be treated as negatives. However, their original samples actually have the same features. Therefore, we propose a false negative elimination (FNE) component. It generates a set ℳ(i) for an anchor sample index i (Fig. <ref>(b)). 
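Before turning to how ℳ(i) is constructed, the embedding masking step and the traditional loss L_0 above can be sketched in TensorFlow as follows; the mask rate, the encoder sizes in the commented usage, and all helper names are illustrative and do not come from the released CL4CVR code.

import tensorflow as tf

def embedding_mask(e, keep_prob=0.8):
    """Element-wise random mask over the concatenated embedding vector of shape (batch, F*K)."""
    mask = tf.cast(tf.random.uniform(tf.shape(e)) < keep_prob, e.dtype)
    return e * mask

def traditional_contrastive_loss(h_a, h_b, temperature=1.0):
    """L_0 over 2N augmented views; the positive of view i is the other view of the same sample."""
    h = tf.math.l2_normalize(tf.concat([h_a, h_b], axis=0), axis=1)   # (2N, d)
    sim = tf.matmul(h, h, transpose_b=True) / temperature             # cosine similarities / tau
    n = tf.shape(h_a)[0]
    pos_idx = tf.concat([tf.range(n) + n, tf.range(n)], axis=0)       # i <-> i+N pairing
    logits = sim + tf.eye(2 * n) * -1e9                               # exclude k = i
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=pos_idx, logits=logits))

# encoder = tf.keras.Sequential([tf.keras.layers.Dense(512, activation="relu"),
#                                tf.keras.layers.Dense(256, activation="relu"),
#                                tf.keras.layers.Dense(128)])
# h_a, h_b = encoder(embedding_mask(e)), encoder(embedding_mask(e))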
Note that, FNE only impacts the CL task and the supervised prediction model still uses all the original samples for training, as otherwise, the learned conversion probabilities are incorrect. We use o(𝐞̃_i) to denote the original sample of 𝐞̃_i. We introduce a duplication indicator where I(o(𝐞̃_i), o(𝐞̃_k)) = 1 indicates that o(𝐞̃_i) and o(𝐞̃_k) have the same features and it is 0 otherwise. Given an anchor sample index i, we define the set ℳ(i) = {j}∪{k | I(o(𝐞̃_i), o(𝐞̃_k)) = 0 }. ℳ(i) contains the indices of samples that should be included in the denominator of the CL loss function. §.§ Supervised Positive Inclusion As the conversion label is sparse but also precious, we further propose a supervised positive inclusion (SPI) component to effectively leverage label information. It generates a set 𝒮(i) with supervised positive included for an anchor sample index i. Inspired by supervised contrastive learning <cit.>, we include additional positive samples for an anchor sample when its conversion label is 1 (Fig. <ref>(c)). Note that in traditional contrastive learning <cit.>, an anchor sample 𝐞̃_i has a single positive sample 𝐞̃_j. Given an anchor sample index i, we define the set 𝒮(i) = {j}∪{k | z(𝐞̃_k) = z(𝐞̃_i) = 1, k ≠ i, k ≠ j}, where z(𝐞̃_i) denotes the label of 𝐞̃_i, which is the same as the original sample. In other words, 𝒮(i) = {j} if z(𝐞̃_i)=0 (i.e., the anchor sample has label 0) and 𝒮(i) may contain more positive samples if z(𝐞̃_i) = 1. We do not include supervised positive samples when z(𝐞̃_i) = 0 because of the data sparsity issue. It is possible that all the samples in a mini-batch has z=0, which makes all the samples supervised positives and there is no negative and no contrast at all. §.§ Contrastive Loss and Overall Loss ℳ(i) generated by FNE and 𝒮(i) generated by SPI impact the contrastive loss. In particular, we define the contrastive loss used in this paper as L_cl = - 1/2N∑_i=1^2N[ 1/|𝒬(i)|∑_q ∈𝒬(i)logexp(s(𝐡_i, 𝐡_q)/τ)/∑_k ∈ℳ(i)exp(s(𝐡_i, 𝐡_k)/τ)], where 𝒬(i) = 𝒮(i) ∩ℳ(i). For each anchor sample, we average over all its positives. The overall loss is the combination of the supervised CVR prediction loss and the contrastive loss as L = L_pred + α L_cl, where α is a tunable balancing hyper-parameter. § EXPERIMENTS §.§ Datasets -10pt The statistics of the datasets are listed in Table <ref>. Both datasets contain samples from advertising systems with rich features, and are tagged with click and conversion labels. 1) Industrial dataset: This dataset contains a random sample of user behavior logs from an industrial news feed advertising system in 2022. 2) Public dataset: This dataset is gathered from the traffic logs in Taobao[https://tianchi.aliyun.com/dataset/dataDetail?dataId=408]. §.§ Compared Methods We compare the following methods for CVR prediction. Base is the supervised prediction model. Other methods associate the same base model with different data regularization or CL algorithms. * Base. The supervised CVR prediction model. In this paper, we use ESMM <cit.> as the base. * FD. Base model with random Feature Dropout <cit.> in the supervised task. * SO. Base model with Spread-Out regularization <cit.> on original examples. * RFM. Random Feature Masking <cit.>. Base model with a CL task. It randomly splits features into two disjoint sets. * CFM. Correlated Feature Masking <cit.>. Base model with a CL task. It splits features according to feature correlation. * CL4CVR. The framework proposed in this paper. §.§ Settings Parameter Settings. 
We set the dimensions of fully connected layers in prediction towers and those in the CL encoder as {512, 256, 128}. The training batch size is set to 64. All the methods are implemented in Tensorflow <cit.> and optimized by Adagrad <cit.>. We run each method 3 times and report the average results. Evaluation Metric. The Area Under the ROC Curve (AUC) is a widely used metric for CVR prediction. It reflects the probability that a model ranks a randomly chosen positive sample higher than a randomly chosen negative sample. The larger the better. §.§ Experimental Results §.§.§ Effectiveness Table <ref> shows the AUCs of different methods. It is observed that FD performs worst because it operates on the supervised task. SO performs better than RFM and CFM on the industrial dataset, but they have comparable performance on the public dataset. CFM performs better than RFM because it further considers feature correlation. CL4CVR performs best on both datasets, showing its effectiveness to cope with the data sparsity issue and to improve the CVR prediction performance. §.§.§ Ablation Study Table <ref> lists the AUCs of three components in CL4CVR. It is observed that EM itself outperforms RFM and CFM, showing that embedding masking is more suitable than feature masking for CVR prediction. The incorporation of the FNE component or the SPI component leads to further improvement. CL4CVR that uses all the three components perform best, showing that these components complement each other and improve the prediction performance from different perspectives. §.§.§ Impact of the Temperature and the CL Loss Weight Fig. <ref> plots the impact of the temperature τ. It is observed that generally a large τ works well on the two datasets. Fig. <ref> plots the impact of the CL loss weight α, where 0 denotes the supervised base model. It is observed that when α increases initially, performance improvement is observed. But when α is too large, too much emphasis on the CL task will degrade the performance. § RELATED WORK CVR prediction. The task of CVR prediction <cit.> in online advertising is to estimate the probability of a user makes a conversion event on a specific ad. <cit.> estimates CVR based on past performance observations along data hierarchies. <cit.> proposes an LR model and <cit.> proposes a log-linear model for CVR prediction. <cit.> proposes a model in non-guaranteed delivery advertising. <cit.> proposes ESMM to exploit click and conversion data in the entire sample space. ESM^2 <cit.>, GMCM <cit.> and HM^3 <cit.> exploit additional purchase-related behaviors after click (e.g., favorite, add to cart and read reviews) for CVR prediction. Contrastive learning. Contrastive learning <cit.> offers a new way to conquer the data sparsity issue via unlabeled data. It is able to learn more discriminative and generalizable representations. Contrastive learning has been applied to a wide range of domains such as computer vision <cit.>, natural language processing <cit.> and recommendation <cit.>. In the recommendation domain, most data augmentation methods are ID-based sequence or graph approaches <cit.>, which do not apply to CVR prediction which is a feature-rich problem. The most relevant work is <cit.> which proposes feature masking for item recommendation. § CONCLUSION In this paper, we propose the Contrastive Learning for CVR prediction (CL4CVR) framework. 
It associates the supervised CVR prediction task with a contrastive learning task, which can learn better data representations by exploiting abundant unlabeled data and thereby improve the CVR prediction performance. To tailor the contrastive learning task to the CVR prediction problem, we propose embedding masking, false negative elimination and supervised positive inclusion strategies. Experimental results on two real-world conversion datasets demonstrate the superior performance of CL4CVR.
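For concreteness, the following is a minimal NumPy sketch of the contrastive loss defined above, combining false negative elimination (FNE) and supervised positive inclusion (SPI). The helper names (h, partner, feat_id, labels) and the cosine-similarity encoder output are our own assumptions for illustration; this is not the paper's TensorFlow implementation.

```python
import numpy as np
from scipy.special import logsumexp

def cl4cvr_contrastive_loss(h, partner, feat_id, labels, tau=1.0):
    """Sketch of the CL4CVR contrastive loss with FNE and SPI.

    h       : (2N, d) L2-normalised representations of the augmented samples
    partner : (2N,) index j of the augmented partner of each anchor i
    feat_id : (2N,) id of the originating sample; equal ids mean identical
              features, so such pairs are dropped from the denominator (FNE)
    labels  : (2N,) conversion labels z of the originating samples
    """
    n2 = h.shape[0]
    sim = (h @ h.T) / tau                      # s(h_i, h_k) / tau, cosine similarity assumed
    total = 0.0
    for i in range(n2):
        j = partner[i]
        m_mask = feat_id != feat_id[i]         # M(i): samples with different original features ...
        m_mask[j] = True                       # ... plus the anchor's own augmented partner
        s_mask = np.zeros(n2, dtype=bool)      # S(i): the partner ...
        s_mask[j] = True
        if labels[i] == 1:                     # ... plus all other converted samples (SPI)
            s_mask |= (labels == 1)
            s_mask[i] = False
        q_mask = s_mask & m_mask               # Q(i) = S(i) ∩ M(i)
        denom = logsumexp(sim[i][m_mask])      # log sum_{k in M(i)} exp(s(h_i, h_k)/tau)
        total += -np.mean(sim[i][q_mask] - denom)
    return total / n2
```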
http://arxiv.org/abs/2307.03985v1
20230708142437
Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT
[ "Somsawat Rattanasoon", "Eugene Semenko", "David Mkrtichian", "Saran Poshyachinda" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.IM" ]
National Astronomical Research Institute of Thailand (Public Organization), 260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand
Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT
Somsawat Rattanasoon, Eugene Semenko, David Mkrtichian, Saran Poshyachinda
=========================================================================
National Astronomical Research Institute of Thailand (NARIT) operates a network of small telescopes installed worldwide. These telescopes serve educational and research purposes and are equipped mainly with CCD detectors for direct imaging and photometry. To extend their range of applications, several telescopes were fitted with the commercially available medium-resolution spectrograph eShel from Shelyak. With these devices, researchers in NARIT obtained a versatile tool for stellar spectroscopy. Here we describe the current status of the available equipment and possible ways of upgrading it, and briefly introduce the results achieved in the asteroseismological study of fast-rotating stars. § MOTIVATION A fibre-fed medium-resolution echelle spectrograph, eShel, has been designed and distributed for small telescopes by Shelyak Instruments (France) since 2008 <cit.>. A typical device consists of a stationary spectrograph block linked by a fibre with a 50 μm core to the Fibre Injection and Guiding Unit (FIGU) installed at the telescope side. The FIGU is also connected through a 200-μm fibre channel to the Calibration Unit comprising halogen, LED, and ThAr lamps. The spectrograph and its components are commercially available on the company's website <https://www.shelyak.com/>. Earlier models of eShel registered spectra within the wavelength range 430–700 nm with a resolution R > 10,000. In 2018, after an upgrade that affected many components of eShel, the working range was significantly extended. NARIT has a distributed network of small telescopes with apertures up to 1 m. For the spectroscopy of relatively bright stars, these telescopes can optionally be equipped with eShel. At the moment, NARIT has three devices with serial numbers 6H-115 (2010), 6H-128 (2016), and 6H-171 (2018). All spectrographs were acquired in their original complete configuration and thus have limited capabilities. To enable observations of fainter objects and to increase sensitivity in the blue part of the spectrum, we initiated a substantial upgrade of the device with SN 6H-171. § MODIFICATION AND TESTS The improved device received a new high-OH fibre with enhanced throughput in the blue part of the spectrum, a new doublet collimator (Shelyak provided both components), a new imaging lens, and a professional-grade CCD. All components except the fibre are shown in Fig. <ref>. As a detector, we use a water-cooled Andor iKon-L system based on a 2048×2048 pixel CCD array with a 13.5 μm pixel pitch. To match the plate scale to the increased pixel size, among several lenses with comparable focal lengths available on the market, we chose the commercial lens Sony FE 135 mm F1.8 GM, primarily due to its outstanding optical quality. Subsequent testing of the whole assembly also showed excellent transmission of the selected lens within the required range of wavelengths. The imaging lens is attached to the CCD camera through a specially designed adapter with an enclosed shutter. Technical parameters of the original and upgraded versions of eShel are summarized in Table <ref>.
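As a rough illustration of the plate-scale matching mentioned above, the sketch below estimates how many detector pixels the re-imaged fibre core spans. The fibre core diameter, camera focal length, and pixel pitch are taken from the text; the collimator focal length is an assumed value, since it is not quoted here, so the result is only indicative.

```python
def fibre_sampling(fibre_core_um=50.0, f_collimator_mm=125.0,
                   f_camera_mm=135.0, pixel_um=13.5):
    """Rough estimate of the fibre image size on the detector, in pixels.

    The 50 um core, 135 mm camera lens and 13.5 um pixel pitch come from the
    text; f_collimator_mm is an assumption, not a quoted specification.
    """
    image_um = fibre_core_um * f_camera_mm / f_collimator_mm  # re-imaged fibre diameter
    return image_um / pixel_um                                # sampling in pixels

print(f"fibre image spans ~ {fibre_sampling():.1f} px")
# ~4 px per resolution element, i.e. above the 2-px Nyquist limit
```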
An upgraded variant of the spectrograph was installed for tests in a spectrograph room of the Thai National Observatory (TNO) at Doi Inthanon (Chiang Mai, Thailand) in a temperature-controlled environment. The FIGU was mounted on the left Nasmyth port of the 1-m telescope of TNO. Tests were performed in December 2022 and January 2023 under acceptable weather conditions and were aimed at verifying the optical performance of the assembly. The observational data include a standard set of calibrations (bias, flat, ThAr) and spectra of selected stars and the daytime sky. Two-dimensional raw FITS images were reduced using the pipeline PyYAP (<http://github.com/ich-heisse-eugene/PyYAP>), specially adapted to the new device. § RESULTS Test images taken with the upgraded device showed noticeable aberrations arising from misaligned optical elements of the spectrograph. As this problem appeared in the direction perpendicular to the dispersion, it influenced the overall throughput and the level of scattered light. Still, it did not affect the spectral resolution and transmission of the device. We therefore leave the evaluation of the total throughput and stability for future work and concentrate here primarily on studying these unaffected characteristics. §.§ Transmission Analysis of the observational data revealed significantly improved spectrum quality due to better control of aberrations and the enhanced transmission of the Sony lens. In the images, the point spread function remains nearly stable across the field of view in the 380-850 nm wavelength range. As a result, the shortwave limit of the working spectral range has been extended by 70 nm, from 450 nm to 380 nm. In the infrared, the working range of the current setup is limited to 900 nm. In Fig. <ref>, we show four samples of the observed spectrum of the daytime sky. §.§ Resolving power The resolving power of the modified eShel was evaluated by fitting Gaussian functions to the emission lines of the ThAr spectrum. This procedure is implemented as a standard step of processing in PyYAP. Inspection of the ThAr spectra showed that the focus of the imaging camera remained stable during all observational nights. Within the spectrograph's working wavelength range, the resolving power R = λ/Δλ varied from 10,000 to 12,500, with a median R = 11,700 evaluated from 355 lines in a single image. The resolving power does not vary significantly between nights: the full width at half maximum (FWHM) of the mean ThAr line equals 3.7 pixels, close to optimal sampling.
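The procedure described above can be illustrated with a short Python snippet that fits a Gaussian to a single ThAr emission line and converts its FWHM into R = λ/Δλ. This is a schematic re-implementation for illustration only, not code from the PyYAP pipeline; all function and parameter names are ours, and the fitting window is assumed to be in the same units as the wavelength array.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def resolving_power(wave, flux, line_centre, window=0.5):
    """Estimate R = lambda / FWHM from one ThAr emission line.

    wave, flux  : 1-D wavelength-calibrated ThAr spectrum
    line_centre : approximate line wavelength (same units as `wave`)
    window      : half-width of the fitting window around the line
    """
    sel = np.abs(wave - line_centre) < window
    p0 = [flux[sel].max() - flux[sel].min(), line_centre, 0.05, flux[sel].min()]
    popt, _ = curve_fit(gaussian, wave[sel], flux[sel], p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2.355 * sigma
    return popt[1] / fwhm

# the median over a few hundred lines would reproduce a value like R ~ 11,700
```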
The proposed upgrade opens new perspectives for the family of small telescopes in NARIT, as we have several spectrographs which, after a similar upgrade, can be installed at any of our telescopes. In this way, it becomes possible to move part of the scientific proposals aimed at studying exoplanets, active solar-like stars, and binary and multiple stars from the main 2.4-m Thai National Telescope to smaller instruments without losing efficiency or observing time. However, the main stimulus for this technical work was the possibility of using this device for asteroseismology of the brightest fast-rotating pulsating stars. To demonstrate the efficiency of eShel in asteroseismological observations, in Fig. <ref> we show an example of non-radial pulsations discovered in a 4-magnitude fast-rotating star. A typical pattern of waves propagating across the averaged spectral profile is shown in the left panel of Fig. <ref>. The right panel shows the 2D periodogram used to identify the pulsation frequencies. In this example, the star was observed continuously with short exposures for more than five hours with the original version of eShel and the 1-m telescope of NARIT. The upgraded version of the spectrograph will allow us to increase the signal-to-noise ratio (SNR) of the observational data and thus expand the number of potential targets, or increase the temporal resolution of the data with shorter exposure times at the same level of SNR. §.§.§ ORCID identifiers of the authors 0000-0002-1912-1342 (Eugene Semenko) 0000-0001-5094-3910 (David Mkrtichian) §.§.§ Author contributions SR, ES, and DM are responsible for formulating the project, its technical implementation, and carrying out the observations. ES and DM are responsible for data reduction and analysis. SP contributed to the project administration. All authors contributed equally to the text of the article. §.§.§ Conflicts of interest The authors declare no conflict of interest.
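As an illustration of the frequency analysis behind the figure discussed above, the sketch below builds a 2D periodogram by computing a Lomb-Scargle periodogram in each velocity bin of the residual line profiles. It is our own schematic example based on astropy's implementation, not the code used to produce the figure, and the array names are assumptions.

```python
import numpy as np
from astropy.timeseries import LombScargle

def two_d_periodogram(times, profiles, frequencies):
    """Schematic 2-D periodogram of line-profile variations.

    times       : (n_obs,) observation times, e.g. in days
    profiles    : (n_obs, n_vel) time series of averaged line profiles
    frequencies : (n_freq,) trial frequencies, in cycles per unit of `times`

    Returns an (n_vel, n_freq) array: one periodogram per velocity bin of the
    residual (mean-subtracted) profiles.
    """
    residuals = profiles - profiles.mean(axis=0)      # remove the mean profile
    power = np.empty((profiles.shape[1], len(frequencies)))
    for k in range(profiles.shape[1]):
        power[k] = LombScargle(times, residuals[:, k]).power(frequencies)
    return power
```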
http://arxiv.org/abs/2307.06092v3
20230712113537
Quantitative CLTs in Deep Neural Networks
[ "Stefano Favaro", "Boris Hanin", "Domenico Marinucci", "Ivan Nourdin", "Giovanni Peccati" ]
cs.LG
[ "cs.LG", "cs.AI", "math.PR", "stat.ML" ]
On the Galois covers of degenerations of surfaces of minimal degree Email address: M. Amram: [email protected]; C. Gong: [email protected]; Jia-Li Mo: [email protected]; 2020 Mathematics Subject Classification. 05E15, 14J10, 14J25, 14N20. [ ==================================================================================================================================================================================================================================================== We study the distribution of a fully connected neural network with random Gaussian weights and biases in which the hidden layer widths are proportional to a large constant n. Under mild assumptions on the non-linearity, we obtain quantitative bounds on normal approximations valid at large but finite n and any fixed network depth. Our theorems show both for the finite-dimensional distributions and the entire process, that the distance between a random fully connected network (and its derivatives) to the corresponding infinite width Gaussian process scales like n^-γ for γ>0, with the exponent depending on the metric used to measure discrepancy. Our bounds are strictly stronger in terms of their dependence on network width than any previously available in the literature; in the one-dimensional case, we also prove that they are optimal, i.e., we establish matching lower bounds. AMS 2010 Classification: 60F05; 60F07; 60G60; 68T07. § INTRODUCTION Deep neural networks <cit.> are parameterized families of functions at the core of many recent advances in domains such as computer vision (e.g. self-driving cars <cit.>), natural language processing (e.g. ChatGPT <cit.>), and structural biology (e.g. AlphaFold <cit.>). The practical success of deep learning has led to significant interest in theoretical approaches to understanding how neural networks work and how to make them more efficient. As we are about to explain, an important chapter in deep learning theory seeks to understand the distribution of neural networks with randomly chosen parameters. This is the context for the present article, whose goal is to derive new quantitative CLTs for wide neural networks with random weights and biases. To motivate and informally introduce our results, recall that the typical use of neural networks in practice is to approximate an unknown function f from a training dataset x_α,f(x_α) : α=1,2,…, k consisting of its values at k different inputs. Given the training data, one then fixes a neural network architecture, which specifies a parametric family of neural networks, and searches in this family for an approximation to f. In this article we will study the simplest, so-called fully connected, network architectures: Fix a positive integer L as well as L+2 positive integers n_0,…, n_L+1 and a function σ:. A fully connected depth L neural network with input dimension n_0, output dimension n_L+1, hidden layer widths n_1,…, n_L, and non-linearity σ is any function x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 of the following form z_α^(ℓ) = W^(1)x_α+b^(1), ℓ=1 W^(ℓ)σ(z_α^(ℓ-1))+b^(ℓ), ℓ=2,…, L+1 , z_α^(ℓ)∈^n_ℓ, where W^(ℓ)∈^n_ℓ× n_ℓ-1 are matrices, b^(ℓ)∈^n_ℓ are vectors, and σ applied to a vector is shorthand for σ applied to each component. The trainable parameters of a fully connected network are the network weights W_ij^_(ℓ) (entries of the weight matrices W^(ℓ)) and network biases b_i^_(ℓ) (components of the bias vectors b^(ℓ)). Of course, the network architecture and dataset must be compatible in the sense that f must be a function from ^n_0 to ^n_L+1. 
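To fix ideas, the recursion in the definition above can be written in a few lines of NumPy. The sketch below is illustrative only; the Gaussian weight and bias scaling it uses anticipates the random initialization introduced in the next definition, and the function name and default constants are our own choices.

```python
import numpy as np

def random_network(x, widths, sigma, C_W=2.0, C_b=0.0, rng=None):
    """Forward pass of a fully connected network with Gaussian parameters.

    x      : (n_0,) network input x_alpha
    widths : [n_0, n_1, ..., n_{L+1}]
    sigma  : non-linearity, applied componentwise
    Weights are drawn as W ~ N(0, C_W / n_{l-1}) and biases as b ~ N(0, C_b);
    C_W = 2, C_b = 0 is the ReLU choice used later in the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = x
    for l in range(1, len(widths)):
        n_in, n_out = widths[l - 1], widths[l]
        W = rng.normal(0.0, np.sqrt(C_W / n_in), size=(n_out, n_in))
        b = rng.normal(0.0, np.sqrt(C_b), size=n_out)
        pre = z if l == 1 else sigma(z)      # z^(1) = W x + b; z^(l) = W sigma(z^(l-1)) + b
        z = W @ pre + b
    return z
```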
For a training dataset and a network architecture, the goal is to find a setting of the weights and biases so that not only do we have z_α^(L+1)≈ f(x_α) for x_α in the training dataset but also for inputs not included in the training data. This optimization is typically done in two steps: (1) Randomly initialize (i.e. sample) the network weights and biases (2) Optimize the weights and biases by some variant of gradient descent on an empirical loss such as the squared error: ∑_α=1^k z_α^(L+1)-f(x_α)_2^2. We thus see that neural networks with random weights and biases, the main subject of this article, describe the properties of neural networks at the start of training. The usual way to initialize parameters in practice leads to the following: Fix L≥ 1, n_0,…, n_L+1≥ 1, σ: as well as C_b≥ 0, C_W>0. A random depth L neural network with input dimension n_0, output dimension n_L+1, hidden layer widths n_1,…, n_L, and non-linearity σ, is the random field (<ref>) with random weights and biases: W_ij^(ℓ)∼𝒩0,C_W/n_ℓ-1, b_i^(ℓ)∼𝒩(0,C_b) independent. In general, both describing the distribution of a randomly initialized neural network and tracking the dynamics of optimization is quite difficult. To make progress, several influential lines of research study these questions asymptotically when the network widths n_1,…, n_L are large <cit.>. That neural networks simplify significantly in this infinite width limit can already be seen at initialization: Fix L, n_0, n_L+1,r≥ 1 and a non-linearity σ: that is polynomially bounded to order r in the sense of the forthcoming formula (<ref>). As n_1,… n_L∞, the random field x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 converges weakly in distribution, as an element of C^r-1(^n_0, ^n_L+1), to a Gaussian process with n_L+1 iid centered components (z_i;α^(L+1), i=1,…, n_L+1) with limiting covariance K_αβ^(L+1):=lim_n_1…, n_L∞Covz_i;α^(L+1),z_i;β^(L+1) satisfying K_αβ^(ℓ+1) = C_b + C_Wσz_i;α^(ℓ)σz_i;β^(ℓ)_K^(ℓ), ℓ≥ 1 C_b + C_W/n_0x_α· x_β, ℓ=0 , where for any f:^2 we've written f(z_i;α^(ℓ),z_i;β^(ℓ))_K^(ℓ) for the average values of f with respect to the distribution z_i;α^(ℓ),z_i;β^(ℓ)∼𝒩0,K_αα^(ℓ)K_αβ^(ℓ)K_αβ^(ℓ)K_ββ^(ℓ). We can now state the main question taken up in this article: Question: How close is a random neural network at finite width to the infinite width Gaussian process described in Theorem <ref>? Perhaps the main motivation for taking up this question comes from prior work on the neural tangent kernel (NTK) regime <cit.>, which occurs when L,n_0,n_L+1, and the training dataset are fixed, weights and biases are initialized as in (<ref>), and the hidden layer weights n_1,…, n_L are sent to infinity. The NTK regime has two salient features: * The stochastic processes x_α↦ z_α^(L+1) converge in distribution to centered Gaussian processes with independent components. We already saw this in Theorem <ref>. * Using sufficiently small learning rates and losses such as the mean squared error, the entire trajectory of optimization coincides with the one obtained by replacing the non-linear network z_α^(L+1) by its linearization around the randomly initialized setting of network weights and biases (see <cit.>, Theorem 3.2 in <cit.>, and Theorem 5.4 in <cit.>). The second point shows that in the infinite width limit optimization will not get stuck in a spurious local minimum of the training loss, as the loss is convex after we replace the network by its linearization. But it also suggests that taking the width to infinity comes at a steep explanatory cost. 
Indeed, one of the most important practical features of neural networks is precisely that they are not linear models and hence learn data-dependent features <cit.>. The NTK regime is thus too rigid to capture important aspects of the behavior of realistic neural networks. To study non-linear effects such as feature learning one must either change the initialization scheme (leading to the mean-field limit <cit.>), consider regimes in which the training dataset size grows with network width (see e.g. <cit.>), or study neural networks at finite width (see e.g. <cit.>). In this article, we focus on this last option and develop new probabilistic tools for analyzing neural networks at finite but large width (see (<ref>)). Beyond the development of the NTK regime, the infinite width Gaussian process of Theorem <ref> has been exploited to develop Bayesian inference for deep neural networks <cit.> and to investigate properties of infinitely wide neural networks as functions of the depth through information propagation <cit.>. §.§ Informal Overview of Results Our main results, which we present in more detail in <ref> below, can be summarized as follows: * One-dimensional QCLTs. For a fixed network input x_α∈^n_0 we consider a single component z_i;α^(L+1) of the network output. Here we ask, as a function of network width, how close is the distribution of z_i;α^(L+1) (and its derivatives with respect to x_α) to the corresponding infinite width Gaussian? We find that the total-variation distance between them is bounded above by a constant times (width)^-1; we also prove that this rate is optimal, i.e., we establish matching lower bounds. See Theorem <ref> for the precise statement. * Finite-dimensional QCLTs. For a fixed finite collection of network inputs x_α∈^n_0, α∈𝒜, we obtain in Theorem <ref> upper bounds on the convex distance (see (<ref>)) between the vector z_i;α^(L+1), α∈𝒜 (and its derivatives with respect to x_α) and the corresponding Gaussian. Here we find an upper bound of the order of (width)^-1/2, with a pre-factor that scales polynomially with the number of elements in 𝒜. We conjecture that this rate is sub-optimal and can be improved to be a constant depending on 𝒜 times (width)^-1. * Functional QCLTs. We prove upper bounds in Theorem <ref> between z_α^(L+1) – viewed as an element of the infinite-dimensional Sobolev space of weakly differentiable functions from a compact set in ^n_0 to ^n_L+1 – and its infinite width limit. These bounds are in terms of both the Wasserstein-2 metric and the d_2 distance (see (<ref>)) and scale like n^-κ for certain exponents κ∈ (0,1/2). When σ is a smooth non-linearity and we study the Wasserstein-2 metric, we may simply take κ = 1/4- ϵ for any ϵ >0, but for non-linearities such as the ReLU, we obtain weaker estimates. In Theorem <ref> – which is one of the most innovative contributions of our work and applies to smooth nonlinearities – we use a Sobolev embedding argument to deduce upper bounds on transport distances associated with supremum norms on spaces of differentiable functions. In this case, we again achieve bounds that scale as n^-κ, with κ = 1/4- ϵ for any ϵ >0. As we review in more detail in <ref>, our results in cases (1)—(3) above are strictly stronger in terms of their dependence on network width than any previously available in the literature. 
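The limiting covariance K^(L+1) appearing in these statements is defined by the layer-wise recursion of the theorem above. As a purely illustrative reference, the following Monte Carlo sketch iterates that recursion for a pair of inputs; the sampling approach, sample size, and function name are our own choices (a Gauss-Hermite quadrature would be more accurate).

```python
import numpy as np

def limiting_kernel(x_a, x_b, L, sigma, C_W=1.0, C_b=0.0, n_mc=200_000, rng=None):
    """Monte Carlo iteration of K^(l+1) = C_b + C_W * E[sigma(z_a) sigma(z_b)],
    with (z_a, z_b) centered Gaussian with covariance K^(l), for two inputs."""
    rng = np.random.default_rng(0) if rng is None else rng
    n0 = len(x_a)
    K = np.array([[C_b + C_W * (x_a @ x_a) / n0, C_b + C_W * (x_a @ x_b) / n0],
                  [C_b + C_W * (x_a @ x_b) / n0, C_b + C_W * (x_b @ x_b) / n0]])
    for _ in range(L):
        z = rng.multivariate_normal(np.zeros(2), K, size=n_mc)
        s = sigma(z)
        K = C_b + C_W * (s.T @ s) / n_mc     # empirical E[sigma(z_i) sigma(z_j)]
    return K                                  # 2x2 matrix of K^(L+1) at (x_a, x_b)

# e.g. limiting_kernel(x_a, x_b, L=3, sigma=np.tanh, C_W=1.0, C_b=0.0)
```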
We conclude by emphasizing that, on a technical level, this article introduces several novel ideas: * In the one-dimensional case, we prove and use a new optimal bound (stated in Proposition <ref>) on the total variation and 1-Wasserstein distances between the law a conditionally Gaussian random variable and that of a Gaussian random variable. This result – that is of independent interest – extends techniques and ideas initially introduced in <cit.>. * We extend ideas from the work of <cit.> to obtain quantitative CLTs with respect to the convex distance (see (<ref>)) in the setting of possibly degenerate covariances (see Proposition <ref>). This is used in deriving the finite-dimensional CLTs from Theorem <ref>. In general, our way of deducing one- and finite-dimensional approximations revolves around the use of integration by parts formulae and the so-called Stein's method for probabilistic approximations – see <cit.>. * We develop, based on ideas from <cit.>, a novel coupling argument for conditionally Gaussian fields (see Proposition <ref>) that is used in the proof of Theorem <ref> and Theorem <ref>. * We develop in Proposition <ref> a simple but apparently new variant of the classical Powers-Størmer inequality <cit.> that allows one to upper bound the Hilbert-Schmidt norm of the difference of the square roots of two positive semi-definite operators without requiring that one of them is strictly positive definite. This will be used in the proof of Theorem <ref>. Our approach should be compared with the discussion contained in <cit.>, where some alternate strategies for deriving functional bounds is partially outlined. The recent (independently written) paper <cit.> uses Stein's method in a way comparable to ours to deduce upper bounds in the total variation and convex distances for shallow and deep networks, in the case where the input is fixed and only the network's output is concerned (no derivatives). We stress that our paper focuses on the derivation of tight probabilistic bounds in terms of the width of the neural network, providing only a partial description of the analytical dependence of the constants on the other parameters of the model (e.g. C_W, C_b). While it is in principle possible to explicit such dependence in a finite-dimensional setting (see e.g. <cit.>) the task becomes much more complicated in a functional framework since in this case the constants in our bounds crucially depend on certain traces of integral operators that one cannot directly represent in terms of the involved parameters. We prefer to think of this point as a separate issue, and leave it open for further research. §.§ Outline for Remainder of Article The rest of this article is structured as follows. First, in <ref>, we formally introduce assumptions and notation related to the non-linearity σ. Then, in <ref>, we state our precise results on one-dimensional (<ref>), finite-dimensional (<ref>), and infinite-dimensional quantitative CLTs (<ref>). Throughout <ref> we compare our results to prior work and mention a range of further related articles in <ref>. We then develop in <ref> some preparatory results that will be used in the proofs of our main Theorems. Specifically, <ref> builds on the simple observation from Lemma <ref> that random neural networks are conditionally Gaussian and recalls key estimates on the fluctuations of the conditional covariances (see Theorem <ref>). 
Next, <ref> recalls Stein's method for one-dimensional quantitative CLTs, while <ref> and <ref> provide the finite-dimensional and infinite-dimensional extensions, respectively. Several of these extensions (specifically Propositions <ref>, <ref>, and <ref>) are elementary but new. Finally, in <ref> we complete the proofs of our main results. §.§ Acknowledgements BH gratefully acknowledges support from NSF CAREER grant DMS-2143754 as well as NSF grants DMS-1855684, DMS-2133806 and an ONR MURI on Foundations of Deep Learning. DM is grateful to MUR projects MatModTov, Grafia and to PNRR/CN1 Spoke 3 for financial support. GP's research is supported by the Luxembourg National Research Fund (Grant: O21/16236290/HDSA). § ASSUMPTIONS AND DEFINITIONS For our precise results, we will need the following mild technical condition on the activation function. For fixed r≥ 1, we say that the non-linearity σ: is polynomially bounded to order r, if either σ is r times continuously differentiable or if it r-1 times continuously differentiable and its (r-1)-st derivative is a continuous piecewise linear function with a finite number of points of discontinuity for its derivative. In either case we also require that the r-th derivative of σ is polynomially bounded: ∃ k ≥ 0 s.t. (1+x)^-kd^r/dx^rσ(x)_L^∞()<∞. Virtually all non-linearities used in practice are polynomially bounded to some order r≥ 1. This is true, for instance, of smooth non-linearities such as tanh and GeLU <cit.> (in which case r is arbitrary) as well as piecewise linear non-linearities such ReLU <cit.> and leaky ReLU <cit.>. Since we are concerned in this article with quantitative central limit theorems not only for the outputs of a random neural network but also for its derivatives with respect to network inputs, let us agree that for a multi-index J=j_1,…, j_n_0∈ℕ^n_0 we will write J:=j_1+⋯ +j_n_0 for the order of J and let D_α^J =∂_x_1^j_1⋯∂_x_n_0^j_n_0|_x=x_α = (x_1,...,x_n_0) denote the corresponding partial derivative operator, with the convention that D^0 = Id. For fixed ℓ = 1,...,L+1, consider the real valued random field x_α∈^n_0↦ z_i;α^(ℓ)∈ defined in (<ref>), and denote by Γ the centered Gaussian field having covariance K^(ℓ), as defined in (<ref>). Assume that σ is polynomially bounded to the order r (Definition <ref>), and that weights and biases are selected according to Definition <ref>. Then, one can verify the following properties: (i) both Γ and z_i^(ℓ) are of class C^r-1 with probability one; (ii) with probability one, both z_i^(ℓ) and Γ are Lebesgue-almost everywhere r times differentiable and, for all J such that |J|=r, there exist versions of the fields x_α↦ D^J_α z_i;α^(ℓ) and x_α↦ D^J_αΓ_α that are locally bounded; (iii) for every fixed non-zero x_α and J such that |J|=r, the mixed derivatives D^J_α z_i;α^(ℓ) and D^J_αΓ_α are well-defined and finite with probability one; (iv) For every x_α, x_β∈^n_0 and every I,J such that |I|,|J|≤ r, 𝔼[D_α^IΓ_α· D_β^J Γ_β] = D_α^I D_β^J K^(ℓ)_αβ, and the mapping (x_α, x_β)↦ D_α^I D_β^J K^(ℓ)_αβ is bounded on compact sets. The following definition formalizes what it means for a random neural network, together with some of its derivatives, to have a non-degenerate covariance structure in the infinite width limit. This condition will be used as a hypothesis in most of our results. Fix r≥ 1 and suppose that σ is polynomially bounded to order r, in the sense of Definition <ref>. 
Consider any finite set 𝒜 of indexing distinct network inputs x_𝒜 = {x_α : α∈𝒜}⊆^n_0 and any finite set of directional derivative operators V=V_1,…, V_p, V_j:=∑_i=1^n_0 v_ij∂_x_i. We say that the infinite-width covariance structure {K^(ℓ) : ℓ = 1,...,L+1} from Theorem <ref> is non-degenerate on x_𝒜 to order q≤ r with respect to V if, for all ℓ = 1,..., L+1, the infinite width covariance matrix K_𝒜,V^(ℓ),≤ q:= V_α_1^J_1 V_α_2^J_2K_α_1α_2^(ℓ), |J_1|, |J_2|≤ q, α_1,α_2∈𝒜 is invertible, where for each multi-index J_i=j_i1,…, j_ip∈ℕ^p with order |J_i|= j_i1+⋯+j_ip we have written V_α_i^J_i :=V_1^j_i1⋯ V_p^j_ip|_x=x_α_i for the corresponding differential operators and have set V^0 = Id. We stress that, in (<ref>), the rows and columns of the matrix K_𝒜,V^(ℓ),≤ r are indexed, respectively, by pairs (J_1, α_1) and (J_2, α_2). When the set V is such that p=n_0 and v_i,j = δ_ij, then V^J = D^J — as defined in (<ref>); in this special case, we will say that the infinite-width covariance structure {K^(ℓ) : ℓ = 1,...,L+1} is canonically non-degenerate to the order q≤ r and use the notation K_𝒜,V^(ℓ),≤ q = K_𝒜^(ℓ),≤ q. By virtue if (<ref>), the invertibility of the covariance matrix K_𝒜,V^(ℓ),≤ q implies (thanks to Cauchy-Schwarz) that K_𝒜,V^(ℓ),≤ q has strictly positive diagonal terms. Plainly, the covariance structure {K^(ℓ)} is non-degenerate on x_𝒜 to order 0 if and only if the matrices { K_αβ^(ℓ) : α, β∈𝒜} defined in (<ref>) are invertible for ℓ = 1,..., L+1. In particular, if 𝒜 = {α} is a singleton, non-degeneracy to the order 0 simply means that K^(ℓ)_αα>0 for ℓ = 1,..., L+1. Finally, for ℓ=1,...,L+1, we introduce the notation κ_αβ^(ℓ):=Covz_i;α^(ℓ), z_i;α^(ℓ), and write ℱ^(ℓ) := sigma field generated by weights and biases in layers 1,…, ℓ. We conclude this section with the following elementary lemma that will be used throughout the paper. Conditionally on ^(ℓ) the random field x_α∈^n_0↦ z_α^(ℓ+1)∈^n_ℓ+1 has i.i.d. centered Gaussian components with conditional covariance Covz_i;α^(ℓ+1), z_j;β^(ℓ+1) | ℱ^(ℓ) =δ_ijΣ_αβ^(ℓ) where Σ_αβ^(ℓ):=C_b + C_W/n_ℓ∑_j=1^n_ℓσz_j;α^(ℓ)σz_j;β^(ℓ). One has in particular that κ_αβ^(ℓ) = 𝔼[Σ_αβ^(ℓ)], where we used the notation (<ref>). Assume that σ is polynomally bounded to the order r≥ 1. In resonance with Remark <ref>, the random covariance function (x_α, x_β)↦Σ_αβ^(ℓ) verifies the following properties: (i) Σ^(ℓ) is of class C^r-1, r-1(ℝ^n_0×ℝ^n_0) with probability one; (ii) with probability one, Σ^(ℓ) is Lebesgue-almost everywhere r times differentiable in each variable and, for all multi-indices I,J such that |I|, |J|=r, there exist a version of the field (x_α, x_β)↦Σ_αβ^(ℓ) that is locally bounded; (iii) for every fixed x_α, x_β and I,J such that |I|, |J|=r, the mixed derivatives D^J_α D_β^I Σ_αβ^(ℓ) are well-defined and finite with probability one; (iv) For every x_α, x_β∈^n_0 and every I,J such that |I|,|J|≤ r, 𝔼[D^J_α D_β^I Σ_αβ^(ℓ)] = D_α^I D_β^J κ^(ℓ)_αβ, and the mapping (x_α, x_β)↦𝔼[ ( D^J_α D_β^I Σ_αβ^(ℓ) )^2] is integrable over arbitrary compact sets. § MAIN RESULTS In the subsequent sections we will present our main results, respectively, in the one-dimensional (<ref>), the finite-dimensional (<ref>), and the functional setting (<ref>). Before stating them we present some notation in <ref>. §.§ Notation and Setting for Main Results Our results give quantitative CLTs for random neural networks verifying the following (parameterized) set of assumptions. 
Fix constants c_1≥ c_2>0, integers r,L,n_0, n_L+1≥ 1, scalars C_b≥ 0, C_W>0, and a mapping σ: that is polynomially bounded to order r as in Definition <ref>. We then consider a random neural network x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 with input dimension n_0, output dimension n_L+1, hidden layer widths n_1,…, n_L, and non-linearity σ as in Definition <ref> and suppose that for some n≥ 1 c_2 n≤ n_1,…, n_L≤ c_1 n. For the sake of brevity, we define the set of parameters 𝒫:= {σ, c_1, c_2, L, n_0, C_b, C_W} (note that 𝒫 does not contain r). The results in this article give quantitative CLTs showing that when n is large the random field z_α^(L+1) and its derivatives D_α^Jz_i;α^(L+1). :=∂_x_1^j_1⋯∂_x_n_0^j_n_0|_x=x_αz_i;α^(L+1), J = j_1,…, j_n_0∈ℕ^n_0 (or, more generally, the mixed directional derivatives appearing in (<ref>)) are close to those of a centered Gaussian process with n_L+1 independent and identically distributed components. Although the classes of probabilistic distances we study vary somewhat between the one-dimensional, finite-dimensional, and functional cases below, they all contain Wasserstein distances, which we recall here for the sake of readability. See <cit.> for further details and proofs. Let K be a real separable Hilbert space, let X,Y be two K-valued random elements, and fix p≥ 1. We define the p-Wasserstein distance, between the distributions of X and Y, to be the quantity W_p(X,Y) :=( inf_(T,S)𝔼[ T-S^p_K])^1/p, where the infimum runs over all random elements (T,S) such that Tlaw=X and Slaw=Y. Definition <ref> will be applied to K=ℝ in Section <ref>, to K = ℝ^m in Section <ref> and to K equal to some appropriate Sobolev space in Section <ref>[For the rest of the paper, we will use the notation ℬ(K) to indicate the class of all Borel subsets of K.]. We note that, trivially, W_p≤ W_q for p≤ q, and also record the following two additional facts: – if U is an arbitrary random element defined on the same probability space as X (taking values in some Polish space 𝒰 and with law ℙ_U), then there exists a version ℙ_X|U : 𝒰×ℬ(𝒦)→ [0,1] : (u,B)↦ℙ_X|U = u(B) of the conditional distribution of X given U such that W^q_q(X,Y) ≤∫_𝒰 W_q^q ( ℙ_X|U=u, ℙ_Y) dℙ_U(u), see e.g. <cit.>; – in the case q=1 one has the dual representation W_1(X,Y) = sup_h∈ Lip(1) | 𝔼h(X) - 𝔼h(Y) |, where the supremum runs over all 1-Lipschitz mappings on K, that is, all real-valued mappings h such that |h(a) - h(b)|≤a-b_K, for all a,b∈ K. In the subsequent Sections <ref>–<ref>, we will present our main results, respectively, in the one-dimensional, the finite-dimensional, and the functional setting. The three cases are presented separately since the classes of probabilistic distances we are able to deal with vary from one situation to the other. The following standard definition applies to the content of Sections <ref>, <ref> and <ref>, and is reported here for the sake of readability. See <cit.> for further details and proofs. Let K be a real separable Hilbert space, let X,Y be two K-valued random elements, and fix p≥ 1. We define the p-Wasserstein distance, between the distributions of X and Y, to be the quantity W_p(X,Y) :=( inf_(T,S)𝔼[ T-S^p_K])^1/p, where the infimum runs over all random elements (T,S) such that Tlaw=X and Slaw=Y. Definition <ref> will be applied to K=ℝ in Section <ref>, to K = ℝ^m in Section <ref> and to K equal to some appropriate L^2 space in Section <ref>. 
We note that, trivially, W_p≤ W_q for p≤ q, and also record the following two additional facts: – if U is an arbitrary random element defined on the same probability space as X (taking values in some metric space 𝔸 and with law μ_U), and X^u denotes a random element having the law of X conditionally on U=u, then W^p_p(X,Y) ≤∫_𝔸 W_p^p ( X^u, Y) μ_U(du). – in the case p=1 one has the dual representation W_1(X,Y) = sup_h∈ Lip(1) | 𝔼h(X) - 𝔼h(Y) |, where the supremum runs over all 1-Lipschitz mappings on K, that is, all real-valued mappings h such that |h(a) - h(b)|≤a-b_K, for all a,b∈ K. §.§ One-dimensional bounds Our first result, Theorem <ref>, measures the total variation and 1-Wasserstein distances between the output of a random neural network evaluated at a single input and a Gaussian random variable. To state it, recall that, given random variables X,Y, the total variation distance between the distributions of X and Y is defined as d_TV(X,Y) := sup_B∈ℬ(ℝ) |ℙ(X∈ B) - ℙ(Y∈ B) | where ℬ() denotes the Borel-measurable subsets of . Consider a random neural network x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 as in <ref> with a non-linearity σ that is polynomially bounded to order r as in Definition <ref>. Fix a network input x_α∈^n_0, a multi-index J such that |J|≤ r and let Z be a centered Gaussian random variable with variance D^J_α D^J_β K_αβ^(L+1)|_x_α = x_β, where we have used the notation introduced in (<ref>). (1) If {K^(ℓ)} is non-degenerate on x_α to order r in the sense of Definition <ref>, then there exists C>0, depending on J,L,σ,C_b,C_W,x_α, with the property that for each i=1,… n_L+1 and every n≥ 1 max{ W_1(D^J_α z_i;α^(L+1), Z), d_TV(D^J_α z_i;α^(L+1), Z) }≤ C n^-1/2. The constant C can be chosen uniformly when x_α^2/n_0 varies over a compact set. (2) Suppose further that C_b,C_W are chosen so that the random network z_α^(L+1) is tuned to criticality as in Definition <ref>. There exists a constant C_1, depending only on the universality class of σ, and a constant C_2, depending on J,C_b,C_W,σ, L such that for each i=1,… n_L+1 and every n≥ 1 max{ W_1(D^J_α z_i;α^(L+1), Z), d_TV(D^J_α z_i;α^(L+1), Z) }≤C_1(n/L)^-1/2+ C_2 n^-1. V_α = ∑_J=j_1,…, j_n_0∈ℕ^n_0 j_1+⋯+j_n_0≤ r v_J  ∂_x_1^j_1⋯∂_x_n_0^j_n_0|_x=x_α. Consider a random neural network x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 verifying Assumption <ref> with a non-linearity σ that is polynomially bounded to order r≥ 1 as in Definition <ref>, and recall notation (<ref>). Fix a network input x_α∈^n_0, and directional derivative operators V = {V_1,...,V_p} like in (<ref>). Fix also a multi-index J∈ℕ^p such that |J|≤ r, and let Z be a centered Gaussian random variable with variance V^J_α V^J_β K_αβ^(L+1)|_x_α = x_β, where we have adopted the notation (<ref>). If the infinite-width covariance structure {K^(ℓ)} is non-degenerate on the singleton x_α to order q= |J|≤ r with respect to V, in the sense of Definition <ref>. Then, the following conclusions hold: (1) there exists C>0, depending on r,V,J,x_α, 𝒫, with the property that, for each i=1,… n_L+1, max{ W_1(V^J_α z_i;α^(L+1), Z), d_TV(V^J_α z_i;α^(L+1), Z) }≤ C n^-1, and the constant C can be chosen uniformly when x_α^2/n_0 varies over a compact set; (2) the dependence on n in (<ref>) is optimal when q=0, in the following sense: denoting by Z' a centered Gaussian random variable with the same variance as z_i;α^(L+1), there exists C_0>0, depending on x_α and 𝒫, such that, for each i=1,… n_L+1, min{ W_1( z_i;α^(L+1), Z'), d_TV( z_i;α^(L+1), Z') }≥ C_0 n^-1. We prove Theorem <ref> in <ref>. 
Before presenting our results in the multi-dimensional setting, we make several remarks: (a) Let us give two examples of situations where Theorem <ref> applies: * When σ(t)=ReLU(t)=max0,t, we may take C_b=0,C_W=2 and V= ∂_x_i for some i. For any non-zero network input x_α, a simple computation shows that K_αα^(ℓ) = 2/n_0x_α^2, ∂_x_i;α∂_x_i;β K_αβ^(ℓ)|_α=β=2/n_0. Hence, the infinite width covariance structure is non-degenerate on the singleton x_α both to order 0 and to order 1 with respect to V. * By inspection of the proof of Theorem <ref> (given in Section <ref>) and by virtue of Theorem <ref>, one sees that the conclusion continues to hold if σ is smooth (that is, of class C^∞(ℝ)) and V^J_α V^J_β K_αβ^(L+1)|_x_α=x_β>0. In particular, we may take σ to be any smooth function such as tanh(t) and set C_b=0,C_W=1. For any non-zero network input x_α the recursion from Theorem <ref> then reads K_αα^(ℓ+1) = σ(z_i;α^(ℓ))^2_K^(ℓ), showing that the infinite width covariance structure is non-degenerate on the singleton x_α to order q=0. (b) We recall that V^0_α corresponds to the identity operator, so that Theorem <ref> in the case |J|=0 yields quantitative CLTs for the random variables z^(L+1)_i;α. (c) When |J| = 0 and L=1, our estimates on the total variation distance strictly improve those proved in <cit.> and <cit.>, that obtain a rate of convergence of the order n^-1/2 by using some version of Stein's method in dimension one. In particular, the results of <cit.> are based on the improved second-order Poincaré inequalities established in <cit.>. Similarly, in the case |J|=0 and L≥ 1 arbitrary, our estimates on W_1 strictly improve those that can be deduced by combining <cit.> with the general relation W_1≤ W_2. (d) In probabilistic approximations, it is typical to measure the distance between the laws of two random variables X,Y by using the so-called Kolmogorov distance, which is defined as d_K(X,Y) := sup_t∈ℝ| ℙ(X>t) -ℙ(Y>t) |. We observe that d_TV≥ d_K so that, in particular, our bound (<ref>) implies an estimate on the Kolmogorov distance d_K(V^J_α z_i;α^(L+1), Z) that is strictly better than the one implied by the standard relation d_K(V^J_α z_i;α^(L+1), Z) ≤ c √( W_1(V^J_α z_i;α^(L+1), Z)), with c an absolute constant – see <cit.>. We refer the reader to <cit.>, and the references therein, for further details on probabilistic distances. (e) (Maybe this should be rewritten?) Even when q=0, as stated, Theorem <ref> does not give an information on the dependence of the constant C in (<ref>) on the network depth L. To make this slightly more explicit, we note that for non-linearities such as ReLU and tanh there exist unique choices of the weight and bias variance scales C_b,C_W so that the infinite width Gaussian process described in Theorem <ref> is well-behaved at large L. For ReLU this corresponds to setting C_b=0,C_W=2, while for tanh it corresponds to taking C_b=0,C_W=1. This process is called tuning to criticality and was originally sketched in <cit.> and then expanded significantly in <cit.>. After turning to criticality and using Corollary 4.1 from <cit.>, the same proof as that of Theorem <ref> allows us to replace the upper bound in (<ref>) by C_1(L/n)^1/2 + C_2n^-2, where the constant C_1 no longer depends on L, but the constant C_2 does. This suggests, though does not prove, that the left hand side of (<ref>) should go to zero whenever L/n 0. §.§ Finite-dimensional bounds We now report what happens on the level of finite-dimensional distributions. 
For this, we recall that, for all integers m≥ 1, the convex distance between the distributions of two m-dimensional random vectors X and Y is d_c (X,Y) := sup_B| ℙ(X∈ B) - ℙ(Y∈ B) |, where the supremum runs over all convex B⊂ℝ^m. The convex distance d_c is a natural generalization of (<ref>) in a multivariate setting. Another popular distance for measuring the discrepancy between the distributions of random vectors is the so-called multivariate Kolmogorov distance, which is obtained from (<ref>) by restricting the supremum to the class of all hyperrectangles R⊂ℝ^m (see e.g. <cit.> and the references therein). Our choice of d_c over the multivariate Kolmogorov distance is motivated by the following two features: (i) d_c is invariant with respect to orthogonal and affine transformations, and (ii) d_c can be directly connected to multivariate transport distances through the estimate (<ref>) discussed below. In particular, property (i) will allow us to compare the distribution of the output of a given neural network and that of a possibly singular Gaussian random vector, see the forthcoming Theorem <ref>-(2) and its proof. We refer the reader to <cit.> for some classical examples of the use of d_c in the context of the multivariate CLT, as well as to <cit.> for a discussion of recent developments. Let x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 be a random neural network as in <ref> with a non-linearity σ that is polynomially bounded to order r as in Definition <ref>. Fix m≥ 1, a set 𝒜={α_1,…, α_m} and a finite collection of distinct non-zero network inputs { x_α : α∈𝒜}⊆ℝ^n_0. Finally, we consider a family B = { (J_ℓ, α_ℓ) : ℓ = 1,...,M} of distinct pairs such that M≥ 2, J_ℓ∈ℕ^n_0 is a multi-index verifying |J_ℓ|≤ r and α_ℓ∈𝒜. We write G :=D_α_ℓ^J_ℓΓ^(L+1)_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B∈^M× n_L+1, where ℝ^n_0∋ x_α↦ (Γ^(L+1)_1;α,...,Γ^(L+1)_n_L+1;α) is the centered Gaussian field with covariance CovΓ^(L+1)_i;α,Γ^(L+1)_j;β= δ_ijK_αβ^(L+1), as defined in (<ref>). (1) Assume that the infinite width covariance structure {K^(ℓ): ℓ = 1,...,L+1} is non-degenerate to the order r, in the sense of Definition <ref>. Then, the covariance matrix of G is invertible, and there exists a constant C_0>0 depending on r, B,C_b, C_W, σ, L such that d_cD_α_ℓ^J_ℓ z_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B, G≤ C_0 n^-1/2, where, in the previous estimate, we have implicitly regarded D_α_ℓ^J_ℓ z_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B and G as (M· n_L+1)-dimensional random vectors. (2) Assume that the non-linearity σ is smooth (σ∈ C^∞(ℝ)). Then, there exists a constant C_1>0 depending on r, B,C_b, C_W, σ, L such that d_cD_α_ℓ^J_ℓ z_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B, G'≤ C_1 n^-1/2, where G':=D_α_ℓ^J_ℓΓ'_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B∈^M× n_L+1, and ℝ^n_0∋ x_α↦ (Γ'_1;α,...,Γ'_n_L+1;α) is the centered Gaussian field with covariance CovΓ'_i;α,Γ'_j;β= δ_ij𝔼[Σ^(L)_αβ] = δ_ijκ^(L+1)_αβ, with Σ_αβ^(L) and κ^(L+1)_αβ defined according to (<ref>) and (<ref>), respectively. Let x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 be a random neural network verifying Assumption <ref> with a non-linearity σ that is polynomially bounded to order r≥ 1 as in Definition <ref>; recall notation (<ref>). Fix m≥ 1, a set 𝒜={α_1,…, α_m}, a finite collection of distinct non-zero network inputs { x_α : α∈𝒜}⊆ℝ^n_0 and a collection of directional derivatives V = {V_1,...,V_p} as in (<ref>). Further, consider a family B = { (J_k, α_k) : k = 1,...,M} of distinct pairs such that M≥ 2, where J_k∈ℕ^p is a multi-index verifying |J_k|≤ r and α_ℓ∈𝒜. 
Finally, for any multi-index J=j_1,…, j_p∈ℕ^p, use the notation (<ref>) and set G :=V_α_k^J_kΓ^(L+1)_i;α_k_1≤ i ≤ n_L+1 (J_k,α_k) ∈ B∈^M× n_L+1, where ℝ^n_0∋ x_α↦ (Γ^(L+1)_1;α,...,Γ^(L+1)_n_L+1;α) is the centered Gaussian field with covariance CovΓ^(L+1)_i;α,Γ^(L+1)_j;β= δ_ijK_αβ^(L+1), as defined in (<ref>). (1) Suppose the infinite width covariance structure {K^(ℓ): ℓ = 1,...,L+1} is non-degenerate to the order r on x_α: α∈𝒜 with respect to V, in the sense of Definition <ref>. Then, the covariance matrix of G is invertible, and there exists a constant C_0>0 depending on V, r, B, 𝒫 such that d_cV_α_k^J_k z_i;α_k_1≤ i ≤ n_L+1 (J_k,α_k) ∈ B, G≤ C_0 n^-1/2, where we have implicitly regarded V_α_k^J_k z_i;α_k_1≤ i ≤ n_L+1 (J_k,α_k) ∈ B and G as (M· n_L+1)-dimensional random vectors. (2) Assume that the non-linearity σ is smooth (σ∈ C^∞(ℝ)). Then, there exists a constant C_1>0 depending on V, r, B, 𝒫 such that d_cV_α_k^J_k z_i;α_k_1≤ i ≤ n_L+1 (J_k,α_k) ∈ B, G'≤ C_1 n^-1/2, where G':=V_α_k^J_kΓ'_i;α_k_1≤ i ≤ n_L+1 (J_k,α_k) ∈ B∈^M× n_L+1, and ℝ^n_0∋ x_α↦ (Γ'_1;α,...,Γ'_n_L+1;α) is the centered Gaussian field with covariance CovΓ'_i;α,Γ'_j;β= δ_ij𝔼[Σ^(L)_αβ] = δ_ijκ^(L+1)_αβ, with Σ_αβ^(L) and κ^(L+1)_αβ defined according to (<ref>) and (<ref>), respectively. As before, we consider integers r≥0 and L,n_0, n_L+1≥ 1, a nonlinearity σ: polynomially bounded to the order r≥0, as well as C_b≥ 0, C_W>0. We assume that x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 is a random neural network with input dimension n_0, output dimension n_L+1, hidden layer widths n_1,…, n_L, and non-linearity σ as in Definition <ref>. Let the above notation and assumptions prevail, and suppose that, for constants c_1≥ c_2>0 and some n≥ 1, c_2 n≤ n_1,…, n_L≤ c_1 n. We fix m≥ 1, a set 𝒜={α_1,…, α_m} and a finite collection of distinct non-zero network inputs { x_α : α∈𝒜}⊆ℝ^n_0. Finally, we consider a family B = { (J_ℓ, α_ℓ) : ℓ = 1,...,M} of distinct pairs such that M≥ 2, J_ℓ is a multi-index verifying |J_ℓ|≤ r and α_ℓ∈𝒜. We write G :=D_α_ℓ^J_ℓΓ^(L+1)_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B∈^M× n_L+1, where ℝ^n_0∋ x_α↦ (Γ^(L+1)_1;α,...,Γ^(L+1)_n_L+1;α) is the centered Gaussian field with covariance CovΓ^(L+1)_i;α,Γ^(L+1)_j;β= δ_ijK_αβ^(L+1), as defined in (<ref>). (1) Assume that the infinite width covariance structure {K^(ℓ): ℓ = 1,...,L+1} is non-degenerate to the order r, in the sense of Definition <ref>. Then, the covariance matrix of G is invertible, and there exists a constant C_0>0 depending on r, B,C_b, C_W, σ, L such that d_cD_α_ℓ^J_ℓ z_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B, G≤ C_0 n^-1/2, where, in the previous estimate, we have implicitly regarded D_α_ℓ^J_ℓ z_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B and G as (M· n_L+1)-dimensional random vectors. (2) Assume that the non-linearity σ is smooth (σ∈ C^∞(ℝ)). Then, there exists a constant C_1>0 depending on r, B,C_b, C_W, σ, L such that d_cD_α_ℓ^J_ℓ z_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B, G'≤ C_1 n^-1/2, where G':=D_α_ℓ^J_ℓΓ'_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B∈^M× n_L+1, and ℝ^n_0∋ x_α↦ (Γ'_1;α,...,Γ'_n_L+1;α) is the centered Gaussian field with covariance CovΓ'_i;α,Γ'_j;β= δ_ij𝔼[Σ^(L)_αβ] = δ_ijκ^(L+1)_αβ, with Σ^(L) and κ^(L+1)_αβ defined according to (<ref>) and (<ref>), respectively. We prove Theorem <ref> in <ref>. Before providing in the next section our infinite-dimensional results, we make several remarks, where we write for simplicity V^J_ℓ z_i;α_ℓ := V_α_ℓ^J_ℓ z_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B. 
(a) Under the assumptions of Point (2) of Theorem <ref>, one might have that the covariance matrix of the vector G is singular. In this case, the law of G is supported by a lower-dimensional subspace ℒ⊂ℝ^M· n_L+1, and, in principle, one might have that ℙ[V^J_ℓ z_i;α_ℓ∈ℒ] = 0 for any choice of n_1,...,n_L, which would imply in turn d_cV^J_ℓ z_i;α_ℓ, G = 1. This difficulty is resolved by replacing G with a vector G' having the same covariance matrix as V^J_ℓ z_i;α_ℓ. (b) Theorem 1.1 in <cit.> allows one to deduce an estimate analogous to (<ref>)–(<ref>) in the case r=0, where the left-hand sides of these bounds are replaced by the quantities W_2 ( z_i;α^(L+1) , G) and W_2 ( z_i;α^(L+1) , G'), respectively — see Definition <ref>. As discussed in Section <ref>, our findings are based on some refinements of the bounds established in <cit.> by means of the so-called Stein's method for multivariate approximations, whereas <cit.> exploit optimal transport arguments, close to those that we will use in an infinite-dimensional setting (see the forthcoming Section <ref>). It is also remarkable that our bounds (in the case r=0) are strictly stronger than those obtained by combining the results of <cit.> with the following standard estimate (proved in <cit.>): if N is a centered Gaussian vector with invertible covariance matrix Σ, then, for all random vectors F, d_c(F, N) ≤ C √(Σ_HS)· W_1 (F,N)^1/2. Indeed, applying (<ref>) to F =z_i;α^(L+1) and N = G or N=G' (and using the fact that W_1≤ W_2), one can infer from <cit.> an upper bound of the order n^-1/4 on d_cz_i;α^(L+1) , G and d_cz_i;α^(L+1) , G', as n→∞. (c) (Dimensional dependence) An inspection of the arguments rehearsed in our proofs (see Section <ref>) reveals the following estimates: (i) the constant C_0 in (<ref>) is such that C_0≤ a_0 (M· n_L+1)^65/24, where a_0 depends on σ, L, n_0, C_b, C_W; (ii) the constant C_1 in (<ref>) is such that C_1≤ a_1 λ_+^-3/2 R^65/24, where the constant a_1 depends on σ, L, C_b, C_W, n_0, the symbol R denotes the rank of the covariance matrix of G', and λ_+ is the smallest strictly positive eigenvalue of the covariance matrix of G'. This implies in turn the following rough estimate: if λ_+ is bounded away from zero as M→∞, then C_1 = O( M^65/24). The exponent 65/24 does not carry any specific meaning: it is an artifact of the (recursive) techniques used in <cit.> and can be improved in special situations. To see this, consider for instance the elementary situation where L=1 and B = {0, α_0} (so that | B| = 1). In this case, V^J_ℓ z_i;α_ℓ =( z_i;α_0)_1≤ i ≤ n_2 has the law of some multiple of a random vector with the form Z_n_1 = 1/√(n_1)∑_k=1^n_1 Y_k, where the {Y_k} are i.i.d. centered n_2-dimensional Gaussian vectors with identity covariance matrices. Now, assuming for simplicity that the non-linearity σ is bounded, one deduces from <cit.> (and some elementary computations left to the reader) that, denoting by Z a standard n_2-dimensional Gaussian vector, d_c(Z_n_1, Z) ≤ B n_1^-1/2, where B = O(n_2^7/4) as n_2→∞ (and the implicit constants are absolute). We also observe that, in the case where B= {(0, α_0)} (one input, no derivatives), the exponent 65/24 implicitly appearing in our bound can be reduced to 53/24. (d) We provide here one important example in which the infinite width covariance structure fails to be canonically non-degenerate but is instead non-degenerate with respect to a particular set of directional derivatives. 
Specifically, consider σ(t)=ReLU(t)=max0,t, C_b=0, C_W=2, r=1, and fix a network input x_α with x_α=1. Note that, since x_α↦ z_i;α^(ℓ) is homogeneous of degree one with respect to x_α, we have z_i;α^(ℓ) = (x_α·∇) z_i;α^(ℓ). A direct computation now shows that for any v_1,v_2∈^n_0 V_j = v_j ·∇ = ∑_i=1^n_0(v_j)_i ∂_x_i ⇒ V_j V_k K_αα^(ℓ) = 2/n_0v_jv_k. Thus, writing ∂_x_0=id we have ∂_x_i∂_x_jK_αα^(ℓ)_i,j=0,…, n_0 = 2/n_0Gramx_α, e_1,…, e_n_0, where e_j is the j-th standard unit vector. The Gram matrix is not invertible since the vectors are not linearly independent. In contrast, V=V_1,…, V_n_0-1 to be partial derivatives in any set of directions that are a basis for the orthogonal complement to x_α, we see from (<ref>) that the infinite width covariance structure is indeed non-degenerate to order 1 on x_α with respect to V. (e) Similarly to the results obtained in <cit.>, and as noted above in (c), the bounds we obtain diverge when the dimension M increases. It is hence natural to focus on functional bounds, which are indeed addressed in the next subsection. (f) In the case where B = {0, α_0} (one input, no derivatives), our results are comparable to <cit.>, where some slightly different set of assumptions on the nonlinearity σ is considered. (g) It is natural to conjecture that the rate of convergence n^-1/2 in our multidimensional estimates (<ref>)–(<ref>) could be improved to n^-1 as in the one-dimensional case (see Theorem <ref>). However, in order to establish such a result, one should prove a multidimensional version of Proposition <ref>, a task that seems to be outside the scope of current techniques (the crucial technical difficulty is that no existing version of the multidimensional Stein's method allows one to directly deal with the mappings emerging from the use of Lusin's theorem – see <cit.> for a discussion). §.§ Functional Bounds For the rest of the section, we let ^n_0∋ x_α↦ z_α^(L+1)∈^n_L+1 be a random neural network verifying Assumption <ref>; in particular, σ is polynomially bounded to the order r≥ 1. To simplify the discussion, from now on we will use the symbol ℳ_r to denote the class of all multi-indices J∈ℕ^n_0 such that |J|≤ r; with such a notation, one has that ℳ_0 = {0}. For the rest of the section, 𝕌 is an open ball contained in ^n_0. [Spaces of smooth functions] (a) For k≥ 0, we write C^k(𝕌; ^n_L+1) :=C^k(𝕌) to indicate the class of ^n_L+1-valued, k times differentiable functions on 𝕌. We also denote by C^k_b(𝕌) the subspace of C^k(𝕌) composed of functions whose derivatives of order ≤ k are bounded and uniformly continuous on 𝕌. It is a well-known fact (see e.g. <cit.>) that the elements of C^k_b(𝕌), as well as their derivatives, admit continuous extensions to the closure 𝕌̅. It follows that C^k_b(𝕌) can be identified with the space C^k(𝕌̅), that we endow with the supremum norm, defined as follows: for f = (f_1,...,f_n_L+1) ∈ C^k(𝕌̅), f_C^k(𝕌̅):=max_i∈[n_L+1]max_J∈ℳ_rmax_x∈𝕌̅ | D^J f_i(x)|. It is clear that the space C^k(𝕌̅) is Polish. In this paper, we will sometimes use the following fact (whose proof can be deduced along the lines of <cit.>): for every k≥ 0 and every m>k, the set C^m(𝕌̅) is a Borel subset of C^k(𝕌̅). Analogous conventions and results hold for the spaces C^k,k(𝕌×𝕌) and C^k,k(𝕌̅×𝕌̅), k≥ 0. (b) For r≥ 0, we define 𝕎^r,2(𝕌) :=𝕎^r,2(𝕌; ^n_L+1) to be the Sobolev space obtained as the closure of square-integrable ℝ^n_L+1-valued mappings on 𝕌 with square-integrable (weak) derivatives up to the order r, see <cit.>. 
We observe that 𝕎^r,2(𝕌) is a closed subspace of the Hilbert space H:= L^2(ℳ_r× [n_L+1]×𝕌,dν_0⊗ dν_1 ⊗ dx) (from which it inherits the inner product), where [n] := {1,...,n}, and ν_0 and ν_1 are the counting measures on ℳ_r and [n_L+1], respectively; plainly, for r=0, the spaces 𝕎^0,2(𝕌) and H=L^2( [n_L+1]×𝕌, dν_1 ⊗ dx) coincide. (c) For every r≥ 0, there exists a canonical continuous injection ι from C^r(𝕌̅) to 𝕎^r,2(𝕌). This implies that, if X is a random element with values in C^r(𝕌̅), then ι(X) is a well-defined random element with values in 𝕎^r,2(𝕌). For the sake of brevity, we will often refer to this fact by saying that “X is regarded as a random element with values in 𝕎^r,2(𝕌)” (or some equivalent formulation), and write X instead of ι(X) by a slight abuse of notation. Similar conventions are tacitly adopted to deal with the spaces C^r,r(𝕌̅×𝕌̅) and 𝕎^r,2(𝕌)⊗𝕎^r,2(𝕌). (d) We will use the following special consequence of the Sobolev embedding theorem (as stated e.g. in <cit.> or <cit.>): if 𝕌 is an open ball (or, more generally, a Lipschitz domain) and u∈ C^∞(𝕌̅):= ⋂_k C^k(𝕌̅), then, for all k≥ 1, u _C^k(𝕌̅)≤ A · u _𝕎^r,2(𝕌), where r := k+1+⌊n_0/2⌋, ⌊ y ⌋ stands for the integer part of y, and A is an absolute constant uniquely depending on 𝕌. §.§.§ Random fields as random elements Given the ball 𝕌⊂^n_0, we define the random field z^(L+1)_𝕌 := {z^(L+1)_i;x_α : i=1,...,n_L+1 ; x_α∈𝕌}. Our aim is to compare the law of z^(L+1)_𝕌 with that of Γ^(L+1)_𝕌 ={Γ^(L+1)_i;α : i∈ [n_L] ; x_α∈𝕌}, where, as before, x_α↦ (Γ_1;α^(L+1), ...,Γ_n_L+1;α^(L+1)) is the centered Gaussian field with covariance 𝔼(Γ^(L+1)_i;αΓ^(L+1)_j;β) = δ_ij K^(L+1)_αβ, with K^(L+1) recursively defined according to (<ref>). In view of Remarks <ref>, <ref> and <ref>-(c), z^(L+1)_𝕌 and Γ^(L+1)_𝕌 can be regarded as both C^q(𝕌)- and 𝕎^q;2(𝕌)-valued random elements, for all q∈{0,...,r-1}. The case q=r is more delicate, however, and we will sometimes make the following assumption: The non-linearity σ is polynomially bounded to the order r≥ 1, and both z^(L+1)_𝕌 and Γ^(L+1)_𝕌 are random elements with values in 𝕎^r;2(𝕌) with probability 1. Though we do not know if it always holds, this assumption is easy to verify in the following three cases: (i) σ is smooth (i.e. in C^∞()) and r≥ 1 is arbitrary. (ii) σ is ReLU or leaky ReLU and r=1. (ii) σ is any non-linearity that is polynomially bounded to the order r≥ 1 and C_b>0. Indeed, case (i) is trivial. Case (ii) follows from the observation that the network function is continuous and piecewise linear subordinate to a finite partition of the input space ^n_0 into a finite collection of convex polyehdra on which the network is affine. And case (iii) follows from the fact when C_b>0 the set of inputs x_α for which σ is not r-times differentiable at z_i;α^(ℓ) for some i,ℓ has Lebesgue measure 0. With this in mind, from now on, for an integer q≤ r-1, let us write W_2;q to indicate the distance W_2 defined in (<ref>) for K = 𝕎^q,2(𝕌) (and we extend this definition to q=r when Assumption <ref> holds). §.§.§ Bounds in Sobolev spaces For q≤ r, we canonically associate with the covariance (<ref>) the (compact, symmetric and positive) trace-class operator K : 𝕎^q,2(𝕌)→𝕎^r,2(𝕌) given by 𝕎^q,2(𝕌)∋ h = {h_i(x) : i∈ [n_L+1], x∈𝕌}↦ Kh := { ( Kh)_j(x_β) = ∑_J∈ℳ_q∫_𝕌 D_α^J h_j(x_α) D^J_α K^(L+1)_αβ dx_α : j ∈ [n_L+1], x_β∈𝕌}, and denote by λ_1;q^(L+1)≥λ_2;q^(L+1)≥⋯≥λ_k;q^(L+1)≥⋯≥ 0 its eigenvalues. 
Let T be a generic smooth covariance kernel, and let K_T : 𝕎^q,2(𝕌)→𝕎^q,2(𝕌) be the operator obtained from (<ref>) by replacing K^(L+1) with T. Then, exploiting the fact that K_T g = 0 for all g∈ H∩𝕎^r,2(𝕌)^⊥, one deduces that K_T^2_HS = T^2_𝕎^q,2(𝕌)⊗𝕎^q,2(𝕌) :=n_L+1×∑_I,J∈ℳ_q∫_𝕌∫_𝕌 D_α^J D_β^I T(x_α,x_β)^2 dx_α dx_β In addition to the Wasserstein distances W_2;q introduced above, we will consider a smoother notion of discrepancy between the distributions of Hilbert space-valued random elements, that we borrow from <cit.>. More precisely, following Bourguin and Campese <cit.>, given two random elements X,Y with values in a real separable Hilbert space K, we define the distance d_2, between the distributions of X and Y, to be the quantity d_2(X,Y) := sup_g∈ C^2_b(K) : sup_x∈ K∇^2 g(x)_K^⊗ 2≤ 1 |𝔼[g(X)] - 𝔼[g(Y)]|, where C_b^2(K) indicates the class of twice Fréchet differentiable real-valued mappings on K with bounded second derivative; it is a standard fact that d_2 metrizes convergence in distribution on K. The following statement contains explicit bounds on the functional Gaussian approximation of random neural networks with arbitrary depth. It is one of the main contributions of our work. Let the above assumptions prevail (in particular, σ is polynomially bounded to the order r≥ 1), and suppose moreover that the infinite width covariance structure {K^(ℓ): ℓ = 1,...,L+1} is canonically non-degenerate up to the order q≤ r-1, in the sense of Definition <ref>, for all finite subsets x_𝒜⊂𝕌. Then, one has the following two estimates: (1) There exists a constant C>0 depending on q,𝕌,𝒫 (see (<ref>)) such that, d_2(z^(L+1)_𝕌 , Γ^(L+1)_𝕌) ≤ Cn^-1/2, where d_2 is the distance defined in (<ref>), with K = 𝕎^q,2(𝕌). (2) Suppose moreover that there exists p∈ (0,1) such that ∑_k=1^∞( λ_k;q^(L+1))^p<∞; then, there exists a constant C>0 depending on q,p,𝕌,𝒫 such that W_2;q(z^(L+1)_𝕌 , Γ^(L+1)_𝕌) ≤ C(1/n)^1-p/2(2-p). The conclusions of Points (1) and (2) hold, more generally, for every q≤ r if one of the following set of assumptions is verified: (i) the non-linearity σ is smooth (regardless of any non-degeneracy assumption on the infinite-width covariance structure); (ii) Assumption <ref> holds and the infinite width covariance structure {K^(ℓ): ℓ = 1,...,L+1} is canonically non-degenerate up to the order r. A careful inspection of the proof reveals that the constants in Theorem <ref> depend on the volume measure of 𝕌 and hence, indirectly, on the input dimension n_0. To make the comparison with the existing literature more transparent, it is convenient to normalise 𝕌 to have unit measure, analogously to what was done in <cit.>, <cit.> and <cit.>, see the discussion below in Example <ref> and Section <ref>. Consider the case in which r=1 and the infinite-width covariance structure is canonically non-degenerate to the order q=0 on every finite collection of inputs x_𝒜⊂^n_0. Then, the conclusions of Points (1) and (2) in Theorem <ref> continue to hold when replacing the ball 𝕌 with any bounded subset 𝕍⊂^n_0 (possibly lower dimensional) endowed with some finite measure μ. In this case, one has to interpret 𝕎^0,2(𝕌) to be the Hilbert space L^2(𝕍, dμ), in such a way that the eigenvalues considered in (<ref>) are those of the integral operator on L^2(𝕍, μ) → L^2(𝕍, μ) given by h↦∫_𝕍 K^(L+1)(x,y) h(y) μ(dy). 
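To give a concrete feel for the eigenvalues λ_k and the summability condition on ∑_k λ_k^p used above, the following self-contained sketch discretizes (Nyström approximation) the integral operator of the previous remark in the special case taken up next: a one-hidden-layer infinite-width ReLU network on the circle 𝕊^1 with the uniform probability measure. The closed form used for the limiting kernel is the standard arc-cosine expression for ReLU, assuming C_b=0, C_W=2 and the usual layer-to-layer recursion; the grid size, the choice n_0=2 and the values of p are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Nystrom approximation of the integral operator h -> \int K(x,y) h(y) mu(dy)
# on the unit circle S^1 with the uniform probability measure, for the
# infinite-width one-hidden-layer ReLU kernel (arc-cosine kernel of degree 1).

M = 1000                                   # number of quadrature nodes (illustrative)
theta = 2 * np.pi * np.arange(M) / M       # points on the circle
dtheta = np.abs(theta[:, None] - theta[None, :])
dtheta = np.minimum(dtheta, 2 * np.pi - dtheta)   # pairwise angles
rho = np.cos(dtheta)                       # K^(1)(x,y) = <x,y> for unit inputs, C_b=0, C_W=2, n_0=2
# K^(2)(x,y) = C_W * E[ReLU(u)ReLU(v)] with (u,v) standard bivariate normal, corr = rho
K2 = (np.sin(dtheta) + (np.pi - dtheta) * rho) / np.pi

evals = np.linalg.eigvalsh(K2 / M)         # Nystrom eigenvalues of the operator
evals = np.clip(np.sort(evals)[::-1], 0.0, None)   # drop tiny negative round-off values

for p in (0.4, 0.6, 0.8):
    print(f"p = {p}:  sum_k lambda_k^p  ~ {np.sum(evals ** p):.4f}")
print("largest eigenvalues:", np.round(evals[:6], 4))
```

The partial sums ∑_k λ_k^p give a quick numerical indication of which exponents p are admissible for this kernel; the constants and normalisations above are assumptions made only for this illustration.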
If σ(x) = max{0,x} (ReLU), then we know that Assumption <ref> is verified, and that the infinite-width covariance structure is canonically non-degenerate up to the order q=0 on every finite collection of inputs x_𝒜⊂^n_0. We can therefore apply the content of Remark <ref> to the case 𝕍 = 𝕊^n_0-1 (sphere) endowed with the unit mass Haar measure. In this case, the results of <cit.> imply that (<ref>) is true for q=0 and all p> 1/(2+n_0), which implies that, for all ϵ >0, there exists a constant C>0 depending on ϵ, ℙ such that W_2;0(z^(L+1)_𝕌 , Γ^(L+1)_𝕌) ≤ C(1/n)^n_0+1/4n_0+12 - ϵ. This result can be compared with the bounds established in similar circumstances but for shallow networks (i.e., for L=1) by <cit.>, where a logarithmic rate O((log n)^-1) is obtained, and by <cit.>, whose bound is of order O(n^-3/(4n_0-2)); see Section <ref> for more discussion and details. §.§.§ Embedding of smooth non-linearities In this section, we assume that Assumption <ref> is verified for a certain non-linearity σ∈ C^∞(ℝ) that is moreover polynomially bounded to the order r, for every r≥ 1 (this is equivalent to saying that all derivatives of σ are polynomially bounded). In this case, one has that K^(L+1)∈ C^∞, ∞( ℝ^n_0×ℝ^n_0), and the results stated in <cit.> yield that, since 𝕌 is an open ball, the estimate (<ref>) holds for all r,p>0. Thanks to the last part of Theorem <ref>, this implies in turn that, for all r≥ 1 and all ϵ >0, there exists a constant C>0 depending on r, ϵ, 𝒫 (see (<ref>)) such that W_2;r(z^(L+1)_𝕌 , Γ^(L+1)_𝕌) ≤ C(1/n)^1/4 - ϵ. By using (<ref>) one deduces from (<ref>) some remarkable uniform estimates, that can be expressed in terms of the transport distance W_∞; k, defined as follows: given two random elements X,Y with values in C^k(𝕌̅), we set W_∞; k(X,Y):= ( inf_(T,S)𝔼[ T-S^2_C^k(𝕌̅)])^1/2, where the infimum runs over all random elements (T,S) such that Tlaw=X and Slaw=Y. Let the assumptions of the present section prevail, and fix k≥ 1 and ϵ>0. Then, there exists a probability space (Ω_0, ℱ_0, ℙ_0) supporting two random elements X,Y with values in C^k(𝕌̅) such that (i) Xlaw= z^(L+1)_𝕌; (ii) Ylaw=Γ^(L+1)_𝕌; (iii) There exists a constant C>0 depending on r,k, ϵ, 𝕌,𝒫 such that 𝔼_0[X- Y^2_C^k(𝕌̅)]^1/2≤ C(1/n)^1/4 - ϵ. In particular, one has that, for all ϵ>0, W_∞; k(z^(L+1)_𝕌,Γ^(L+1)_𝕌)≤ C(1/n)^1/4 - ϵ, where the constant C>0 is the same as in (<ref>). The previous result implicitly uses the fact that, under the assumptions in the statement, z^(L+1)_𝕌 and Γ^(L+1)_𝕌 take values in C_b^k(𝕌) (and, consequently, in C^k(𝕌̅)) with probability one. (a) The results in Theorem <ref> can be exploited to obtain useful bounds for the Wasserstein distance between the finite-dimensional distributions of neural networks and their Gaussian limit. Indeed for some i ∈ (1,...,n_L+1) consider the two M-dimensional vectors (z_i;α_k^(L+1))_k=1,2,...,M, G; under the previous assumptions and notation, equation (<ref>) and some simple algebra yield a bound of the form W_2 (z_i;α^(L+1),G) ≤ C ×√(M)×(1/n)^1/4 - ϵ, where W_2 is the distance defined in (<ref>) for K=ℝ^M. In view of the dependence of the constants in equations (<ref>) and (<ref>) on the dimension of the vectors (z_i;α^(L+1),G) (see also equation (<ref>)), it is immediate to check that, for M large enough with respect to n, the bound in (<ref>) can be tighter than those in Theorem <ref>. (b) We believe that the uniform convergence results established in Theorem <ref> can open several venues for further research. 
In particular, these results are a first step towards establishing weak convergence for geometric and topological functionals of neural networks at the initialization step: for instance, the distribution of their critical points and extrema, the Euler–Poincaré characteristic of their excursion sets and their nodal volume. On the one hand, these functionals can provide neat characterizations of the complexity of the neural network landscape; on the other hand, the analytic determination of their expected values and moments is made possible, in the limiting Gaussian case, by classical tools of stochastic geometry such as the Kac–Rice Theorem and the Gaussian Kinematic Formula, see <cit.> and <cit.>. We leave these topics for further research. (c) In our proofs, we will exploit the following property, valid for all k≥ 0 and analogous to (<ref>): if X,Y are random elements with values in C^k(𝕌̅) and V is a random element defined on the same probability space as X and taking values in some Polish space 𝒱 and with law ℙ_V, then W^2_∞; k(X,Y) ≤∫_𝒱 W_∞;k^2 ( ℙ_X|V=v, ℙ_Y) dℙ_V(v), see again <cit.>. § RELATED WORK A few papers have recently addressed quantitative functional central limit theorems for neural networks in the shallow case of L=1 hidden layer. More precisely, <cit.> and <cit.> have studied one-hidden-layer neural networks, with random initialization models that are slightly different from ours. In particular, in both cases it is assumed that x_α∈𝕊^n_0-1; also, their random weights are assumed to be Rademacher sequences for the second layer (in the special case of polynomial activations, <cit.> allows for more general random weights with finite fourth moments). The random coefficients in the inner layer are Gaussian for <cit.>, uniform on the sphere in <cit.>. For activation functions which are polynomials of order p, the bounds by <cit.> and <cit.> are respectively of order W_2 (z^(L+1)_𝕊^n_0-1,Γ_𝕊^n_0-1) ≤ C n_0^{5p/6-1/2} n^{-1/6} and W_2 (z^(L+1)_𝕊^n_0-1,Γ_𝕊^n_0-1) ≤ C (n_0+p)^{n_0} n^{-1/2}; these rates can be compared with those given above in Theorem <ref>, which for L=1 and 𝕌=𝕊^n_0-1 yield a decay of square-root order, irrespective of the input dimension. In the more relevant case of ReLU nonlinearities, these same authors obtain, respectively: W_2 (z^(L+1)_𝕊^n_0-1,Γ_𝕊^n_0-1) ≤ C ( (loglog n · log n_0)/log n )^{3/4} and W_2 (z^(L+1)_𝕊^n_0-1,Γ_𝕊^n_0-1) ≤ 7 · n^{-3/(4n_0-2)}; comparing with the results discussed above in Example <ref>, it is easy to see that the bounds in the present paper improve from logarithmic to algebraic compared to <cit.>, and are exponentially more efficient in the input dimension n_0 compared to <cit.>. It should also be noted that both <cit.> and <cit.> cleverly use some special properties of the sphere and its eigenfunctions to construct explicit couplings of the neural networks with Gaussian processes; as such, it does not seem trivial to generalize their arguments to arbitrary input spaces and/or to the multi-layer framework. Even more recently, the authors in <cit.> have considered one-hidden-layer networks (L=1) on the sphere with Gaussian initializations; they have hence exploited functional quantitative central limit results by <cit.> to obtain the following bounds in the d_2 norm, in the ReLU and polynomial case, respectively: d_2 (z^(L+1)_𝕊^n_0-1,Γ_𝕊^n_0-1) ≤ C ( 1/log n )^{3/4} and d_2 (z^(L+1)_𝕊^n_0-1,Γ_𝕊^n_0-1) ≤ C n^{-1/2}.
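These comparisons are easy to illustrate empirically, at least in the one-dimensional setting of a single network input. The snippet below samples independent finite-width ReLU networks (two hidden layers, no biases) at a fixed input and estimates the 1-Wasserstein distance to the limiting Gaussian. It is only a rough illustration under stated assumptions: the conventions C_b=0, C_W=2 and the standard infinite-width ReLU value (2/n_0)‖x‖² for the limiting variance are assumed, the widths, sample sizes and seed are arbitrary, and the Monte Carlo error itself is of order a few 10^-3, so only small widths show a clearly visible gap.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def network_output(x, widths, C_W=2.0):
    """One forward pass of a random ReLU network (no biases, C_b = 0)."""
    h = x
    for k, n_out in enumerate(widths):
        n_in = h.size
        W = rng.normal(0.0, np.sqrt(C_W / n_in), size=(n_out, n_in))
        z = W @ h
        h = np.maximum(z, 0.0) if k < len(widths) - 1 else z   # ReLU on hidden layers only
    return h[0]

n0, L = 5, 2
x = rng.normal(size=n0)
x /= np.linalg.norm(x)                 # normalised input
K_lim = 2.0 / n0                       # limiting output variance under these conventions

for n in (8, 32, 128):
    samples = np.array([network_output(x, [n] * L + [1]) for _ in range(4000)])
    gauss = rng.normal(0.0, np.sqrt(K_lim), size=100000)
    print(f"width n = {n:4d}   empirical W_1 to N(0, K) ~ "
          f"{wasserstein_distance(samples, gauss):.4f}")
```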
The bounds obtained in the present paper are tighter: for instance, for the ReLU case they are algebraic rather than logarithmic in the number of nodes n, even when applied to the Wasserstein metric (which, we recall, is strictly stronger than d_2). Moreover, the argument in <cit.> exploits a direct computation of moments and cumulants which seems difficult to extend to networks of arbitrary depth. To the best of our knowledge, the only paper so far devoted to quantitative functional central limit theorems for multi-layer networks is <cit.>, where the authors establish bounds on the uniform Wasserstein distance between a neural network defined on a sphere, and a Gaussian field with matching covariance. The results in <cit.> allow for non-Gaussian weights and hold with respect to rather stringent distance functions. However, the bounds established in <cit.> do not apply to the regime n_1≍ n_2≍⋯ n_L-1 considered in the present paper. As a consequence, a direct comparison with our findings is not possible. § PREPARATORY RESULTS §.§ Variance estimates Our proofs rely extensively on the following estimates from <cit.> on the fluctuations of the random covariances Σ^(ℓ), defined in (<ref>). In what follows, we write κ(Z_1,...,Z_m) to indicate the joint cumulant of random variables Z_1,...,Z_m (see e.g. <cit.> for definitions), with the special notation κ_m(Z) in the case Z_1= ⋯ = Z_m=Z. Let x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 be a random neural network verifying Assumption <ref> where, for r≥ 1, σ is polynomially bounded to order r in the sense of Definition <ref>. Fix also a collection of distinct non-zero network inputs x_𝒜:=x_α, α∈𝒜 and directional derivative operators {V_1,...,V_p} as in (<ref>). Suppose that either σ is smooth or that the infinite width covariance structure K^(ℓ) is non-degenerate to order q≤ r on x_𝒜 with respect to V, in the sense of Definition <ref>. Then, the following asymptotic relations are in order: (1) for ℓ=1,…, L, all multi-indices J_1,J_2 of order at most q, and any network inputs x_α_1,x_α_2∈ x_𝒜 we have for all n≥ 1 max Var(V_α_1^J_1 V_α_2^J_2Σ_α_1α_2^(ℓ)), V_α_1^J_1V_α_2^J_2{Σ_α_1α_2^(ℓ)- K_α_1α_2^(ℓ)}|≤ Cn^-1, where for a multi-index J=j_1,…, j_p we have used notation (<ref>) and have adopted the notational conventions V_α_1^J_1 V_α_1^J_2Σ_α_1α_1^(ℓ) := V_α_1^J_1 V_α_2^J_2Σ_α_1α_2^(ℓ) |_x_α_1 =x_α_2 , V_α_1^J_1 V_α_1^J_2𝔼[Σ_α_1α_1^(ℓ)] := V_α_1^J_1 V_α_2^J_2𝔼[Σ_α_1α_2^(ℓ)] |_x_α_1 =x_α_2 , V_α_1^J_1V_α_1^J_2K_α_1α_1^(ℓ) := V_α_1^J_1V_α_2^J_2K_α_1α_2^(ℓ) |_x_α_1 = x_α_2. The constant C depends on α_1,α_2,J_1,J_2,ℓ,r,q, 𝒫 (see (<ref>)) but is uniform over α _1,α_2 when the ratios x_α_1^2/n_0, x_α_2^2/n_0 vary over a compact set. (2) When r=1 and 𝒜 = {α} is a singleton, one has also that κ_3(Σ_αα^(ℓ)) ≤ C_1 n^-2, κ_4(Σ_αα^(ℓ)) ≤ C_2 n^-3, where the constants C_1,C_2 depend on α,ℓ, 𝒫 and are uniform over α when the ratio x_α_1^2/n_0 varies over a compact set. (3) Again when r=1 and 𝒜 = {α} is a singleton, there exist strictly positive constants B_1, B_2 and D_1, D_2 (depending on α,ℓ, 𝒫 and uniform over α when the ratio x_α_1^2/n_0 varies over a compact set) such that | Var(Σ_αα^(ℓ)) - B_1 n^-1|≤ B_2 n^-2, and | Σ_αα^(ℓ)- K_αα^(ℓ) - D_1 n^-1|≤ D_2 n^-2. Although the proof of Theorem <ref> is somewhat technical, for the sake of completeness we indicate here a few of the key ideas and refer the interested reader to 7 of <cit.> for the details. 
The starting point of the approach is the following structural properties of random neural networks: * The sequence of fields z_α^(ℓ) is a Markov chain with respect to ℓ. * Conditional on the sigma algebra ℱ^(ℓ) defined by z_α^(ℓ), the field z_α^(ℓ+1) is a Gaussian field with independent components z_i;α^(ℓ+1). * The conditional variance Σ_αα^(ℓ) of each component z_i;α^(ℓ+1) depends on z_α^(ℓ) only through random variables of the form 𝒪_f^(ℓ):= (1/n_ℓ) ∑_j=1^n_ℓ f(z_j;α^(ℓ)). The article <cit.> refers to such random variables as collective observables. * Centered moments of collective observables depend on n as if the random variables f(z_j;α^(ℓ)) were independent: 𝔼[ ( 𝒪_f^(ℓ) - 𝔼[𝒪_f^(ℓ)] )^q ] = O_q( n^{-⌈ q/2 ⌉} ), q≥ 0. Establishing this is the most difficult technical aspect of <cit.>. The basic idea is to proceed by induction on ℓ. When ℓ=1, the neuron pre-activations z_i;α^(1) are independent and hence the estimate (<ref>) is straightforward. When ℓ≥ 2, however, the neuron pre-activations z_i;α^(ℓ) are not independent. The idea is to analyze them by first using the law of total cumulance to write cumulants of collective observables in layer ℓ+1 in terms of cumulants of such objects at layer ℓ. Once the estimate (<ref>) is established, it is now fairly straightforward to study the mean and variance of Σ_αα^(ℓ). So let us now explain, mostly dispensing with rigor, how these four ideas come together to obtain a recursive description of the distribution of the field z_α^(ℓ+1) in terms of that of z_α^(ℓ) (we stick here to the case of a single input x_α). Denoting by ξ=(ξ_1,…,ξ_m) dual variables, consider the characteristic function p^(ℓ+1)(ξ):= 𝔼[ exp( -i∑_i=1^m ξ_i z_i;α^(ℓ+1) ) ] of m neuron pre-activations z_i;α^(ℓ+1), i=1,…, m. Conditioning on z_α^(ℓ) and using conditional Gaussianity allows us to write p^(ℓ+1)(ξ) = 𝔼[ exp( -(1/2)‖ξ‖^2 Σ_αα^(ℓ) ) ], where we note that Σ_αα^(ℓ) is a collective observable in layer ℓ. Writing as before κ_αα^(ℓ):= 𝔼[Σ_αα^(ℓ)] and Δ_αα^(ℓ):=Σ_αα^(ℓ) - 𝔼[Σ_αα^(ℓ)], we find p^(ℓ+1)(ξ) = 𝔼[ exp( -(1/2)‖ξ‖^2 Δ_αα^(ℓ) ) ] exp( -(1/2)‖ξ‖^2 κ_αα^(ℓ) ). The second term is precisely the characteristic function of a centered m-dimensional Gaussian with iid components of variance κ_αα^(ℓ). Moreover, at least heuristically, the first term can be written 𝔼[ exp( -(1/2)‖ξ‖^2 Δ_αα^(ℓ) ) ] = ∑_q≥ 0 𝔼[ (Δ_αα^(ℓ))^q ] ((-1)^q/(2^q q!)) ‖ξ‖^{2q}. Since multiplication by -‖ξ‖^2 corresponds, on the Fourier side, to the Laplacian in the variables z_i;α^(ℓ+1), we have for any reasonable test function f that 𝔼[ f(z_i;α^(ℓ+1), i=1,…, m) ] = ∑_q=0^∞ (1/(2^q q!)) 𝔼[ (Δ_αα^(ℓ))^q ] ⟨ ( ∑_i=1^m ∂_z_i;α^2 )^q f(z_i;α, i=1,…, m) ⟩_κ_αα^(ℓ), where ⟨·⟩_κ_αα^(ℓ) denotes expectation with respect to (z_i;α, i=1,…, m), a vector of iid centered Gaussians with variance κ_αα^(ℓ). The concentration estimates (<ref>) ensure that this expression is a power series in 1/n. In particular, 𝔼[ f(z_i;α^(ℓ+1), i=1,…, m) ] = ⟨ f(z_i;α, i=1,…, m) ⟩_κ_αα^(ℓ) + (𝔼[(Δ_αα^(ℓ))^2]/8) ⟨ ( ∑_i=1^m ∂_z_i;α^2 )^2 f(z_i;α, i=1,…, m) ⟩_κ_αα^(ℓ) + O(n^-2). To derive usable recursions for cumulants of z_i;α^(ℓ+1) in terms of those of z_i;α^(ℓ), one now notes that 𝔼[(Δ_αα^(ℓ))^2] is precisely one third of the 4-th cumulant of z_i;α^(ℓ+1) (see Lemma <ref>) and takes f to be various polynomials. The following statement – used repeatedly in our proofs – is a direct consequence of Theorem <ref>-(1). As before, we write ℳ_r to denote the class of all multi-indices J such that |J|≤ r. Fix a compact domain 𝕋⊂ℝ^n_0, consider a Borel-measurable 𝕌⊂𝕋. Suppose that the assumptions of Theorem <ref> are satisfied for every finite collection of distinct network inputs x_𝒜⊂𝕌, and let μ be a finite measure on ℳ_q×𝕌.
Then, defining A_n:= ∫_ℳ_q×𝕌∫_ℳ_q×𝕌 Var(V_α_1^J_1 V_α_2^J_2Σ_α_1α_2^(L)) μ(dJ_1, dx_α_1)μ(dJ_2, dx_α_2), and B_n:= ∫_ℳ_q×𝕌∫_ℳ_q×𝕌𝔼[( V_α_1^J_1 V_α_2^J_2Σ_α_1α_2^(L) - V_α_1^J_1V_α_2^J_2 K_α_1α_2^(L))^2] μ(dJ_1, dx_α_1)μ(dJ_2, dx_α_2), one has that max{A_n, B_n}≤ C μ(ℳ_q×𝕌)^2 n^-1, where the constant C depends on 𝕋, 𝒫,q,r. §.§.§ Connection with output cumulants The variances appearing in (<ref>) admit a direct interpretation in terms of the cumulants of the network outputs {z^(L+1)_i;α} and their derivatives. To see this, we record the following elementary statement (the proof is left to the reader). Consider a random vector (X,Y) as well as a positive definite 2× 2 symmetric random matrix Σ = {Σ (i,j) : 1≤ i,j≤ 2} with square-integrable entries. Assume that, conditionally on Σ, (X,Y) is a centered Gaussian vector with covariance Σ. Then, X and Y have finite moments of order 4, and 2 Var (Σ(1,2)) = κ(X,X,Y,Y) - Cov(Σ(1,1), Σ(2,2)). Applying Lemma <ref> to X = V_α_1^J_1 z^(ℓ+1)_i;α_1 and Y = V_α_2^J_2 z^(ℓ+1)_i;α_2, and exploiting Lemma <ref>, yields the remarkable identity 2 Var(V_α_1^J_1 V_α_2^J_2Σ_α_1α_2^(ℓ)) + Cov(V_α_1^J_1 V_α_1^J_1Σ_α_1α_1^(ℓ), V_α_2^J_2 V_α_2^J_2Σ_α_2α_2^(ℓ)) =κ(V_α_1^J_1z^(ℓ+1)_i;α_1, V_α_1^J_1z^(ℓ+1)_i;α_1, V_α_2^J_2z^(ℓ+1)_i;α_2, V_α_2^J_2z^(ℓ+1)_i;α_2), where we have used (<ref>); in particular, 3 Var(V_α_1^J_1 V_α_1^J_1Σ_α_1α_1^(ℓ)) = κ_4(V_α_1^J_1 z^(ℓ+1)_i;α_1). In the next two sections, we will focus on probabilistic bounds based on the so-called Stein's method for normal approximations. The reader is referred e.g. to <cit.> for a general introduction to this topic. §.§ Stein's bounds in dimension 1 Our main tool for one-dimensional probabilistic approximations is the following new estimate on the normal approximation of conditionally Gaussian random variables. Let F be a centered random variable with finite variance σ^2>0, and consider Z∼ N(0,σ^2). Assume that there exists an auxiliary integrable random variable A≥ 0 such that, conditionally on A, the random variable F has a centered Gaussian distribution with variance A. Then, for all functions f : ℝ→ℝ continuously differentiable and Lipschitz and every φ : ℝ_+ →ℝ bounded, 𝔼[ F f(F) φ(A)] = 𝔼[A f'(F) φ(A)], so that, in particular, σ^2 = 𝔼(A). Moreover, the following two properties hold: (1) if A is square-integrable, then d_TV(F,Z) ≤ (8/σ^4) Var (A) and W_1(F,Z) ≤ (4/σ^2) Var (A); (2) if 𝔼(A^4) <∞, then min{ 2 d_TV(F,Z) ; W_1(F,Z)}≥ e^{-σ^2/2}| (1/8) Var(A) - (1/48)𝔼[(A-σ^2)^3] +R |, where |R| ≤ 384^{-1} e^{σ^2/2}𝔼[(A-σ^2)^4]. By virtue of Lemma <ref>, one has that Var(A) = (1/3)κ_4(F). Formula (<ref>) follows by conditioning and Gaussian integration by parts, and we can consequently focus on the proof of Point (1). Using the fact that the random variable F := F/σ verifies the assumptions in the statement with A := A/σ^2, one sees that it is sufficient to only consider the case σ=1. Combining Stein's method with Lusin's theorem (see <cit.>) as in <cit.> yields that d_TV(F,N) ≤ sup_f : |f|≤ 1, |f'|≤ 2| 𝔼[Ff(F) - f'(F)]|, where the supremum runs over all mappings f: ℝ→ℝ of class C^1(ℝ) such that |f| and |f'| are bounded by 1 and 2, respectively. Similarly, <cit.> yields that W_1(F,N) ≤ sup_f : |f'|≤ 1| 𝔼[Ff(F) - f'(F)]|, where the supremum runs over all mappings f: ℝ→ℝ of class C^1(ℝ) such that |f'| is bounded by 1. Combining (<ref>) with the two estimates above and taking conditional expectations yields that d_TV(F,Z)≤ 2 𝔼[|𝔼(1-A | F) |], W_1(F,Z)≤𝔼[|𝔼(1-A | F)|] (recall that, in this part of the proof, σ^2 = 1 by assumption).
The key step (corresponding to a strategy already exploited in <cit.>) is now to observe that 𝔼[|𝔼(1-A | F) |] =𝔼[ sign(𝔼(1-A | F)) 𝔼(1-A | F)], so that, by using once again Lusin's theorem in the form of <cit.> one deduces that 𝔼[|𝔼(1-A | F) |]≤sup_g∈𝒞| 𝔼[g(F)(1-A)] |, where the supremum runs over the class 𝒞 of all continuous functions g : ℝ→ℝ that have compact support and are such that |g|≤ 1. Fix g∈𝒞. Since 𝔼(A) = 1, one has that 𝔼[g(F)(1-A)] = 𝔼[(g(F)-𝔼[g(Z)] )(1-A)]. To estimate the right-hand side of the previous equation, we use the classical fact that, according to e.g. to <cit.>, the differential equation g(x) - 𝔼[g(Z)] = f'(x) - xf(x), admits a unique bounded solution f_g ∈ C^1(ℝ ) such that |f'_g| ≤ 4. As a consequence, one has that 𝔼[g(F)(1-A)] = 𝔼[f'_g(F) (1-A) ] - 𝔼[F f_g(F) (1-A) ] = 𝔼[f'_g(F) (1-A) ] - 𝔼[f'_g(F) A (1-A)] = 𝔼[f'_g(F) (1-A)^2 ], where in the second equality we have used the fact that 𝔼[ Ff_g(F) | A] = A𝔼[ f'_g(F) | A], by (<ref>). This implies that |𝔼[g(F)(1-A)]| ≤ 4 Var(A), and the proof of Point (1) is complete. To deal with Point (2), we consider a generic σ^2>0 and observe that, according e.g. to <cit.>, 2 d_TV(F,Z) = sup_h : |h|≤ 1 | 𝔼[h(F)] -𝔼[h(Z)] |, where the supremum runs over all Borel measurable functions h whose absolute value is bounded by 1. By virtue of (<ref>) one has therefore that both 2 d_TV(F,Z) and W_1(F,Z) are bounded from below by the quantity | 𝔼(cos(F)) - 𝔼(cos(Z)) | = | 𝔼[ e^-A/2 - e^-σ^2/2 ] | Relation (<ref>) now follows by writing the Taylor expansion e^-A/2- e^-σ^2/2 = - e^-σ^2/2 (A/2 - σ^2/2) +e^-σ^2/2/2 (A/2 - σ^2/2)^2 - e^-σ^2/2/6(A/2 - σ^2/2)^3+ R_0, with |R_0| ≤1/24(A/2 - σ^2/2)^4, and taking expectations on both sides. If Z_1 ∼ N(0,σ^2_1) and Z_2 ∼ N(0,σ^2_2), then <cit.> implies that d_TV(Z_1,Z_2) ≤2/σ_1^2∨σ_2^2× |σ_1^2 - σ_2^2|. Also, choosing as a coupling T = σ_1 · Z and S = σ_2 · Z, with Z∼ N(0,1), one infers that W_1(Z_1,Z_2) ≤ | σ_1 - σ_2|. §.§ Multidimensional Stein's bounds When dealing with multidimensional normal approximations in the convex distance d_c, one has to deal separately with the case of singular and non-singular target covariance matrices. The next statement deals with the non-singular case; the proof can be deduced by reproducing the arguments leading to the proof of <cit.>, and is omitted for the sake of brevity. Let F = (F_1,...,F_M) be a centered random vector with square-integrable entries. Assume that there exists a M× M random matrix Σ = {Σ(i,j) : i,j=1,...,M} with square-integrable entries and such that, for all twice differentiable functions h : ℝ^M →ℝ that are 1-Lipschitz and such that sup_x∈ℝ^M Hess h(x)_HS≤ 1, one has the identity 𝔼[ ⟨ F , ∇ h(F) ⟩] = 𝔼[ ⟨Σ, Hess h(F)⟩_HS]. Then, Cov(F_i, F_j) = 𝔼[Σ(i,j)], i,j=1,...,M. Moreover, denoting by N = (N_1,...,N_M) a centered Gaussian vector with covariance B>0, the following estimate is in order: d_c(F,N) ≤ 402 {λ_min(B)^-3/2 +1 } M^41/24√(Σ - B^2_HS), where λ_min(B) is the smallest eigenvalue of B. In the parlance of <cit.>, any random matrix Σ verifying relation (<ref>) is a Stein's kernel associated with F. It is a well-known fact that, for m≥ 2, Stein's kernels are in general not unique (see e.g. the discussion contained in <cit.>). The second result of the section is new and allows one to deal with singular covariance matrices in some specific situations (that are relevant to the present paper). The proof uses ideas already exploited in <cit.>. Let F = (F_1,...,F_M) be a centered random vector with square-integrable entries. 
Assume that there exists an M× M positive definite symmetric random matrix Σ = {Σ(i,j) : i,j=1,...,M} with square-integrable entries and such that, conditionally on Σ, F has a centered Gaussian distribution with covariance Σ. Then, Cov(F_i, F_j) := C(i,j) = 𝔼[Σ(i,j)], i,j=1,...,M. Moreover, denoting by N = (N_1,...,N_M) a centered Gaussian vector with covariance C, the following estimate is in order: d_c(F,N) ≤ 402 {λ_+(C)^{-3/2} +1 } rk(C)^{41/24}√(∑_i,j=1^M Var(Σ(i,j))), where λ_+(C) is the smallest positive eigenvalue of C and we have written rk(C) for the rank of C. If C has full rank, then the result follows from Proposition <ref>. We can therefore assume that rk(C) = k<M. Without loss of generality, we may also assume that C = U^T D U, where U is an orthogonal matrix, and D is a diagonal matrix whose diagonal entries d_i are such that d_i>0 if i≤ k and d_i=0 otherwise. Following a strategy put forward in <cit.>, we now introduce an auxiliary random vector Z = (Z_1,...,Z_M) defined as Z := U F. A direct computation shows the following facts: (i) conditionally on Σ, the vector Z is centered and Gaussian with covariance Σ_0 := UΣ U^T; (ii) as a consequence, Z is centered with covariance given by the diagonal matrix D = U C U^T, which yields Z_i = 0, a.s.-ℙ, for all i>k, and Σ_0(i,ℓ) = 0, a.s.-ℙ, whenever max(i,ℓ)>k; (iii) the vector UN is centered and Gaussian, with covariance matrix given by D. To conclude the proof, we observe that d_c(F , N) =d_c(Z, U N) ≤ d_c(Z(k) , UN(k)), where Z(k) and UN(k) denote, respectively, the first k entries of Z and UN. Applying Proposition <ref> to the right-hand side of the previous inequality yields the desired conclusion, by virtue of the relation ‖Σ - C‖_HS = ‖U Σ U^T - U C U^T‖_HS = ‖Σ_0 - D‖_HS, where the first equality follows from the unitary invariance of the Hilbert-Schmidt norm. Let N_1 and N_2 be M-dimensional centered Gaussian vectors with covariances C_1 and C_2, respectively. Then, choosing the pairing T = √(C_1) N and S = √(C_2) N, where N is a standard Gaussian vector, one has the following estimate: W_2(N_1, N_2) ≤ ‖√(C_1) - √(C_2)‖_HS. See e.g. <cit.> for optimal bounds. §.§ Comparison of Hilbert-space valued Gaussian random elements Let H be a separable real Hilbert space, and endow H with the Borel σ-field associated with the norm ‖·‖_H. We consider two centered, Gaussian H-valued random elements X_1,X_2, and denote by S_1 and S_2 their covariance operators. We recall that S_i is the unique symmetric, positive and trace-class linear operator S_i : H→ H such that, for all g∈ H, ⟨ X_i, g⟩_H is a centered Gaussian random variable with variance ⟨ S_i g, g⟩_H≥ 0 (see e.g. <cit.>). The following classical bound allows one to compare the distributions of X_1 and X_2 in the sense of the 2-Wasserstein distance. It is a direct consequence of Gelbrich <cit.>; see also <cit.> for a modern discussion of Gelbrich's results. Let the above assumptions and notations prevail. Then, W_2(X_1,X_2)≤ ‖√(S_1) - √(S_2)‖_HS. In order to deal with the norm ‖√(S_1) - √(S_2)‖_HS (which is typically not directly amenable to analysis), we will use a new variation of the classical Powers–Størmer inequality from <cit.>. See also <cit.> for some related bounds. Under the assumptions of the present section, denote by λ_1≥λ_2≥⋯≥λ_k ≥⋯≥ 0 the eigenvalues of the operator S_1. Then, for all p∈ (0,1) one has that ‖√(S_1)-√(S_2)‖_HS≤ 2^{(3-p)/(2-p)} (∑_k=1^∞ (λ_k)^p)^{1/(4-2p)} ‖S_1-S_2‖_HS^{(1-p)/(2-p)}. We can assume that ∑_k=1^∞ (λ_k)^p<∞, because otherwise the inequality is trivial.
For all h∈ H, the action of the operators S_1 and √(S_1) on h can be written, respectively, as S_1 h = ∑_i λ_i ⟨ e_i, h⟩_H e_i and √(S_1) h = ∑_i √(λ_i)⟨ e_i, h⟩_H e_i, for some orthonormal basis {e_i} of H. For every λ>0, we now write (Π_1,λS_1) h := ∑_i : λ_i≥λλ_i ⟨ e_i, h⟩_H e_i and we define analgously (Π_1,λ√(S_1)) h; finally, we set Π_1,<λ S_1 := S_1 - Π_1,λS_1 and Π_1,<λ√(S_1) := √(S_1) - Π_1,λ√(S_1). We have √(S_1)-√(S_2)_HS ≤ Π_1,λ√(S_1) - √(S_2)_HS + Π_1,<λ√(S_1)_HS = √(Π_1,λS_1)-√(S_2)_HS + Π_1,<λ√(S_1)_HS ≤ 1/√(λ)Π_1,λS_1-S_2_HS + Π_1,<λ√(S_1)_HS (*) ≤ 1/√(λ)S_1-S_2_HS + 1/√(λ)Π_1,<λS_1_HS+ Π_1,<λ√(S_1)_HS ≤ 1/√(λ)S_1-S_2_HS + 2 Π_1,<λ√(S_1)_HS, where the inequality (*) follows from <cit.>, since Π_1,λS_1≥λ Id.. The bound Π_1,<λ√(S_1)_HS = √(∑_λ_k<λλ_k)≤λ^1-p/2√(∑_k=1^∞ (λ_k)^p), now leads to √(S_1)-√(S_2)_HS≤1/√(λ)S_1-S_2_HS + 2λ^1-p/2√(∑_k=1^∞ (λ_k)^p). Choosing λ =( S_1-S_2_HS/ 2 √(∑_k=1^∞ (λ_k)^p))^1/1-p/2 yields the desired conclusion. We also record the following bound from <cit.>: for the sake of completeness, we provide here a direct proof neither appealing to the notion of abstract Wiener space nor assuming that X_1 and X_2 are non-degenerate (as in <cit.>). Let the assumptions and notation of the present section prevail. Then, d_2(X_1,X_2)≤1/2 S_1-S_2_HS. Let h∈ C^2_b(H) be such that sup_x∈ H∇^2h(x)_H^⊗ 2≤ 1. Without loss of generality, let us assume that X_1 and X_2 are independent, and let us set U_t=√(t)X_1+√(1-t)X_2 for t∈[0,1]. We have 𝔼[h(X_1)] - 𝔼[h(X_2)] = ∫_0^1 d/dt𝔼[h(U_t)]dt = ∫_0^1 ( 1/2√(t)𝔼[⟨∇ f(U_t),X_1⟩_H] - 1/2√(1-t)𝔼[⟨∇ f(U_t),X_2⟩_H])dt = 1/2∫_0^1 𝔼[⟨∇^2 f(U_t),S_1-S_2⟩_HS]. Therefore |𝔼[h(X_1)] - 𝔼[h(X_2)]| ≤ 1/2sup_x∈ H∇^2 f(x)_HS S_1-S_2_HS, and the desired conclusion follows. We will use Propositions <ref> and <ref> in combination with (<ref>), in order to compare the distributions of H-valued random elements Z,Y such that Y is Gaussian and Z is conditionally Gaussian. To simplify the discussion, the corresponding statement is provided below in the special case in which H is a subspace of a L^2 space. Let (T, 𝒯 , ν) be a measure space such that (T,𝒯) is Polish and ν is a finite positive Borel measure. Write L^2(ν):= L^2(T, 𝒯 , ν), consider a closed subspace H_1⊂ L^2(ν), and select two H_1-valued random elements Z,Y with the following properties: – Y = {Y(x) : x∈ T} is a centered Gaussian field with covariance K(x,y) = 𝔼[Y(x)Y(y)] such that ∫_T∫_T K(x,y)^2ν(dx)ν(dy)<∞; – there exists a symmetric positive definite random field Σ = {Σ(x,y) : x,y∈ T} such that 𝔼[∫_T∫_T Σ(x,y)^2 ν(dx)ν(dy)]<∞ and, conditionally on Σ, Z = {Z(x) : x∈ T} is a centered Gaussian field with covariance Σ; – K is an element of H_1⊗ H_1; – with probability one, Σ is an element of H_1⊗ H_1. Then, the following estimates hold. (1) One has that d_2(Z,Y)≤1/2√(𝔼[∫_T∫_T(K(x,y) - Σ(x,y))^2 ν(dx)ν(dy) ]) (2) Let λ_1≥λ_2≥⋯≥ 0 denote the eigenvalues of the covariance K, that we identify with the integral operator H_1→ H_1 h↦∫_T K(·, y)h(y)ν(dy). Then, for all p∈ (0,1), W_2(Z,Y)≤ 2^3-p/2-p (∑_k=1^∞ (λ_k)^p)^1/4-2p{𝔼[∫_T∫_T(K(x,y) - Σ(x,y))^2 ν(dx)ν(dy) ]}^1/21-p/2-p. At Point (1) and Point (2), the distances d_2 and W_2 are defined with respect to the Hilbert space H_1. To prove Point (1), we can assume without loss of generality that X, Y, Σ are defined on the same probability space, and that (Z,Σ) and Y are stochastically independent. 
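Both the Gelbrich bound and the Powers–Størmer-type bound above can be sanity-checked numerically in finite dimensions, where covariance operators are simply positive semidefinite matrices and the 2-Wasserstein distance between centered Gaussians has the closed Bures form W_2^2 = tr S_1 + tr S_2 − 2 tr (√S_1 S_2 √S_1)^{1/2}. The sketch below is only a finite-dimensional illustration: the matrix sizes, the perturbation and the exponent p are arbitrary choices made for the example.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

def random_psd(d):
    A = rng.normal(size=(d, d))
    return A @ A.T / d                     # a generic positive semidefinite matrix

d = 8
S1 = random_psd(d)
S2 = S1 + 0.05 * random_psd(d)             # a small perturbation of S1

sq1, sq2 = sqrtm(S1).real, sqrtm(S2).real
hs_sqrt = np.linalg.norm(sq1 - sq2, "fro")          # ||sqrt(S1) - sqrt(S2)||_HS
hs_diff = np.linalg.norm(S1 - S2, "fro")            # ||S1 - S2||_HS

# exact 2-Wasserstein distance between N(0,S1) and N(0,S2) (Bures formula)
cross = sqrtm(sq1 @ S2 @ sq1).real
W2 = np.sqrt(max(np.trace(S1) + np.trace(S2) - 2 * np.trace(cross), 0.0))

lam = np.clip(np.linalg.eigvalsh(S1), 0.0, None)    # eigenvalues of S1
p = 0.5
rhs = 2 ** ((3 - p) / (2 - p)) * np.sum(lam ** p) ** (1 / (4 - 2 * p)) \
      * hs_diff ** ((1 - p) / (2 - p))

print(f"W_2(N(0,S1), N(0,S2))          = {W2:.4f}")
print(f"||sqrt(S1) - sqrt(S2)||_HS     = {hs_sqrt:.4f}   (Gelbrich-type bound)")
print(f"Powers-Stormer-type right side = {rhs:.4f}   (p = {p})")
assert W2 <= hs_sqrt + 1e-8 and hs_sqrt <= rhs + 1e-8
```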
Now, for every h∈ C^2_b(H) such that h_C^2_b(H)≤ 1 one has that | 𝔼[h(Z)] -𝔼[ h(Y)]| ≤𝔼[| 𝔼[h(Z) | Σ] -𝔼[ h(Y) | Σ] | ], and the result follows by applying Proposition <ref> in the case S_1 = Σ and S_2 = K. The proof of Point (2) follows by applying (<ref>) to the case q=2 and U=Σ, and then by applying Proposition <ref> to the case S_1 = K and S_2 = Σ. § PROOF OF THE MAIN RESULTS §.§ Proof of Theorem <ref> Fix J and x_α as in the statement. Then, conditionally on ℱ^(L), the random variable V^J_α z_i;α^(L+1) is centered and Gaussian, with variance V_α^J V_β^JΣ^(L)_αβ |_x_α = x_β := A. Writing d for either d_TV or W_1 and denoting by Y a centered Gaussian random variable with variance 𝔼(A), we infer that d(V^J_α z_i;α^(L+1), Z) ≤ d(V^J_α z_i;α^(L+1), Y) + d(Y,Z):= P + Q, and the conclusion of Point (1) is obtained by bounding P and Q by means of (<ref>)–(<ref>) and (<ref>)–(<ref>), respectively, and then by applying (<ref>) in the case J_1=J_2 = J, ℓ = L and α_1 = α_2 = α. Point (2) in the statement follows from (<ref>) in the case A = Σ^(L)_αα and σ^2 = 𝔼(Σ^(L)_αα), that one should combine with (<ref>), and the fact that, in this specific configuration and by virtue of (<ref>), |R+ 𝔼[(A-σ^2)^3] |≤ Q n^-2, for some constant Q independent of n. We observe that, in order to deduce (<ref>), we used the two elementary identities: 𝔼[(A-σ^2)^3] = κ_3(A), and 𝔼[(A-σ^2)^4] = κ_4(A) + 3 κ_2(A)^2. Taking ℓ = L and applying the estimates (<ref>) and (<ref>) from Theorem <ref> yields (<ref>) and (<ref>), completing the proof of Theorem <ref>.□ §.§ Proof of Theorem <ref> Write M_0 := M· n_L+1. We start by observing that, conditionally on ℱ^(L), the M_0-dimensional random vector F:= V_α_ℓ^J_ℓ z_i;α_ℓ_1≤ i ≤ n_L+1 (J_ℓ,α_ℓ) ∈ B is Gaussian and centered, with covariance Σ(i,(J_ℓ, α_ℓ) ; j,(J_k, α_k) ) := δ_ij V_α_ℓ^J_ℓV_α_k^J_kΣ^(L)_α_ℓα_k, where we used the convention (<ref>) to deal with the case α_k = α_ℓ. Gaussian integration by parts yields, in particular, that, for all twice differentiable functions h : ℝ^M_0→ℝ that are 1-Lipschitz and such that sup_x∈ℝ^M_0 Hess h(x)_HS≤ 1, one has the identity 𝔼[⟨∇ h(F), F⟩_ℝ^M_0] = 𝔼[𝔼[⟨∇ h(F), F⟩_ℝ^M_0 | ℱ^(L)]] = 𝔼[⟨Σ , Hess h(F)⟩_HS]. Now suppose that the assumptions of Point (1) in the statement are in order. One can apply Proposition <ref> in the case M=M_0 and N = G to deduce that the quantity d_c(F,G) is bounded by a multiple of √(B_n), where B_n is defined in (<ref>) with μ(dJ, dx) equal to the counting measure on B, and the conclusion follows from (<ref>). Similarly, under the assumptions of Point (2) in the statement, one can exploit Proposition <ref> in the case M=M_0 and N = G' to deduce that the quantity d_c(F, G') is bounded by a multiple of √(A_n), where A_n is defined in (<ref>) with μ(dJ, dx) equal to the counting measure on B, and (<ref>) yields once again the desired conclusion. This remark will contain: (i) a discussion of `Stein's kernels', (ii) a proper acknowledgment of Basteri-Trevisan; (iii) a discussion of isoperimetric constants. For example, The estimate W_1(z^(L+1)(m) , N_0)≤ W_2(z^(L+1)(m) , N_0)≤λ_+(τ^2)^-1/2·γ', is proved in <cit.>. The arguments in the proof use ideas contained in <cit.> in combination with techniques preseted in <cit.> and <cit.>. need to adjust notation in the proof below for general F, N, C. We seek to show that <cit.> d_c(z^(L+1)(m) , N_0) ≤ 400·{λ_+(τ^2)^- 3/2 +1}· rank(τ^2)^41/24·γ' where (γ' )^2 :=𝔼Σ^(L) - τ^2^2_HS = ∑_k,ℓ=1^m Var(Σ_α_kα_ℓ^(L) ). 
We first establish Proposition <ref> in the case when rank(τ^2) = m and then explain how to deduce the general case. TO DO Finally, we consider the case rank(τ^2) = k<m. Without loss of generality, we may assume that τ^2 = U^T D U, where U is an orthogonal matrix, and D is a diagonal matrix whose diagonal entries d_i are such that d_i>0 if i≤ k and d_i=0 otherwise. Following a strategy put forward in <cit.>, we now introduce an auxiliary random vector Z = (Z_1,...,Z_m) defined as Z := U z^(L+1)(m). A direct computation shows the following facts: (i) writing Σ^(L) = {Σ^(L)_α_jα_ℓ : j,ℓ=1,...,m}, one has that, conditionally on 𝒰, the vector Z is centered and Gaussian, with covariance Σ_0 := UΣ^(L) U^T (ii) as a consequence, Z is centered with covariance given by the diagonal matrix D = Uτ^2 U^T, which yields Z_i = 0, a.s.-ℙ, for all i>k, and Σ_0(i,ℓ) = 0, a.s.-ℙ, whenever max(i,ℓ)>k; (iii) the vector UN_0 is centered and Gaussian, with covariance matrix given by D. To conclude the proof, we observe that d_c(z^(L+1)(m) , N_0) =d_c(Z, U N_0) ≤ d_c(Z(k) , U N_0 (k)), where Z(k) and U N_0 (k) denote, respectively, the first k entries of Z and UN_0. Applying Proposition <ref>-(b) to the right-hand side of the previous inequality yields the desired conclusion, by virtue of the relation Σ^(L) - τ^2_HS = U^T Σ^(L) U - U^T τ^2 U_HS = Σ_0 - D _HS, where the first equality follows from the unitary invariance of the Hilbert-Schmidt norm. We are now in a position to complete the proof of Theorem <ref>. For this, let us recall the notation. Namely, we fix c_1>c_2>0, L,n_0, n_L+1≥ 1,σ: as well as C_b≥ 0, C_W>0. We then suppose that x_α∈^n_0↦ z_α^(L+1)∈^n_L+1 is a random neural network with input dimension n_0, output dimension n_L+1, hidden layer widths n_1,…, n_L, and non-linearity σ as in Definition <ref> with c_2 n≤ n_1,…, n_L≤ c_1 n. for some n≥ 1. Finally, fix m≥ 1 and a finite collection of network inputs x_α∈^n_0, α∈ A=α_1,…, α_m and let Z=Z_i;α_1≤ i ≤ n_L+1 α∈ A∈^m× n_L+1 be a centered Gaussian random matrix with CovZ_i;α,Z_j;β = δ_ijK_αβ^(L+1). Write z_A^(L+1):=z_i;α_j, i=1,…, n_L+1, j=1,…, m. By Lemma <ref> and Gaussian integration by parts we have, for any h∈𝒥 that z_A^(L+1)∇ h(z_A^(L+1)) =z_A^(L+1)∇ h(z_A^(L+1)) | ℱ^(L) =Σ_A^(L)Hess h(z_A^(L+1)), where Σ_A = (Σ_A)_(i_1,j_1),(i_2,j_2)_i_1,i_2 = 1,…, n_L+1, j_1,j_2=1,…, m∈^n_L+1× m with (Σ_A)_(i_1,j_1),(i_2,j_2)=Covz_i_1;α_j_1^(L+1),z_i_2;α_j_2^(L+1) = δ_i_1i_2Σ_α_j_1α_j_2^(L). The estimate (<ref>) from Theorem <ref> yields (<ref>), completing the proof of Theorem <ref>. □ §.§ Proof of Theorem <ref> The statement follows from Proposition <ref>, as applied to the following configuration – T = ℳ_q×[n_L+1]×𝕌 and ν = ν_0⊗ν_1 ⊗ dx, where ν_0 and ν_1 are counting measures; – Y = Γ^(L+1)_𝕌, regarded as a random element with values in H_1=𝕎^q;2(𝕌)⊂ L^2(ν) ; – Z = z^(L+1)_𝕌, regarded as a random element with values in H_1=𝕎^q;2(𝕌)⊂ L^2(ν) ; – for (J_1, i_1, x_α_1), (J_2, i_2, x_α_2)∈ T, Σ((J_1,i_1,x_α_1) ; (J_2,i_2,x_α_2)) = δ_i_1 i_2 D^J_1_α_1D^J_2_α_2Σ^(L)_α_1α_2, where the convention (<ref>) has been implicitly applied. Proposition <ref> implies therefore that, under the assumptions of Theorem <ref>-(1), the quantity d_2(z^(L+1)_𝕌 , Γ^(L+1)_𝕌) is bounded by a multiple of √(B_n), where B_n is defined according to (<ref>) in the case μ = ν_0⊗ dx, so that the conclusion follows from (<ref>). 
Analogously, under the assumptions of Theorem <ref>-(2), Proposition <ref> yields that the quantity W_2;q(z^(L+1)_𝕌 , Γ^(L+1)_𝕌) is bounded by a multiple of B_n^{(1-p)/(2(2-p))}, and (<ref>) yields once again the desired conclusion. The last statement in the theorem follows by an analogous route. §.§ Proof of Theorem <ref> Fix 𝕌 and k≥ 1 as in the statement, and define r := k+1+⌊n_0/2⌋. In view of <cit.>, it is sufficient to prove formula (<ref>). To accomplish this task, we will exploit relation (<ref>) in the following setting: X = z_𝕌^(L+1), Y =Γ^(L+1)_𝕌 and V = Σ^(L)= {Σ^(L)_αβ: x_α, x_β∈𝕌̅}, as defined in (<ref>). We regard z_𝕌^(L+1) and Γ^(L+1)_𝕌 as random elements with values in C^k(𝕌̅), such that ℙ(z_𝕌^(L+1)∈ C^∞(𝕌̅) )= ℙ(Γ_𝕌^(L+1)∈ C^∞(𝕌̅)) = 1. Similarly, we regard Σ^(L) as a random element with values in the space C^k,k(𝕌̅×𝕌̅) such that ℙ_Σ^(L)(C^∞,∞(𝕌̅×𝕌̅) )=1, where ℙ_Σ^(L) is shorthand for the law of Σ^(L). By construction, there exists a version of the conditional probability ℚ_S := ℙ_z_𝕌^(L+1) | Σ^(L) = S such that, for ℙ_Σ^(L)-almost every S, one has that S∈ C^∞,∞(𝕌̅×𝕌̅) and, under ℚ_S, the random element z_𝕌^(L+1) is a centered Gaussian random field with n_L+1 independent components with common covariance S; when these two requirements are met, one has that ℚ_S(C^∞(𝕌̅)) = 1. The following statement gathers together the main results one can deduce from the construction of coupled smooth Gaussian fields detailed in <cit.>. Let the above notation and assumptions prevail, and let S be a symmetric and positive definite element of C^∞,∞(𝕌̅×𝕌̅). Let K be the operator defined in (<ref>) for r = k+1+⌊n_0/2⌋, and let K_S be the operator obtained from (<ref>) by replacing the kernel K^(L+1) with S. Then, there exists a probability space (Ω_1, ℱ_1, ℙ_1) supporting two random elements E,F, with values in C^k(𝕌̅) and such that: (a) E has the law of a centered Gaussian field on 𝕌̅ with n_L+1 independent components having common covariance S; (b) F has the same law as Γ^(L+1)_𝕌; (c) the following estimate is in order: 𝔼_1[ ‖E-F‖^2_𝕎^r,2(𝕌)] = ‖√(K) - √(K_S)‖^2_HS. For E,F as in Lemma <ref>, one has that ℙ_1(E,F∈ C^∞(𝕌̅))=1, and one can apply (<ref>) to deduce that, for some constant A depending only on 𝕌, one has the bound ‖E-F‖_C^k(𝕌̅)≤ A· ‖E-F‖_𝕎^r,2(𝕌), a.s.-ℙ_1. Since, by virtue of Proposition <ref>, one has that, for all p∈ (0,1), ‖√(K) - √(K_S)‖_HS≤ c· ‖K - K_S‖_HS^{(1-p)/(2-p)} for some finite constant c uniquely depending on p and on the deterministic operator K, we deduce from (<ref>) that W_∞; k(z^(L+1)_𝕌,Γ^(L+1)_𝕌) is bounded by a multiple of B_n^{(1-p)/(2(2-p))}, where B_n is defined according to (<ref>) in the case μ = ν_0⊗ dx. The conclusion now follows from relation (<ref>).
http://arxiv.org/abs/2307.07199v1
20230714071920
Ed-Fed: A generic federated learning framework with resource-aware client selection for edge devices
[ "Zitha Sasindran", "Harsha Yelchuri", "T. V. Prabhakar" ]
cs.DC
[ "cs.DC", "cs.LG" ]
Ed-Fed: A generic federated learning framework with resource-aware client selection for edge devices Zitha Sasindran, Harsha Yelchuri, T. V. Prabhakar Department of Electronic Systems Engineering Indian Institute of Science Bengaluru, India, 560012. Email: {zithas, harshay, tvprabs}@iisc.ac.in August 12, 2023 ======================================================================================================================================================================================================== Federated learning (FL) has evolved as a prominent method for edge devices to cooperatively create a unified prediction model while securing their sensitive training data local to the device. Despite the existence of numerous research frameworks for simulating FL algorithms, they do not facilitate comprehensive deployment for automatic speech recognition tasks on heterogeneous edge devices. This is where Ed-Fed, a comprehensive and generic FL framework, comes in as a foundation for future practical FL system research. We also propose a novel resource-aware client selection algorithm to optimise the waiting time in the FL settings. We show that our approach can handle the straggler devices and dynamically set the training time for the selected devices in a round. Our evaluation has shown that the proposed approach significantly optimises waiting time in FL compared to conventional random client selection methods. § INTRODUCTION With the advances in hardware and software technologies, edge devices are becoming increasingly powerful and intelligent. This enables the researchers to bring machine intelligence from cloud-based data centres to edge devices such as mobile phones. For instance, Google Pixel's “Now Playing" option which allows us to recognise any song without internet and “recorder" app with on-device speaker diarization show how powerful the mobile phones are becoming. Federated learning (FL) <cit.> introduces an additional dimension in the machine learning community in which models are collaboratively trained on edge devices, with the data remaining local to the device, in contrast to the centralised training approach, and hence ensures the data privacy of the users. A general FL setting consists of a set of clients (edge devices) and a server. The server randomly chooses a subset of the devices from the available ones that want to take part in the FL round. A copy of the global model, which the server maintains, is first sent to this subset of clients. These clients use their data in the device to train models locally, and then send the updated weights back to the server. In conventional FL, the server cannot move on to the next step until it has received updates from every client. As a result, FL performance is constrained by the variability in waiting time experienced by each client due to model training on the device or communication delay during the transfer of model weights to the server. These slowdowns caused by the clients with weaker network connections or limited computation capabilities is known as “straggler effect". Once the server gets the updates from all the selected clients, the weights are aggregated using a strategy. This process is repeated for several rounds until the global model achieves the desired accuracy. The random client selection approach for FL works well when there are no straggler devices. 
Prioritising for resource rich devices every time will result in the inability of straggler devices to participate in the FL process, and can lead to a loss of generalisation in the global model and fairness in the learning process. To address this challenge, a more sophisticated client selection algorithm is needed that considers the presence of stragglers while minimising waiting time. Hence the algorithm should aim to balance the participation of straggler devices with less waiting time and fairness in the FL process. Current FL frameworks face several challenges when it comes to deployment on edge devices such as mobile phones. These devices often have limited memory and computational resources, which makes it difficult to store and execute complex models. To overcome the existing challenges and make FL a practical and scalable solution for edge devices, it is important to develop FL frameworks that are designed specifically for these devices, taking into account their computational, privacy, and power constraints. We provide a methodology for training the models on the device, along with an efficient client selection algorithm to handle straggler devices and FL functionalities, allowing them to be deployed on edge devices for FL settings. We also monitor the client devices' resources to ensure that they continue to function seamlessly even in dynamic environments. Since this work is related to client selection, we do not consider asynchronous federated learning approaches. In this paper, we focus on the use case of automatic speech recognition (ASR), demonstrating how our FL framework can be used to improve the accuracy and robustness of speech recognition models. We summarise our contributions as follows: * We present Ed-Fed, an end-to-end FL framework for edge devices with a resource-aware time-optimised client selection algorithm. * We provide a complete methodology for training entire or fine tuned models on mobile phones with support for FL related functionalities. * We formulate a client selection algorithm by considering the computation, storage, power, and phone-specific capabilities of the client devices with ability to handle stragglers, optimise the waiting time and adaptively assign the training time for devices based on the these information. * We demonstrate the implementation with deployments on multiple mobile phones to quantify the waiting time in client selection. * We present an extensive evaluation of our framework on both simulations and mobile phone settings using a custom created audio corpus with a heterogeneous set of speakers. This paper is organised as follows. We provide a brief literature survey in Section II. Then, we discuss briefly about our Ed-Fed framework for FL settings in Section III. In Section IV, we discuss our proposed resource-aware client selection algorithm used in the server, followed by the framework evaluation in Section V and results in Section VI. Finally, concluding remarks are presented in Section VII. § RELATED WORKS FL Framework Recently, there has been a lot of interest in training ASR models in FL settings <cit.>. Implementing ASR for federated settings in mobile phones is a challenge in itself due to the complex model architectures and dynamic nature of the speech signals, as well as the limited resources available in mobile devices. 
Furthermore, due to factors such as noisy backgrounds, multiple accented speech, and different voice characteristics such as gender, pitch, phonation, loudness and tempo, the local speech data available on devices are non-IID in nature. This further complicates the training of ASR models. Several FL frameworks, including TensorFlow-Federated (TFF) <cit.>, and LEAF <cit.>, support only the simulation of FL systems and do not propose an edge device deployment. Flower framework <cit.>, on the other hand, supports extending FL settings to edge devices. However, we can see that the ASR task with the flower framework is also limited to system-level simulations <cit.>. Moreover, they provide only transfer learning techniques <cit.> but not entire retraining of the model from scratch. To the best of our knowledge, no existing framework has provided a comprehensive implementation of ASR on-device training, as well as FL settings for mobile phones. Client selection We focus mainly on bandit based client selection techniques in FL. In <cit.>, the authors formulated the client selection as a traditional multi-arm bandit based problem to select clients with better quality of data to improve the FL accuracy. The authors proposed upper confidence bound based client selection in <cit.> with the goal of reducing the overall time consumption of FL training including transmission time and local computation time. In <cit.>, the authors intelligently select clients by exploiting the data correlations among clients to improve FL learning performance. However, none of these studies used actual computation resource information from the devices to explain training latency. Meanwhile, <cit.> took a similar approach to ours, taking into account resource information and grouping clients accordingly. Even though the clients with similar resources are selected together, the need for adaptive setting training epochs is needed to control the stragglers. Furthermore, they did not take the battery information into account, which is critical. § ED-FED FRAMEWORK This section provides a detailed overview of our Ed-Fed framework. We first discuss the methodology for facilitating on-device training and weight updation of models in the clients, followed by a brief overview on the communication protocol and the server-side algorithms used. §.§ Client It is critical to develop a better methodology for converting a larger global model to a memory efficient and optimised version suitable for edge devices. We use TensorFlow <cit.> to create a well-optimised Flatbuffer format based conversion, which allows for significant model size reduction. We use the signature functions in Tensorflow to interface with the optimised model and perform operations such as training, inference, loading and saving the checkpoint weights, as described in <cit.>. Furthermore, we build extra signature functions specifically for FL configuration to aid in efficient communication between clients and server. Figure <ref> depicts the optimised model with eight signature functions that allow us to successfully train the model on the device, and perform FL related functions in the mobile devices. Along with the existing signature functions such as train, evaluate, save, load, and calculate_loss, we build three new signature functions: * Get_1D_weights: reshapes an N-dimensional weight tensor from each node in the model graph to a 1-D array and returns a list of 1-D arrays. * Get_nodenames_shapes: returns all the node names as well as the actual tensor shapes. 
* Set_weights: reshape the aggregated 1-D weight array to N-D tensor and reloads it into the model. One key functionality of these signatures is packing and unpacking the weights into 1-D and N-D arrays respectively. In <cit.>, the authors explained that important sensitive data can be decoded if an intruder gets access to weights during communication. But if the weights are packed into a 1-D array, private information like shape of the weight matrices of each layer which can give useful insights about the model architecture and data patterns are hidden. These three main signatures, Get_1D_weights, Get_nodenames_shapes and Set_weights not only facilitate in implementing Ed-Fed framework but also induce privacy to the framework. Due to the limited resources available in the edge devices, we prevent repeated storing of checkpoints after each epoch of training. Instead, we update a single checkpoint throughout the training. Hence, the optimised model (tflite) generated with the signature functions has all the functionalities which include support for training the models efficiently on the device as well as FL-based weight updation. The optimised model is utilised in our android application and our simulation systems, hence facilitating end-to-end real-time FL. §.§ Communication protocol We use the gRPC <cit.> communication protocol to ensure efficient communication between the server and clients. gRPC, like many RPC systems, is built on the concept of establishing a service, which describes the methods that can be called remotely with their input and return types. Protocol buffers are the most common interface definition language (IDL) used by gRPC to describe both the service interface and the message structure of the payloads. In our framework, three RPCs were used. “CommunicatedText" is the first, which is used to send the current context of the client and server. “GetGlobalWeights" is the second, which is used to send the current global FL weights to the clients who have been chosen for training. The final one is “GetFLWeights," which is used to share the aggregated weights between the server and client. §.§ Server On the server side, there are two major components: the Client selection with fairness and Aggregation strategy. Client selection involves selecting k clients out of N available clients. In case of waiting time optimised client selection algorithms, the clients should be selected in such a way that the overall waiting time of the clients is minimised, while ensuring fairness in the selection of clients. More about the client selection algorithms is discussed in further sections. The weight aggregation is an integral part of FL. The server strategy algorithms aggregates the weights (w_t^i) obtained from the selected k clients by choosing algorithms such as FedAvg <cit.>, FedProx <cit.>, etc., and the updated weights are sent back to the clients. Typically, in real-world settings the client's local data will not be representative of the global data distribution because of the noisy background or the different voice characteristics. Hence, simply aggregating weights from such clients with low quality data will lead to a deviation from the global model. An improved solution is to generate a weighted aggregation of the models, with a weighting coefficient reflecting the quality of the model <cit.>. A client with a higher word error rate (WER) denotes poor performance of the global model. In such cases, we assign a lower weighting coefficient to that model during the aggregation. 
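As a concrete illustration of this server-side step, the following sketch performs a weighted, FedAvg-style aggregation directly on flattened 1-D weight arrays, mirroring the packing and unpacking done by the Get_1D_weights and Set_weights signatures described above. It is a minimal stand-in rather than the Ed-Fed implementation: the function names, the toy model and the coefficients are invented for the example, and the WER-based choice of the coefficients α_i is made precise in the text that follows.

```python
import numpy as np

def pack(weights):
    """Flatten a list of N-D weight tensors into one 1-D array plus shapes."""
    shapes = [w.shape for w in weights]
    flat = np.concatenate([w.ravel() for w in weights])
    return flat, shapes

def unpack(flat, shapes):
    """Inverse of pack(): restore the original list of N-D tensors."""
    out, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(flat[i:i + n].reshape(s))
        i += n
    return out

def aggregate(client_flat_weights, coeffs):
    """Server-side weighted aggregation: w <- sum_i alpha_i * w_i."""
    coeffs = np.asarray(coeffs, dtype=float)
    coeffs = coeffs / coeffs.sum()                 # ensure the coefficients sum to 1
    return np.sum([a * w for a, w in zip(coeffs, client_flat_weights)], axis=0)

# toy example: two clients, a two-layer model
model = [np.ones((3, 2)), np.zeros(2)]
flat, shapes = pack(model)
client_updates = [flat + 0.1, flat - 0.2]          # pretend local updates
new_model = unpack(aggregate(client_updates, coeffs=[0.6, 0.4]), shapes)
```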
We use a weighted WER-based strategy in our Ed-Fed setting, denoted as follows: w_{t+1} = ∑_{i=1}^{k} α_i w^i_{t+1}, with α_i = exp( 1- WER_i)/∑_{j=1}^{k} exp( 1 - WER_j), where the weights α_i are calculated using the softmax distribution obtained from the WER values of the participating clients. Once clients are selected for training, the server notifies them to start training. The clients begin their training, and once they complete it, they send their updated weights to the server. The server then aggregates them through a weight aggregation strategy. The aggregation strategy accounts for the uncertainties in the quality and quantity of data by using the WER-based weighting obtained from each client during the training process. § RESOURCE-AWARE TIME-OPTIMISED CLIENT SELECTION §.§ Need for waiting time optimisation Waiting time is the amount of time a client waits for the server to fulfil its request. This is mainly observed when the clients send their updated weights and request the aggregated weights to be returned by the server. Since client platforms have different training times, the server cannot compute the aggregated weights until responses are obtained from all clients, leading to higher waiting times for clients with faster computing capabilities. To quantify the waiting time, we considered two common scenarios: (1) selecting one fast client and one slow client, and (2) selecting one client with insufficient battery life that is made to run a larger number of epochs. Figure <ref> shows that client 1, which has higher compute capabilities, waits for a significant amount of time for the slower client 2 to finish its training process. Figure <ref> presents the results of the experiment for Scenario 2. We can see that client 1 switches off in the middle of training, making client 2 wait indefinitely. Considering all these problems, we created a resource-aware, time-optimised client selection algorithm which takes the resources of the clients into consideration and adaptively assigns work to each client depending on its resources. Our approach has three important phases: resource information extraction, neural reward generation, and a resource-aware time-optimised client selection algorithm. §.§ Resource information We aim to investigate the relationship between resource availability and the fluctuations in the training time. We also aim to assess the impact of training on battery drain, which is crucial in real-world scenarios to prevent device shutdown during extended usage. The different resources which we considered for estimating the training time and battery drain of a particular phone are (1) memory-related information, (2) power-related information, (3) CPU-usage-related information (CI), and (4) phone-specific information (PI). Memory-related information is captured using (a) total RAM (TR) and (b) available RAM (AR). In the case of power-related information, (a) available battery charge (AC) and (b) battery status (BS) are used. For CI, the average usage of the CPU across multiple cores is used. In order to find the PI, we used the Antutu score. The breakup of these resources into individual components is important because each of these can individually affect the training time. All these resources together form the context information of the client device, i.e., c=[TR,AR,AC,BS,CI,PI]. Obtaining context information from the phone before each FL round is important as the time taken to complete the training process depends on it.
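For illustration, the sketch below shows one way a server could map such context vectors c=[TR, AR, AC, BS, CI, PI] to predictions of per-batch training time and battery drop. A closed-form ridge regression is used purely as a lightweight stand-in for the neural reward generator introduced in the next subsection; the feature ranges, synthetic labels and regularisation are invented for the example and are not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic history for one client: rows are context vectors [TR, AR, AC, BS, CI, PI]
C = rng.uniform(low=[2, 0.5, 10, 0, 0, 1e5],
                high=[12, 8, 100, 1, 100, 1e6], size=(50, 6))
# synthetic targets: seconds per training batch and battery % drop per batch
t_batch = 2.0 + 8.0 / C[:, 1] + 2e5 / C[:, 5]          # slower with less RAM / weaker phone
d_batt = 0.05 + 0.002 * C[:, 4] - 0.0002 * C[:, 2]     # drains more under high CPU usage
Y = np.column_stack([t_batch, d_batt]) + 0.05 * rng.normal(size=(50, 2))

def fit_ridge(C, Y, lam=1e-3):
    """Closed-form ridge regression mapping context -> [batch_time, battery_drop]."""
    X = np.hstack([C, np.ones((C.shape[0], 1))])        # add an intercept column
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)

def predict(theta, c):
    return np.append(c, 1.0) @ theta

theta = fit_ridge(C, Y)
c_new = np.array([8.0, 3.2, 55.0, 1.0, 35.0, 4.2e5])    # fresh context vector
batch_time, batt_drop = predict(theta, c_new)
print(f"predicted s/batch = {batch_time:.2f}, battery drop/batch = {batt_drop:.3f}%")
```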
We make use of this information by training a neural network to understand these dependencies and predict the expected training time and battery charge consumption based on a context vector. Once, we get to estimate these two parameters of each client, we can group clients together in a manner that minimises the waiting time of each client. §.§ Neural reward generator ruled In this section, our goal is to learn the training time, and battery drop based on the given resources. Further, use these values to learn a better client selection by removing straggler clients or by doing an adaptive setting of the training time for clients. We will first give a brief overview of the basic contextual combinatorial multi-armed bandits (CC-MAB) problem using neural upper confidence bound (NeuralUCB)<cit.> policy. We will then use this in our FL framework setting to optimise the overall waiting time, based on the context information. We formulate our client selection problem as a CC-MAB setting with N arms in which the agent interacts with the arms for T rounds. Initially, the true rewards generated by the arms are unknown, and the agent observes only the N context vectors from corresponding arms: {𝐜_𝐭,𝐬∈ℝ^d|s∈{1⋯ N}}. Let 𝐒⊆ 2^N be the set of all proper subsets of N available arms in the setting. At each round t, the agent predicts the rewards {r̂_t,s}_s∈𝐒_𝐭 using a reward estimating function (f) parameterised by θ, such that r̂_t,s = f(𝐜_𝐭,𝐬;θ). Then, the agent selects a subset, 𝐒_𝐭∈𝐒 containing k arms based on the calculated rewards. In most cases, the agent is aware of the parametric form of the reward estimating function that is being used. In linear upper confidence bound problems (LinUCB)<cit.>, for example, the reward is calculated as r̂_t,s = θ_*^T𝐜_𝐭,𝐬. After playing the arms in 𝐒_𝐭, the agent observes the true rewards given as {r_t,s}_s∈𝐒_𝐭. The goal of the agent is to maximise the expected reward, which is equivalent to minimising R_T, the cumulative regret over T rounds: R_T = 𝔼[∑_t=1^T( ∑_s ∈𝐒^*_𝐭r_t,s - ∑_s ∈𝐒_𝐭r̂_t,s) ] where 𝐒^*_𝐭 is the set of k optimal arms with maximum true rewards at round t. Unlike the classical linear contextual bandit where the reward estimating function f(𝐜_𝐭,𝐬;θ) is linear, we utilises a neural network to deal with the intricate relationship between context features and rewards. The neural network based reward estimating function f(𝐜_𝐭,𝐬;θ) estimates the expected reward of an action based on past observations, and is parameterised as: f(𝐜_𝐭,𝐬;θ) = √(m)𝐖_𝐋σ(𝐖_𝐋-1σ(⋯σ( 𝐖_1( 𝐜_𝐭,𝐬)))) where L is the total number of hidden layers, m is the hidden layer size (assumed to be same for all the layers for convenience), σ is the activation function, and 𝐖_𝐥 corresponds to the weight matrix in l^th layer. Here θ is the vectorised weight matrices from all the hidden layers in the neural network and is given as θ = [vec(𝐖_1^T),vec(𝐖_2^T) ⋯vec(𝐖_𝐋^T) ]. Our proposed approach for client selection considers the clients or edge devices participating in FL as the arms with resource information discussed in Section <ref> as the context vectors. We also observe that using a single reward generation model f(𝐜_𝐭,𝐬;θ_t-1) for multiple client devices (NeuralUCB-s) may lead to performance degradation if the edge devices have different intrinsic characteristics such as age and usage history. For example, consider two identical phones, one of which has been in use for 5 years and the other of which is brand new. 
However, if we calculate the training time and battery drop of both phones under similar contexts, they do not match. We see that the older phone performs badly and drains faster due to the aging of batteries. Also, it is dependent on how extensively the phone is used over time. If we use a single model, we miss out on these relationships, and the model gets confused if such clients exist in our rounds with degradation in performance. On the other hand, personalised reward generation models f_s(𝐜_𝐭,𝐬;θ_t-1^s) (NeuralUCB-m), can better adapt to the unique characteristics of each client device and provide more accurate results. In our approach, the neural network predicts the training time per batch of samples and also the drop in battery percentage given the resource information as the context vector, i.e., [b̂_̂t̂_t,s, d̂_t,s]=f_s(𝐜_𝐭,𝐬;θ_t-1^s). We use the negative of training time per batch (-b̂_̂t̂_t,s) for each arm as the reward in the UCB setting and the battery drop is used for straggler handling. Once the training is completed for round t, the clients will send their [b_t_t,s, d_t,s] along with the updated weights. Then, (𝐜_𝐭,𝐬,[b_t_t,s, d_t,s]) will be added to the corresponding clients' dataset (𝐃_t,s) and will be used for training the neural network. The detailed algorithm for neural combinatorial contextual bandits is shown in Algorithm <ref>. ruled §.§ Resource-aware time-optimised algorithm for client selection Initially, all the available clients {s∈{1⋯ N}} who wish to participate in t^th FL round, express their interest by sending their context vectors {c_t,s} and number of samples available for training (n_t,i) to the server. The server then selects k clients based on Algorithms <ref> and <ref> and notifies the selected clients about their selection along with the number of epochs (e_t,i) they need to run in that round which is computed using Algorithm <ref>. The methodology is as follows: * Calculate: (a) time taken to complete training a batch of data (b̂_̂t̂_t,i), (b) drop in battery on training a batch of data (d̂_t,i), and (c) the maximum number of batches (b_max_t,i), each client can run while maintaining the battery above γ%. * Calculate the maximum number of epochs (e_max_t,i) each client can run from b_max_t,i, n_t,i, e_max and bs * Filter out the clients who can run a minimum of e_min epochs into a set (P_t) * Create a set (S_t) by picking min(k, |P_t|) clients of P_t using Algorithm <ref>. * Calculate the time taken to run e_max_t,i epochs for each client and take the minimum of all these times (m_t). m_t will be the maximum time the FL round can happen while ensuring that no client switches off in this time. * Calculate the number of epochs (e_t,i) each client can run in m_t time. * Notify each client of S_t about their selection for that FL round along with their respective e_t,i. Thus, this algorithm ensures to adaptively choose the number of training epochs for the chosen clients while taking into account the available resources and minimising the waiting time for each client during FL rounds. § ED-FED FRAMEWORK SETUP We explain the entire setup used for evaluating Ed-Fed framework in both simulation and mobile environments. §.§ Setup for system-based evaluation In this section, we present the simulation settings used for evaluating our Ed-Fed framework for ASR tasks. The objective of this experiment to make the global model robust to multiple accents by learning from different accented clients. 
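Returning briefly to the client selection of the previous section, the sketch below outlines the epoch-budgeting steps in simplified form; the helper names, the default thresholds, and the omission of the fairness-based choice of k clients are our own simplifications rather than the exact implementation.

```python
import math

def budget_epochs(clients, predict, gamma=20.0, e_min=1, e_max=7, bs=8):
    """Illustrative epoch budgeting for one FL round.

    clients: dict name -> {"context": c_t_i, "n_samples": n_t_i, "battery": %}
    predict: callable(context) -> (time_per_batch_sec, battery_drop_per_batch)
    Returns the per-client epoch assignment e_t_i for the eligible clients.
    The fairness-based picking of k clients among the eligible set is omitted.
    """
    eligible = {}
    for name, info in clients.items():
        b_time, b_drop = predict(info["context"])
        batches_per_epoch = math.ceil(info["n_samples"] / bs)
        # Maximum batches while keeping the battery above gamma percent.
        b_max = max(0, int((info["battery"] - gamma) / max(b_drop, 1e-6)))
        e_cap = min(e_max, b_max // batches_per_epoch)
        if e_cap >= e_min:                       # filter out likely stragglers
            eligible[name] = (b_time, batches_per_epoch, e_cap)
    if not eligible:
        return {}
    # m_t: shortest time in which some eligible client exhausts its epoch cap.
    m_t = min(bt * bpe * cap for bt, bpe, cap in eligible.values())
    # Each client runs as many epochs as fit into m_t, within [e_min, e_cap].
    return {name: max(e_min, min(cap, int(m_t // (bt * bpe))))
            for name, (bt, bpe, cap) in eligible.items()}
```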
We use an end-to-end acoustic model similar to DeepSpeech2 <cit.> architecture, collectively trained on datasets such as Librispeech <cit.>, commonvoice <cit.> and tedlium <cit.> as our initial global model. For FL experiments, we created an audio corpus using a text-to-speech (TTS) system <cit.> for 15 different accented speakers to simulate unique clients. Each speech sample is about 8-10 seconds, with an average label length limited to 150 characters. We run the FL experiments by associating one speaker data to one client. To evaluate the global model's accuracy, we created a test set consisting of speech samples from various speakers. The simulation environment uses a single server and multiple python clients. We conduct a series of experiments, where we use our Ed-Fed framework to train the baseline ASR model on our audio corpus. We repeated the experiment for different values of k varying from 3 to 5. Each experiment is run for T=5 rounds with a fixed k and the k clients are randomly chosen from a pool of 10 readily available clients. We use the server strategy algorithm explained in Section <ref> to aggregate the weights from the selected k clients. We train the model with 25 samples and a validation set of 10 samples. All of our experiments were carried out using NVIDIA RTX 3090 and 3080 GPUs on a 10-core Intel i9-10900K CPU. §.§ Mobile phone based Evaluation With this set of experiments, we discuss the technical details associated with running Ed-Fed on edge devices. We present results from deploying our framework on our custom-built android application which allows the user to record speech samples. These recorded samples gets stored into the local memory of the application. We save the datasets for training and testing in the storage cache of the mobile phone. We train the model with 25 samples and a validation set of 10 samples. We host our python Ed-Fed server with WER based aggregation strategy on a local machine. The Ed-Fed clients are the Android mobile phones listed in Table <ref>. We use the optimised model mentioned in Section <ref> for the on-device personalisation of the ASR model. § RESULTS §.§ Effect of resources on the training time Figure <ref> depicts the results of an experiment to show the effect of varying RAM on training time with the help of two scenarios: one with background apps running alongside our FL android application (less AR) and other with no background apps (high AR). We observe that with decrease in available RAM, there is a significant increase in training time per batch, across all the mobile phones. This is especially noticeable in Figure <ref> and Figure <ref>, where we see a jump of 49 and 33 seconds in training time respectively. Figure <ref> presents the results obtained during the experiment conducted to check the effect of the battery percentage on the phone's training time. We can infer from the figures that training time shoots up abruptly when in lower battery bands (γ= 20%), whereas it is almost constant in the upper battery bands for all the phones except Xiaomi 11 Pro. This is particularly evident in the OnePlus 5T phone where training time increased 2.4 times the regular time in lower battery bands. §.§ Neural reward generator For our contextual bandits experiment, we chose N=4 clients and the number of rounds T= 475. We ran multiple iterations of on-device training on the N mobile devices to generate the context vectors containing the resource information and noted down the time taken per batch and the battery drop. 
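A sketch of the kind of per-client reward model used in this setup, mapping a context vector to a predicted training time per batch and battery drop, is given below; the layer sizes follow the architecture described next, while the training call and variable names are illustrative assumptions.

```python
import tensorflow as tf

def build_reward_model(context_dim: int) -> tf.keras.Model:
    """Small MLP mapping a client context vector to (time per batch, battery drop)."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(context_dim,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(2),  # outputs: [predicted batch time, battery drop]
    ])

# NeuralUCB-m keeps one such model per client; after every round the observed
# (context, [time, drop]) pair is appended to that client's dataset and the
# model is refitted on it.
model = build_reward_model(context_dim=4)
model.compile(optimizer="adam", loss="mse")
# model.fit(contexts, targets, epochs=10)  # contexts: (n_obs, 4), targets: (n_obs, 2)
```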
We use the experimental setup described in <cit.> for LinUCB. The neural networks used in the NeuralUCB-s and NeuralUCB-m share the same architecture, consisting of a simple fully connected feedforward network with two hidden layers of 32 and 16 units, respectively with ReLu activations. The input to the network is a d dimensional context vector, and the outputs are the time and battery drop. For NeuralUCB-s and LinUCB, we use a single reward generating function for all the clients. As a result the context vector c=[TR,AR,AC,BS,CI,PI] contains all six features as discussed in Section <ref>. In contrast, we do not need to provide the total RAM (TR) and phone-specific information (PI) as features in the NeuralUCB-m approach with personalised models for each client device. Hence, d=4 is used in this case. We do a grid search over { 0.01,0.1,1.0,10} for the exploration term multiplier α_t=α for all the experiments, and tuned the parameter in such a way that the fairness achieved is similar across all the algorithms. For LinUCB, we selected α=10.0 and a value of 0.01 for NeuralUCB based algorithms. Figure <ref> trace the model's mean square error loss over rounds. We can see that neural network-based algorithms outperform LinUCB. Figure <ref> demonstrates that linear models are unable to accurately estimate the output and learn feature representations from the data. Comparing the results of NeuralUCB-s and NeuralUCB-m from Figures <ref> and <ref> respectively, we can observe that both loss curves looks similar with NeuralUCB-m performing slightly better over a long run. Figure <ref> plots the overall regret of all the algorithms against rounds. We present the findings based on an average of five repeated experiments using various random dataset shuffles. Further, we can also infer from Figure <ref> that NeuralUCB-m with disjoint personalised models for each client appears to outperform all other algorithms. §.§ Resource-aware time-optimised client selection Figure <ref> and Figure <ref> are the results obtained when we redo the experiments of Scenario 1 and Scenario 2 of Section <ref> with our client selection algorithm using our neural reward generator at t = 476. Table <ref> contains the detailed information of these two experiments. In Scenario 1, our algorithm identified that client 1 has weaker computing capabilities based on the b̂_̂t̂_t,i values computed. Since, client 1 has weaker computing capabilities, our algorithm assigns a smaller e_t,1 of 4 epochs in contrast to the 7 epochs given by random client selection approach. Thereby, reducing the overall waiting time to 7.42 minutes when compared to random client selection's 114.92 minutes. In Scenario 2, it is clear from e_max_t,i values of the Table <ref> that client 1 has weak battery resources. Our client selection algorithm deduced that client 1 cannot run 7 epochs and prevented the whole FL round from stopping by assigning a smaller e_t,1 of 3 epochs unlike random client selection. Further, our algorithm adjusted e_t,2 with respect to e_t,1 by giving it a value of 4 and reduced the waiting time even more. Whereas, the random client selection without any knowledge of resource information asks the client 1 to run e_max epochs. This leads to power shutdown of client 1, thereby making client 2 wait for infinite amount of time. §.§ Ed-Fed framework evaluation Figure <ref> displays the results obtained by choosing various values of k for each FL experiment. The figure shows three line plots, each corresponding to different values of k. 
Each line plot shows how the global model performs on the global test set after each round of FL. The global test set consists of 15 unseen speech samples each from 10 speakers with 4 different accents. We can deduce from the figure that as k increases, the WER of the global model decreases. Figure <ref> depicts the findings obtained on deployment of our Ed-Fed framework on multiple phones. The experiment is carried for 5 rounds on 4 mobile devices. In each round, 2 clients are selected. The round 0 in the figure refers to the initial global weights. All the checkpoints that are obtained at the end of each FL round are put to the test on a global test set. As could be predicted, the WER declines as the number of FL rounds grow. § CONCLUSIONS AND FUTURE DIRECTIONS In this work, we present Ed-Fed, a first-of-its-kind end-to-end federated learning framework that will serve as a foundation for future research in practical FL systems. We also propose a client selection algorithm that takes into account factors such as computation, storage, power, and device-specific capabilities to handle stragglers, optimise waiting time, and adapt training time based on resource information. The framework has been thoroughly tested in simulations and on actual edge devices, and in the future, we plan to incorporate communication latency parameters to measure waiting time. 1 FL McMahan, B., et al. “Communication-efficient learning of deep networks from decentralized data", Artificial intelligence and statistics. PMLR, 2017. flasr_wer Yan, G. et al. “End-to-end speech recognition from federated acoustic models." ICASSP 2022-2022 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022. flasr_ibm Cui, X., Songtao L., and Brian K., “Federated acoustic modeling for automatic speech recognition.", IEEE Int. Conf. on Acoustics, Speech and Signal processing (ICASSP), 2021. flasr_3 Yu, W., Freiwald, J., Tewes, S., Huennemeyer, F. and Kolossa, D.,“ Federated learning in ASR: Not as easy as you think", In Speech Communication, 14th ITG Conference (pp. 1-5), 2021. flasr_4 Guliani, D., Beaufays, F. and Motta, G.,“Training speech recognition models with federated learning: A quality/cost framework", In IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2021. flasr_5 Dimitrios D., Kenichi K., Robert G.,Yashesh G., and Eskimez, S E., “A federatedapproach in training acoustic models,” inProc. Interspeech, 2020. tff Google. “Tensorflow federated: Machine learning on decentralized data.", https://www.tensorflow.org/federated, 2020. accessed 25-Mar-20. leaf Caldas, S., Duddu, S. M. K., Wu, P., Li, T., Konecn y, J., McMahan, H. B., Smith, V., and Talwalkar, A., “Leaf:A benchmark for federated settings", arXiv preprint arXiv:1812.01097, 2018. flower Beutel, D. J., et al. “Flower: A friendly federated learning research framework", arXiv preprint arXiv:2007.14390 (2020). flower_old Mathur, A., Beutel, D. J., de Gusmao, P. P. B., Fernandez-Marques, J., Topal, T., Qiu, X., ... & Lane, N. D. (2021). On-device federated learning with flower. arXiv preprint arXiv:2104.03042. oort Lai, Fan, et al. “Oort: Efficient Federated Learning via Guided Participant Selection." OSDI. 2021. csucb Xia, Wenchao, et al. “Multi-armed bandit-based client scheduling for federated learning.", IEEE Transactions on Wireless Communications (2020). mabcs Yoshida, N., et al. “Mab-based client selection for federated learning with uncertain resources in mobile networks." 2020 IEEE Globecom Workshops. 
birdsoffeathers Cao, Hangrui, et al. “Birds of a Feather Help: Context-aware Client Selection for Federated Learning." Int. Workshop on Trustable, Verifiable and Auditable Federated Learning in Conjunction with AAAI (FL-AAAI). 2022. optimisedadaptiveFL Banerjee, S., Vu, X. S., Bhuyan, M.,“Optimized and Adaptive Federated Learning for Straggler-Resilient Device Selection", In 2022 International Joint Conference on Neural Networks (IJCNN) (pp. 1-9). tf Abadi, M. et.al., “TensorFlow: a system for Large-Scale machine learning", In 12th USENIX symposium on operating systems design and implementation. tflite Tflite, “On-device model personalization", https://blog.tensorflow.org/2019/12/example -on-device-model-personalization.html, 2020. ourarxiv Sasindran, Z., Suresh, R.R., Rao, P. and Prabhakar, T.V., “Training end-to-end speech-to-text models on mobile phones.", arXiv preprint arXiv:2112.03871,2021. credit Carlini, N., Liu, C. and Kos, J. ,Erlingsson, U., and Song, D., “The secret sharer: Measuring unintended neural networkmemorization & extracting secrets", URL http://arxiv.org/abs/1802.08232. grpc Foundation, C. N. C, “ grpc: A high performance, opensource universal rpc framework", URL https://grpc.io. Accessed: 2020-03-25. fedprox Li, T., Sahu, A.K., Zaheer, M., Sanjabi, M., Talwalkar, A. and Smith, V., “Federated optimization in heterogeneous networks", Proceedings of Machine Learning and Systems, pp.429-450, 2020. neuralucb Zhou, D., Li, L., and Gu, Q, “Neural contextual bandits with ucb-based exploration" In International Conference on Machine Learning, 2020. linucb Li, L., Chu, W., Langford, J., and Schapire, R. E.“ A contextual-bandit approach to personalized news article recommendation", In Int. Conf. on World wide web. ds2 Amodei, D., et al. “Deep speech 2: End-to-end speech recognition in english and mandarin." Int. Conf. on machine learning, PMLR, 2016. librispeech Panayotov, V., Chen, G., Povey, D. and Khudanpur, S.,“ Librispeech: an asr corpus based on public domain audio books", IEEE Int. Conf. on Acoustics, Speech and Signal processing (ICASSP), 2015. commonvoice Ardila, R., et al. “Common voice: A massively-multilingual speech corpus." arXiv preprint arXiv:1912.06670 (2019). tedlium Hernandez, F. et. al..,“ TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation", In Int. Conf. on Speech and computer, 2018. naturalreader Flood, J., “NaturalReader: A New Generation Text Reader",. Developmental Disabilities Bulletin, 35, pp.44-55, 2017.
http://arxiv.org/abs/2307.04049v1
20230708212820
Parallel Algorithms Align with Neural Execution
[ "Valerie Engelmayer", "Dobrik Georgiev", "Petar Veličković" ]
cs.LG
[ "cs.LG" ]
[ Parallel Algorithms Align with Neural Execution Valerie Engelmayeraux Dobrik Georgievcam Petar Veličkovićdm auxDepartment of Applied Computer Science, University of Augsburg, Augsburg, Germany camDepartment of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom dmGoogle DeepMind, London, United Kingdom Valerie [email protected] Machine Learning, ICML 0.3in ] Neural algorithmic reasoners are parallel processors. Teaching them sequential algorithms contradicts this nature, rendering a significant share of their computations redundant. Parallel algorithms however may exploit their full computational power, therefore requiring fewer layers to be executed. This drastically reduces training times, as we observe when comparing parallel implementations of searching, sorting and finding strongly connected components to their sequential counterparts on the CLRS framework. Additionally, parallel versions achieve strongly superior predictive performance in most cases. § MOTIVATION In neural algorithmic reasoning, neural networks (NN) act as computational machines. In graph neural networks (GNN), graph nodes take on the role of storage space (interpreting edge labels as nodes adjacent to its endpoints throughout this paper), while edges indicate which ways information may flow. The update function of choice defines the set of constant (neural) time operations. But note how nodes update their features in parallel, each one acting as a processor of its own rather than sheer memory. The parallel nature of neural networks is widely known. Running them in parallel fashion on processing devices like GPUs and TPUs drastically saves computational resources <cit.>. It seems natural that this translation between computational models would also hold the other way around. And indeed, Loukas loukas_what_2020 proves how Neural Networks (NN) are analogous to distributed computational models under certain assumptions. Kaiser & Sutskever kaiser2015neural exploit the advantages of parallel processing in their Neural GPU. Freivalds et al. freivalds_neural_nodate derive their architecture from the parallel computational model of Shuffle-Exchange-Networks. Xu et al. xu_what_2020 observe how their model learns to compute a shortest path starting from both ends in parallel when executing Bellman Ford. Veličković et al. velickovic_clrs_2022 and Veličković et al. velickovic_neural_2020 hint at parallelized computations whenever possible. It is time the parallel processing capabilities of NN are exploited systematically. Theory on parallel computational models and algorithms explicitly designed for them are abundant <cit.>. Their trajectories are shorter and align more closely with neural architectures, as illustrated in figure <ref>. Hinting at these during training teaches NN to execute algorithmic tasks much more efficiently than when providing hints for sequential algorithms, as we demonstrate in section <ref> for the examples of searching, sorting and finding strongly connected components. While it is common practice to modify the neural architecture for better alignment <cit.>, it seems promising to narrow the gap from the other side, by choosing algorithms that naturally align with neural execution. § PARALLEL COMPUTING Fundamentally, the parallel computational models addressed here assume multiple processors collaborating to solve a task. The line between parallel and distributed computing is blurry and depends on how controlled interactions between processors are. 
We assume a fixed and known interconnection graph, uniquely identified processors and a common clock to govern computation. Therefore, we choose to speak of parallel computing. §.§ Parallel Computational Models Processor Arrays. Communication may take place via hard-wired channels between the processors. These induce an interconnection graph that may in principle take any shape. At every time step, each processor executes some computation based on the contents of its local memory and the information received from its neighbours in the previous step, and may in turn send out a tailored message through any of its channels. PRAM Models. Alternatively, communication may be realised by reading from and writing to global memory, giving rise to PRAM (parallel random access machine) models <cit.>. Submodels allowing for concurrent reading and writing by multiple processors are referred to as CRCW PRAM. Different conventions exist on whether attempting to concurrently write different values is permitted, and if so, how to decide who succeeds. In the most powerful model, the priority CRCW PRAM, the value from the processor with the lowest index taking part in the concurrent write will be taken on. §.§ Efficiency Since multiple steps can be carried out at the same time, the required number of operations in a parallel algorithm does not impose a lower bound to its run time as in the sequential case, but the product of time and processor number. Optimal speedup is achieved if the use of n processors speeds up computation by a factor of n. This gives rise to a notion of efficiency frequently used in parallel computing <cit.>. The efficiency of a parallel algorithm solving a task of sequential complexity C on p processors in time t is defined as C/pt. It is not hard to see that optimal speedup entails an efficiency of Ω(1). §.§ Examples of Parallel Algorithms Searching. For a simple parallel search for value x in a descending list of n items, assume a priority CRCW PRAM with n processors. Distribute the first item to processor 1, the second to processor 2 etc., while x is stored in the global memory. If a processor's item is ≥ x, it tries to write its index to a designated location in the global memory. Since the one with the smallest index will succeed, the location now contains the desired position of x. The run time is independent of the input size[Distributing values to processors can be done in constant time by routing over the shared memory. We neglect distributing/returning in-/outputs from/to a host computer in the following as it is omitted in neural execution.], so the time-processor-product is Θ(n), missing optimal speed-up as searching can be done in O(log n). Sorting. Habermann habermann_parallel_1972 proposes a simple parallel sorting algorithm for a linear array of processors called Odd Even Transposition Sort (OETS). Each processor holds one item. In an odd (even) round, all neighbouring pairs starting at an odd (even) index swap their items if they are out of order. The two types of rounds take turns for at most n rounds total when n items are to be sorted, yielding O(n^2) operations when accounting for the n processors. Again, this is not optimal for comparison-based sorting, which may be done in O(n log n). Strongly Connected Components. Fleischer et al. rolim_identifying_2000 propose a Divide-and-Conquer algorithm for computing strongly connected components (SCC) of a digraph, which they call DCSC. First, find all descendants and predecessors of an arbitrary node, e.g. 
by carrying out breadth-first search (BFS) in the graph and its reversed version. The intersection of both sets constitutes a SCC. Observe how each further SCC has to be completely contained in either the descendants, the predecessors or the undiscovered nodes, such that the described routine may be called recursively for start nodes in each subset independently, until each vertex is assigned to a SCC. They prove an expected serial time complexity of O(n log n) for graphs on n nodes whose degrees are bounded by a constant. This is not optimal, but parallelization of the two searches per vertex, as well as of the recursive calls, may significantly speed up execution. §.§ Analogy to Neural Networks Loukas loukas_what_2020 formally establishes an analogy between models like processor arrays and GNN by identifying processors with graph nodes and communication channels with edges. Therefore, the width of a GNN corresponds to p, and its depth to t. Loukas coins the term capacity for the product of width and depth of a GNN, reflecting the time-processor product of parallel algorithms. The shared memory of a PRAM finds its neural analog in graph-level features. Since the computation of a graph feature may take into account positional encodings of the nodes, we may assume a priority CRCW PRAM, encompassing all other PRAM models. § EFFICIENCY OF EXECUTING ALGORITHMS NEURALLY Inspired by the definition of efficiency in parallel computing, we define the efficiency of a neural executioner as follows. Let 𝒩 be a GNN with capacity c(n) executing an algorithm 𝒜 of sequential complexity C(n). Define its node efficiency as η(𝒩, 𝒜) = C(n)/c(n). This definition implies an important assumption we make throughout this paper. When executing an algorithm on a GNN, one constant-time operation is to be executed per node per layer. This is not entirely unproblematic as discussed in section <ref>, but often expected when providing hints, and helps to identify theoretical properties. Under this assumption, node efficiency denotes the share of nodes doing useful computations throughout the layers. Since the computational cost of a GNN also scales with the number of messages that are being sent, it is insightful to study the share of edges that transport relevant information as well. Let 𝒩 be a GNN operating over a graph G=(V,E), m = |E|, to execute an algorithm 𝒜. Then we call an edge (i,j) ∈ E active at layer t for a certain input x, if the operation to be executed by node j at time t involves information stored at node i at time t-1. Let a(t) be the number of active edges at time t, and T the total number of time-steps. Then define edge efficiency as the worst-case share of active edges when processing inputs x_n of size n, ϵ(𝒩, 𝒜) = min_x_n 1/T ∑_t=1^T a(t)/m. Note how neural efficiencies are defined relative to the algorithm they are executing as opposed to the task they solve. This allows for a neural executioner to be efficient in executing an algorithm that is itself not efficient in solving a task. §.§ Parallel Algorithms Entail Higher Efficiency Contradicting a GNN's parallel nature by teaching it to execute sequential algorithms artificially impedes the task. Training to solve tasks in parallel instead is more efficient, which may also simplify the function to learn. Shorter Trajectories. As observed by Loukas loukas_what_2020, the complexity of an algorithm lower bounds the capacity of a GNN executing it.
If the number of processors is one, the depth alone needs to match the complexity, while the width might theoretically be set to one. But in practice, the width has to scale with the input size n to ensure applicability to different n. Therefore, training sequential algorithms forces overspending on capacity by a factor of n. Setting the width to n, as is often done to distribute one unit of information over each node, entails n available processors. Making use of them may shorten the trajectory of an algorithm by a factor of up to n in the case of optimal speedup, which allows the capacity to take on its lower bound. The capacity of a GNN directly translates to the time needed to train and execute it. Additionally, long roll-outs give rise to an issue Bansal et al. bansal_end–end_2022 refer to as overthinking, where many iterations degenerate the behaviour of a recurrent processor. Less Redundancy. Neural efficiencies denote the share of nodes and edges involved in useful computations. Redundant computations not only harm run times, but may also interfere with the algorithmic trajectory. Parameterising them correctly to prevent this can complicate the function to learn. Assuming the redundant nodes (grey in figure <ref>) need to preserve their information to be processed or put out later, their self-edges should execute an identity, while the additional incoming messages need to be ignored, i.e. mapped to a constant. In practice, this will be hard to do, which could entail a temporal variant of oversmoothing, where relevant information gets lost throughout the layers <cit.>. Oyedotun et al. skipconnections highlight how skip connections help to avoid the issue, Ibarz et al. ibarz_generalist_2022 introduce a gating mechanism to leave information unchanged, Bansal et al. bansal_end–end_2022 let their architecture recall the original input. So let's explore the efficiency of executing sequential and parallel algorithms. Let 𝒩 be a scalable GNN operating over a graph with n nodes and m edges. Further let 𝒜_seq be a sequential and 𝒜_par an efficient parallel algorithm on n processors, both of complexity C. Then executing 𝒜_seq and 𝒜_par on 𝒩, respectively, entails efficiencies η(𝒩, 𝒜_seq) = O(1/n), ϵ(𝒩, 𝒜_seq) = O(1/m), η(𝒩, 𝒜_par) = O(1), ϵ(𝒩, 𝒜_par) = O(n/m). As observed above, the capacity c of a GNN executing a sequential algorithm of complexity C has to be c ≥ nC, while it may be c=C in the case of optimal speedup. Node efficiencies follow immediately. Since one processor can read only so much information, only a constant number of edges can be active at each layer during sequential processing, while up to a multiple of n edges can be active during parallel algorithms. This yields the stated edge efficiencies. Therefore, the share of nodes avoiding redundant computation cannot exceed 1/n when executing sequential algorithms, whereas it may reach up to 1 for efficient parallel algorithms. At the same time, the number of redundant messages is reduced by a factor of n. Removing the artificial bottleneck of a single processor prevents data from having to be stored until the processor gets to it. Allowing nodes to carry out meaningful computation frees them of the dead weight of acting as memory. Local Exchange of Information. In neural networks, information exchange is inherently local. The feature h_i^t of node i at time t may only depend on itself and its neighbours 𝒩_i. E.g.
for a permutation-invariant MPNN <cit.>, h_i^t = f(h_i^t-1, ⊕_j ∈𝒩_i g(h_i^t-1, h_j^t-1)). This paradigm is often not respected by classical algorithms, as depicted in figure <ref>. In the RAM model, the state h_i_t^t of register i_t updated at time t may depend on any two registers j_t and k_t: h_i_t^t = f^t_i (h_k_t^t-1, h_j_t^t-1), j_t, k_t arbitrary. Not being able to restrict which nodes have to communicate may render it advisable for a GNN to operate over a complete graph to make sure all necessary information is available at all times (see e.g. <cit.>). The situation is different in the setting of interconnected processing arrays, see figure <ref>. For example, OETS only ever requires neighbouring processors to compare their items. In general, at time t, the memory state h_i^t of processor i is computed by h_i^t = f^t_i (h_i^t-1, ||_j ∈ J_i^t h_j^t-1), J_i^t ⊆𝒩_i, where the concatenation indicates how i may tell apart its neighbours. Therefore it suffices for the GNN to only rely on edges present in the interconnection graph. To emulate a PRAM algorithm, an empty graph would in principle be enough, though it might not be deemed advantageous to route all communication over the graph feature in practice. Restricting the number of edges further reduces the use of resources and may help performance, since fewer unnecessary messages are being passed. Interconnection graphs are mostly chosen to be sparse, enabling maximum edge efficiency. § METHODOLOGY To test the hypothesis, we consider the two elementary tasks of searching and sorting, as well as computing SCC as an example of a graph algorithm. The parallel algorithms are chosen from section <ref>; as sequential counterparts we use binary search, bubble sort and Kosaraju's SCC algorithm from the CLRS-30 benchmark <cit.>. Key data of the GNN we use are listed in table <ref>. We compare performances across various processor networks, namely the widespread architectures of DeepSets <cit.>, GAT <cit.>, MPNN <cit.>, and PGN <cit.>. The trajectories of the new algorithms are encoded for the CLRS framework as follows. Note that in every case, randomized positional information, as proposed by Mahdavi et al. mahdavi_towards_2023 and standard on CLRS, is provided as part of the input, to emulate the situation of uniquely identified processors. §.§ Searching Parallel Search. The hints for parallel search of x in A closely resemble its template. As can be seen in figure <ref>, each item A_i of A is represented by one node of an empty graph. A node indicates whether A_i ≤ x. The position rank_A (x) of x in A is predicted by the graph feature as a categorical variable over the nodes (see <cit.>). Therefore we introduce an extra node carrying x as a placeholder to allow for as many categories as possible positions of x. To perfectly predict the outcome in this setting, the graph nodes may be updated by h_i = ReLU (A_i -x), yielding h_i = 0 if and only if A_i ≤ x. So the graph feature may be computed by rank_A (x) = min{i=1,…,n : h_i = 0 }. These steps closely align with the considered neural update functions, especially since the function updating the graph level possesses its own set of parameters. Additionally, the roll-out has constant length, leaving room for only a constant number of redundant edges, see figure <ref> and table <ref>. Altogether, we expect high performance on parallel search. Binary Search. As opposed to parallel search, binary search has an optimal complexity of O(log n).
But given the need for n nodes, it still requires an enhanced capacity of O(n log n), yielding low node efficiency. In CLRS-30, binary search is executed on a complete graph (whose edges are omitted in figure <ref> to avoid clutter), impairing edge efficiency, see table <ref>. Low efficiency is visible in figure <ref> by the amount of grey components. §.§ Sorting OETS. Actually swapping the items would require making numerical predictions. Instead, we predict changing predecessors as , following preimplemented examples. To still provide edges between nodes holding items to compare, we have to operate on a complete graph, sacrificing edge efficiency (see table <ref>), since only Θ(n) edges are active in each round, so ϵ = n/n^2. As hints, we feed for each round the current predecessors along with an edge indicating whether two nodes have to switch their role, and a graph-level with the parity of the round, serving as rudimentary clock. Bubble Sort. Though Bubble Sort induces the same amount of operations O(n^2) as OETS, it requires a larger network to be executed on (table <ref>). Again, along with operating over a complete graph, this entails low efficiencies. §.§ Strongly Connected Components DCSC. We input the undirected adjacency matrix as edge , along with the directed one as . Parallelizing the recursive calls of DCSC on multiple disjoint sets would require an extra feature dimension for every search that is going on. Therefore we only let the two BFS starting from the same source node be executed in parallel, which we each encode as is standard in CLRS-30. Additionally, a binary on each node is flipped to 1 as soon as it is discovered from both directions, indicating it belongs the currently constructed SCC (this is reset at the start of every new search). At the same time, it receives a to the source, which in the end constitutes the output. Throughout, we keep track of undiscovered nodes in another node . We choose the node with the smallest index from this set as next source. DCSC spends most of its time on the repeated BFS, a subroutine known to be learned well even on relatively simple architectures <cit.>, as it aligns well with neural execution <cit.>. Note how they let each node consider all its incoming edges in parallel, as is done on CLRS-30. This not only allows the trajectory to be shortened from O(n+m) to O(n), but also prevents redundant computations from having to be handled explicitly. Except for the source, each node can carry out the same computation at each step (see <cit.> for details) – just that this will only change its state whenever information flowing from the start node reaches it. DCSC only has to pass the index s of the source node instead of computing predecessor pointers, so computation looks like depicted in figure <ref>, closely resembling the situation in figure <ref>. Therefore, efficiency is expected to be less important for predictive performance in this special case. An obvious upper bound to DCSC's run time is O(n^2), accounting for one (two-sided) BFS per node, resulting in the big capacity reported in table <ref>. There is also no guarantee for more than one node and edge being active per step per BFS, resulting in low efficiencies. But this represents edge cases at best, such that the average trajectories will be much shorter and more efficient, as experiments will show. The core of DCSC aligning so well with neural execution promises good results. Kosaraju. 
The skeleton of Kosaraju's algorithm as implemented in CLRS-30 on the other hand is formed by a depth first search (DFS), which is more challenging for neural executioners <cit.>. As opposed to the closely related BFS, it is hard to parallelize. In fact, when relying on lexicographic ordering for tie-braking, it is considered an inherently sequential algorithm <cit.>. Since nodes have to wait for the search to retract from its siblings, computation cannot be carried out as in figure <ref>, so processing needs be timed correctly. The total run time is O(n+m), entailing the capacity and efficiencies reported in table <ref>. § RESULTS Predictive performance is reported in table <ref>. As expected, parallel search achieves almost perfect results. Meanwhile, training time is reduced by a factor of almost 3 as compared to binary search (see figure <ref>). Despite DCSC's only partial parallelization and the asymptotically optimal linear run time of its sequential opponent, training time is more than halved for the SCC task. At the same time, predictions become up to more than twice as accurate. On the sorting task, the sequential algorithm entails better accuracy, with the parallel one mostly falling within one standard deviation. Though both algorithms require the same asymptotic number of operations, training OETS takes a fraction of the time needed for bubble sort (figure <ref>). § DISCUSSION Neural efficiency only loosely correlates with predictive performance when comparing tables <ref> and <ref>. This is not too surprising, since correctly parameterising redundant computations is only one of many aspects that make a function hard to learn. We propose a rather one-sided relationship, where low efficiencies can harm accuracy (if not circumvented as in BFS, see section <ref>), but high efficiencies do not necessarily enhance learning success. We would like to highlight the importance of taking the perspective on neural networks as computational models when executing algorithms, as it opens access to the rich theory of computational complexity. E.g. the classes of NC (efficiently parallelizable) and P-complete problems (mostly thought of as inherently sequential) <cit.> inform us on which tasks may be hard to execute neurally, to tackle them more effectively. However in doing so, it is important to keep in mind the gap between the respective sets of constant time operations, with none being strictly more powerful than the other. On the one hand, a single RAM instruction may need to be approximated by entire subnetworks. On the other hand, one neural step suffices to process all incoming edges of a node during execution of BFS <cit.>. This breaks up the strict correspondence between time-processor product and capacity. § CONCLUSION As suggested in section <ref>, parallel algorithms prove to be a lot more efficient to learn and execute on neural architectures than sequential ones. Often, OOD predictions on algorithmic tasks are significantly improved as well, suggesting that higher node and edge efficiency can help learning. Future work has to show how performance is impacted for other tasks, on more elaborate architectures like in <cit.>, and in generalist settings. § ACKNOWLEDGEMENTS We would like to thank Razvan Pascanu and Karl Tuyls for their valuable comments, as well as Pietro Liò for insightful discussions and Torben Hagerup for the support he provided. icml2023
http://arxiv.org/abs/2307.05240v1
20230711131402
The filament determination depends on the tracer: comparing filaments based on dark matter particles and galaxies in the GAEA semi-analytic model
[ "Daria Zakharova", "Benedetta Vulcani", "Gabriella De Lucia", "Lizhi Xie", "Michaela Hirschmann", "Fabio Fontanot" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO" ]
firstpage–lastpage Does pre-training on brain-related tasks results in better deep-learning-based brain age biomarkers? Bruno M. Pacheco1 Victor H. R. de Oliveira1 Augusto B. F. Antunes2 Saulo D. S. Pedro3 Danilo Silva1, for the Alzheimer’s Disease Neuroimaging Initiative Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: <http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf> August 12, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Filaments are elongated structures that connect groups and clusters of galaxies and are visually the striking feature in cosmological maps. In the literature, typically filaments are defined only using galaxies, assuming that these are good tracers of the dark matter distribution, despite the fact that galaxies are a biased indicator. Here we apply the topological filament extractor DisPerSE to the predictions of the semi-analytic code GAEA to investigate the correspondence between the properties of z=0 filaments extracted using the distribution of dark matter and the distribution of model galaxies evolving within the same large-scale structure. We focus on filaments around massive clusters with a mass comparable to Virgo and Coma, with the intent of investigating the influence of massive systems and their feeding filamentary structure on the physical properties of galaxies. We apply different methods to compare the properties of filaments based on the different tracers and study how the sample selection impacts the extraction. Overall, filaments extracted using different tracers agree, although they never coincide totally. We also find that the number of filaments ending up in the massive clusters identified using galaxies distribution is typically underestimated with respect to the corresponding dark matter filament extraction. Cosmology: large-scale structure of Universe, Galaxies: clusters: general, Galaxies: evolution, Galaxies: formation § INTRODUCTION On scales of the order of a few Mpcs, the distribution of dark matter and galaxies in the Universe is not uniform. These inhomogeneities, that are well visible in observations (SDSS ,  ; 2MASS , , SAMI ), are also identified in cosmological simulations assuming a cold dark matter and an accelerated expansion of the Universe  <cit.> and define what is often referred to as `cosmic web'. The formation of large-scale structure is predicted as a natural outcome of the evolution of the non-linear growth of primordial density perturbations <cit.>: over-dense regions grow in size and become denser, while underdense regions expand under the action of gravity <cit.>. 
As a result, the cosmic web can be described as an ensemble of haloes of different mass connected by filaments <cit.>. The formation and evolution of galaxies takes place inside the cosmic web, and the properties of galaxies are inextricably linked to their environment. Galaxies in rich clusters tend to have a more spherical shape, to be redder and more massive <cit.>, to have lower star formation activity  <cit.>, and contain less gas <cit.> than their field counterparts. Galaxies in filaments have properties that are intermediate between clusters and the field <cit.>. They are typically more massive, redder, more gas poor, and have earlier morphology than galaxies in voids <cit.>, but have later type morphology and are more star forming than cluster galaxies (e.g. ). Recent observational works also suggest that they have more extended ionized gas distributions <cit.> and reduced atomic HI gas reservoirs <cit.>. Various mechanisms have been proposed as responsible for the observed trends, e.g. mergers, stripping, tidal interactions, and starvation <cit.>. However, the relative role of the different physical mechanisms as a function of environment still needs to be quantified <cit.>. Many different tools have been developed to identify the different components of the cosmic web, including algorithms to identify filamentary structure. These include methods based on particle density distribution analysis (DisPerSE, ), on density and tidal fields analysis (NEXUS+, ), velocity shear tensor-based cosmic web analysis (COWS, ), and deep learning <cit.>. A detailed comparison of many of these algorithms has been presented in <cit.>. Albeit different filament finders are available, an operational and rigorous definition of a "filament" is still missing. Different studies adopt different detailing of the cosmic web and use datasets with different characteristics, with the result that they are sensitive to different properties of the structures to be identified. The availability of large spectroscopic samples has provided significant impetus, in the last years, to both theoretical and observational work focusing on the detection and analysis of filamentary structure. Theoretical studies (e.g.,  ) have the advantage to be able to study both dark matter and galaxy distribution. Typically, the tracer used depends on the aim: dark matter particles are used to characterize the cosmic web structure (e.g. ), while galaxies are used when interested in characterizing the effect of environment on galaxies (e.g. ). Observational studies (e.g., ) do not have access to the dark matter distribution and typically rely on the assumption that the distribution of galaxies reflects well enough that of the dark matter. This is true only in first approximation: albeit the two components are intrinsically linked, the density of galaxies is a non-trivial function of the dark matter density (e.g. ), with the result that some galaxies are locally more (“biased”) or less (“anti-biased”) clustered relative to dark matter. Very few studies have so far focused on the difference between the determination of filaments using galaxies and dark matter, and these have mainly used hydrodinamical simulations. For example, <cit.> compared the skeletons obtained from the distribution of galaxies in the COSMOS2015 <cit.> survey, and from the distribution of galaxies and dark matter in realistic mock catalogues extracted from a lightcone built from the cosmological hydrodynamical simulation HORIZON-AGN <cit.>. 
They found that a (small) fraction of dark matter filaments have no counterpart in the distribution of galaxies. In this paper, we characterize the cosmic web of simulated cosmological boxes, using separately dark matter particles and galaxies as tracers in the semi analytic model GAEA <cit.>. We focus on simulated regions around massive haloes, with the intent of investigating the influence of massive systems and their feeding filamentary structure on the physical properties of the galaxies. The two most studied clusters and infalling filaments in the local Universe are Virgo <cit.> and Coma <cit.>. Hence we focus on the filaments around simulated haloes of similar mass. While this paper is devoted to a purely theoretical analysis, in future work we will compare theoretical and observational results, and make dedicated predictions for testing the role of filaments in affecting the galaxy properties. The paper is organized as follows. In [sec:data_and_methods]Section 2, we provide a brief description of the semi analytic model used in our study, and describe the sampling and filament extraction method. In [sec3:properties]Section 3, we characterize the filamentary structures identified, and compare filaments detected in dark matter with their counterparts based on the galaxy distribution. In [sec:different_settings]Section 4, we discuss how galaxies with different properties (stellar mass and galaxy type) track filaments. We discuss and summarize our results in [sec:discussion]Section 5. § DATA AND METHODS §.§ GAEA In this work, we use the GAlaxy Evolution and Assembly (GAEA) semi-analytic model of galaxy formation and evolution. In particular, we use the model version that has been published in <cit.> that includes: (i) a detailed treatment for the chemical enrichment that accounts for the non instantaneous recycling of gas, metals and energy <cit.>, (ii) an updated treatment for the stellar feedback that provides a good agreement with the observed evolution of the galaxy mass-gas metallicity relation and of the galaxy stellar mass function up to z∼ 3 <cit.>, and (iii) an explicit treatment for partitioning the cold gas in its atomic and molecular component and for ram-pressure stripping of both the hot gas and cold gas reservoirs of satellite galaxies <cit.>. The model realization is coupled to the Millennium Simulation <cit.> - a dark-matter only N-body simulation of a box of side length equal to 500 Mpc/h comoving. The simulation adopts a WMAP1 cosmology with Ω_ m=0.25, Ω_ b=0.045, Ω_Λ=0.75, h=0.73, and σ_8=0.9. While more recent determinations suggest a lower value of σ_8, we do not expect this to have a significant impact on the results of this work other than on the number of massive haloes identified in the simulated box at z=0. The particle mass of the simulation is m_ DM = 8.6×10^8 M_⊙ h^-1, which translates in a galaxy stellar mass limit of approximately 10^9 M_⊙. In our analysis, we have used galaxies more massive than 10^9 M_⊙. GAEA follows the evolution of dark matter substructures until they disappear (i.e., they are stripped below the resolution of the simulation). When substructures are very close to the detection limit (20 particles), these could have issues with spuriousness at the low-mass end. However, model results are not significantly affected by this. Previous papers <cit.> have shown that model results converge down to galaxy stellar masses  10^9 M_⊙. 
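As a rough sketch of how the cluster-centred sub-volumes described next can be drawn from a halo catalogue, the snippet below applies a mass window plus an isolation requirement; the array names are hypothetical, the mass units follow the catalogue convention, and periodic boundaries are ignored for brevity.

```python
import numpy as np

def select_isolated_haloes(halo_mass, positions, m_lo, m_hi,
                           m_iso=4.7e14, box_side=70.0):
    """Return indices of haloes with m_lo <= M_h <= m_hi having no other halo
    at least as massive as m_iso inside a cube of side box_side centred on them."""
    candidates = np.where((halo_mass >= m_lo) & (halo_mass <= m_hi))[0]
    selected = []
    for i in candidates:
        inside = np.all(np.abs(positions - positions[i]) < box_side / 2.0, axis=1)
        inside[i] = False  # exclude the halo itself
        if not np.any(halo_mass[inside] >= m_iso):
            selected.append(i)
    return np.array(selected, dtype=int)

# Coma-like window: m_lo=6.0e14, m_hi=1.8e15; Virgo-like: m_lo=4.7e14, m_hi=6.0e14.
```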
Our study focuses on a detailed comparison between the filamentary structure identified using simulated galaxies and dark matter particles, when the full 3D information is available. In particular, we focus on the simulated volume at z=0, and select independent sub-volumes corresponding to boxes of 70 Mpc/h on a side. This size is similar to that typically adopted in recent studies on the environmental influence and properties of galaxies based on state-of-the-art hydrodynamical simulations (50 Mpc/h, TNG50, , or 100 Mpc/h,TNG100 , EAGLE , HORIZON AGN ), and it is sufficient to investigate a wide range of environments, from very sparse regions to very massive and dense structures. As mentioned in Sect. <ref>, the most investigated cosmic web regions in the local Universe are the Virgo and Coma clusters, located respectively 16 and 99 Mpc from us and with halo mass of 5 · 10^14 M_⊙ <cit.> and 6÷9 · 10^14 M_⊙ <cit.>, respectively. A number of observational studies investigate the filamentary structure around these clusters <cit.>, and future surveys will provide additional data for these regions (4HS <cit.>, DESI  <cit.>). To mimic the region around the Coma cluster, we focus on haloes with mass 6.0 · 10^14 M_⊙≤ M_h≤ 1.8 · 10^15 M_⊙ from the simulated volume. We also require that the selected haloes have no other companion haloes inside each cube with a mass equivalent to or larger than Virgo, to mimic real Virgo or Coma clusters. Only 9 haloes meet these criteria, within Millenium Simulation, and we will consider all of them in the analysis presented below. Similarly, to reproduce the Virgo cluster, we select haloes with 4.7 · 10^14 M_⊙≤ M_h≤ 6.0 · 10^14 M_⊙, and require them to be isolated and have no other Virgo-like haloes or more massive ones within the box. We find 12 haloes meeting our selection criteria, and we randomly extract 9 of them to be used in the following. Finally, as a comparison sample, we also consider 9 cubes centered randomly in the simulation to mimic the general field. Therefore, our analysis is based on 27 simulated sub-volumes that are representative of three different large-scale overdensity environments. §.§ Filament extraction (DisPerSE) To identify filaments, we use the topological filament extractor DisPerSE <cit.>, a commonly used tool to characterize the large-scale structure distribution <cit.>. It identifies persistent topological features in two steps: first, DisPerSE evaluates the density distribution of tracers using a Delaunay tessellation algorithm <cit.> from an input of discrete positions of tracers (either in 3D or 2D). In a second step, DisPerSE calculates density distribution gradients and identifies zero gradient points (critical points such as minima, maxima and saddle points), as well as all the segments that trace the ridges of the density field. In this way, the "skeleton" of the distribution is constructed, and DisPerSE gives as output information about the filament structure (FS) as a set of critical points (also called vertices) and lines (defined by points) connecting them. The lines between the vertices are also called “segments” of the skeleton. To assess the robustness of the filamentary network and control the scales at which filaments are found, DisPerSE allows the user to tune a signal-to-noise criterion, called persistence: a filament is identified as an integral line connecting two critical points that represent a critical pair. 
The persistence of such a pair is the difference (or ratio) of the density values at these points, and can be expressed in terms of standard deviations (σ) of a minimal signal-to-noise ratio or of a cut-off value <cit.>. Depending on the goal of the analysis, it is therefore possible to decide whether faint tendrils should be included (with a trade-off of increased noise) or if the analysis should focus on large-scale, collapsed cosmic filaments corresponding to a large signal-to-noise ratio. In this analysis, we are interested in the filamentary structure around massive haloes, and we will favour large values of persistence (or threshold level) to identify only the predominant structures, while losing detail on small scales. Another important parameter that can be tuned while running DisPerSE is the "smoothness", which reduces inhomogeneities in the integral lines. It is possible (but not necessary) to apply a smoothing procedure N times, which averages the positions of N points of the filament. This setting smooths out the lines of the skeleton of the filaments. While a larger value of the smoothing parameter might define the filament structures better, smoothing too much leads to a shift of the filament axis and affects, for example, the density profiles. The value of this parameter should therefore be chosen with care. The level of smoothing also affects the final length of the skeleton. The smoothing parameters selected in this work are indicated below (we have verified that the adopted smoothing induces no shift in the density profile). We stress again that the smoothing procedure is not necessary and does not affect the results of this work. In this work, we are interested in investigating the difference between the filaments based on galaxies and those based on dark matter particles. Therefore, we run DisPerSE on each extracted cube twice, once using the positions of galaxies as predicted by the GAEA model and once using the positions of the dark matter particles from the snapshots of the simulation. In both cases, we use the 3D positions of the tracers. From now on, we will refer to the network obtained in the first run as the “Galaxy filament structure” (GFS), and to the network obtained in the second run as the “Dark matter filament structure” (DMFS). Since the number of galaxies and that of dark matter particles are very different in the cubes considered (see below), adopting the same parameters to define the filament structures would entail the derivation of very different networks, complicating the comparison. To avoid this, we set the parameters for the GFS to best reproduce the visually observed filament structure, and then fine-tune the parameters for the DMFS so that the total length of the identified filament network is comparable to that of the GFS. In the case of the GFS, only galaxies with M_* > 10^9 M_⊙ are considered, as this is the resolution limit of the model applied to the Millennium Simulation. Each cube comprises between 10,000 and 40,000 galaxies. This variation is related to the different number of haloes with M_h > 10^14 M_⊙ in each cube and to their mass. Regardless of the different number of objects in the different cubes, we use the same values of the DisPerSE parameters for all the 27 cubes, and set the persistence threshold level to 10^4. This persistence level best reproduces the visually observed filament structure, and we have checked that the cube-by-cube variation is not significant.
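As an illustration of this two-step procedure, the sketch below shows how the two skeletons could be produced with the DisPerSE executables, using the parameter values quoted in this section. This is a minimal sketch, not the actual pipeline used here: the option names (-cut, -upSkl, -smooth, -to) follow the public DisPerSE documentation and should be checked against the installed version, the output file naming is an assumption, and the input files are placeholders written in the format expected by delaunay_3D.

import subprocess

def extract_skeleton(coords_file, persistence_cut, smooth):
    """Delaunay density estimate + persistent skeleton + smoothing (DisPerSE pipeline)."""
    # 1) density field from the discrete tracers (DTFE via Delaunay tessellation)
    subprocess.run(["delaunay_3D", coords_file], check=True)
    # 2) persistent skeleton above the chosen persistence cut-off
    subprocess.run(["mse", coords_file + ".NDnet",
                    "-cut", str(persistence_cut), "-upSkl"], check=True)
    # 3) smooth the integral lines and convert to ASCII for post-processing
    # (the exact name of the skeleton file produced by mse depends on the version)
    skeleton = coords_file + ".NDnet.up.NDskl"
    subprocess.run(["skelconv", skeleton, "-smooth", str(smooth),
                    "-to", "NDskl_ascii"], check=True)
    return skeleton + ".a.NDskl"

# Galaxy filament structure (GFS) and dark matter filament structure (DMFS)
gfs  = extract_skeleton("galaxies_cube01.ascii",     persistence_cut=1e4, smooth=5)
dmfs = extract_skeleton("dm_particles_cube01.ascii", persistence_cut=1e9, smooth=3000)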
This is a relatively high cutoff value, so that only the topologically most robust filaments are included in the analysis. To further reduce the level of noise and inhomogeneities, we apply to our skeleton the DiSPerSE smoothing procedure 5 times, i.e., averaging the positions of 5 adjacent segments forming the filament. Again, this value is chosen after visual inspection. In addition, we removed the 10 percent of the shortest (by number of points)[These filaments are also the shortest in length.] filaments. The total length of the obtained skeleton based on galaxy distribution, averaged over the 27 cubes, is L_GFS = 840 ± 162 Mpc/h. The characteristic length, defined as the total skeleton length divided by the total cube volume is 0.0024 ± 0.0005 Mpc/Mpc^3. This value is in good agreement with what found by other studies (e.g. ). We now use the total length of the GFS to set the parameters to extract the DMFS. We run DisPerSE using all the DM particles. In this case, each cube contains from 2 to 4 · 10^7 particles. To obtain a total length that is comparable, within the errors, to that of the GFS, we find that we need to increase the cut off threshold to 10^9 and set the smoothness parameter to 3000. These much higher values with respect to those adopted for the GFS are due to the fact that DM particles are at least a factor of thousand more numerous than galaxies. Finally, we remove the 10 percent of the shortest filaments in each cube to get rid of small structures that would only add noise to the analysis. The average total length of the DMFS skeletons is L_DMFS = 847 ± 179 Mpc/h (corresponding to a characteristic length of 0.0025 ± 0.0005 Mpc/Mpc^3). Since the number of particles in the dark matter cube is about 30 times larger than the number of galaxies, also the corresponding output skeleton DMFS contains about 30 times more points than skeleton GFS (there is a higher density of reference points along the lines describing the filaments). As a result, DMFS are specified with a higher sampling than GFS. We hence reduce the number of DMFS points, preserving the shape of the skeleton, so that the median segment of the DMFS is roughly equal in length to the median segment of the GFS. This procedure is necessary to perform meaningful comparisons of the distance of galaxies/dark matter particles from the skeleton, as done in the next section. Fig. <ref> shows an example of the final network obtained by DisPerSE for one of our cubes with a Coma-like halo in the center. The left panel shows the galaxy distribution and the corresponding GFS (red curves), the middle panel shows the DM distribution and the corresponding DMFS (blue curves), and the right panel illustrates the two filament structures, for an easy visual comparison. While the two networks are broadly consistent to first approximation, and this is expected as baryonic matter follows approximately the dark matter, some discrepancies are already evident. § DARK MATTER AND GALAXY FILAMENTARY STRUCTURE We now aim at quantifying the similarities and differences between GFS and DMFS, to help understanding results from different works, and guide the interpretation of observational data, for which we do not have access to the information about the dark matter distribution. §.§ Filament length As already discussed, the definition of what a single filament is varies greatly depending on the approach used to define the skeleton <cit.>. In this work, we define filaments as DisPerSE integral lines that connect maxima points (i.e. 
the nodes of the skeleton) with saddle-points. Although – by construction – the total length of the skeleton is comparable for DMFS and GFS, some differences in the skeleton properties are observed (Fig. <ref>). First of all, skeletons extracted by different tracers consist of different numbers of filaments. For all but two cubes, the number of filaments in the GFS exceeds the number of filaments in the DMFS. The median difference between number of filaments in the DMFS and GFS, considering all cubes is 4, but can be as large as 13. Obviously, if the total length in the GFS and DMFS is the same, and the number of filaments is larger in the GFS, dark matter filaments must be longer. The filaments length distribution is shown in Fig. <ref>. The median length of DMFS filaments is 35±5 Mpc/h versus 30±5 Mpc/h for GFS, indicating that indeed overall DMFS filaments are always longer than those defined by galaxies (we have run a KS test on the distributions shown in Fig. <ref>, and found a p-value of 0.01468). The length of a filament represents the characteristic distance between critical points for the sampled cube. So, different values of filament length in the DMFS and GFS reflect topological differences between the distribution of dark matter and galaxies with M_∗ > 10^9. Moreover, <cit.> notes significant differences for short and long filaments[L_f < 9 Mpc/h for short and L_f ≥ 20 Mpc/h for long filaments for the volume and DisPerSE settings used in <cit.>.], which also indicates that the length of the filaments is related to the properties of the distribution (topology) of the particles. Finally, we note that unlike the number of filaments, the length of each filament depends strongly on the smoothing of the skeleton (the stronger the smoothing, the shorter the filaments). We provide additional discussion on the filament length in Appendix <ref>. §.§ Coincidence In this section, we aim at investigating the degree of the coincidence between the filamentary structures identified by using dark matter particles and galaxies. To quantify the overlap between the two skeletons, we use two different methods, one based on the "distance" between the skeletons of the two FSs, and another using additional information about critical points of the skeletons (maximas, minimas, saddle-points, bifurcation). In both cases, we use the DMFS, which we assume to be the “real” network, as reference and quantify how much the GFS deviates from it. §.§.§ Coincidence by distance between skeletons The method we discuss in this section has been already used in the literature <cit.>. To quantify how much the two skeletons overlap we proceed as follows: for each segments of the reference DMFS skeleton, we measure the distance to the nearest segment of the GFS skeleton[We obtain very similar results if we use the GFS as a reference, i.e. if we compute the distances of the DMFS from the GFS.]. If the structures were exactly the same, we would expect a distance between the skeletons close to zero for all the segments. In practice, also considering that the parameters adopted to determine filaments in the particles and galaxies distribution are not the same, we have to allow for some scatter, so we expect to obtain a unimodal distribution of distances peaked close to zero. The top panel of Fig. 
<ref> instead shows that, for each cube, the distribution of the distances of the GFS from the nearest DMFS is characterized by a bimodal distribution, with a first peak at about 0.4 Mpc/h, a minimum at about 2 Mpc/h and a second peak at 8 Mpc/h. The two peaks have similar height, and the distribution can be well approximated by the superposition of two log-normal distributions. We assume that the first peak corresponds to the set of filaments that coincide in the DMFS and GFS, and the second one to the set of DMFS structures that do not have a corresponding GFS section. The similarity of the height of the peaks indicates that approximately only half of the structures match. More precisely, the percentage of the DMFS structure that has a GFS segment closer than 2 Mpc/h (first mode) is 59±6%. The color intensity of the lines in Fig. <ref> reflects the number of massive haloes (M_h > 10^14 M_⊙) in each cube. The figure shows that, in the cubes hosting a low number of massive haloes (light purple lines), the height of the left peak is significantly smaller than that of cubes hosting a larger number of massive haloes. This behaviour is better seen in the bottom panel of Fig. <ref>, showing the cumulative distributions of the distances. Therefore, the DMFS and GFS have a stronger overlap in regions around massive haloes. To further support this claim, we check if the number of massive haloes in a cube has an impact on the results. Selecting only the cubes with more than 12 massive haloes, we find that the coincidence by distance between the GFS and DMFS skeletons improves (to 62±4%), while selecting only the cubes with less than 6 massive haloes, it worsens (down 53±4%). Finally, we note that if we compute coincidence by distance between skeletons separately for the three sets of cubes (Coma-like, Virgo-like and randomly selected), we do not measure any statistical difference. This is due to the fact that in all these cubes a mix of haloes of different mass is present. §.§.§ Coincidence by coverage of DMFS critical points The method presented above relies on the knowledge of the integral lines obtained by DisPerSE, ignoring the information about the nodes of the network. In this section, we present an alternative method for comparing the two skeletons, taking into account both pieces of information. We define coincidence as the fraction of critical points of the DMFS skeleton that are covered (crossed) by the integral lines of the GFS[Similar results are obtained when inverting DMFS and GFS in the analysis.]. A simplified illustration of this method is shown in Fig. <ref>. The image on the left shows two DisPerSE-like structures consisting of a set of critical points and segments. The image on the right shows only the red segments that intersect at least one critical point. Only 2 out of 6 blue critical points are intersected by red segments. That is, the coincidence by coverage of blue points is 33%. As for the previous method, we measure the distance between each critical point of the DMFS and the nearest segment of the GFS. The distribution of these distances is shown in the top panel Fig. <ref>. Similarly to Fig. <ref>, the distribution is bimodal. 
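Both coincidence measures reduce to the same nearest-neighbour query: distances from a set of DMFS points (segment midpoints in one case, critical points in the other) to the GFS skeleton. A minimal sketch of this computation is given below; the file names are placeholders, and the distance to a GFS segment is approximated by the distance to the points densely sampling it.

import numpy as np
from scipy.spatial import cKDTree

def distances_to_skeleton(query_points, gfs_points, dmax=2.0):
    """Nearest distances (in Mpc/h) from `query_points` to the GFS skeleton,
    and the fraction of points closer than `dmax` (the 2 Mpc/h division)."""
    tree = cKDTree(gfs_points)
    dist, _ = tree.query(query_points, k=1)
    return dist, np.mean(dist < dmax)

dmfs_midpoints = np.load("dmfs_segment_midpoints.npy")   # (N_seg, 3)
dmfs_critical  = np.load("dmfs_critical_points.npy")     # (N_crit, 3)
gfs_points     = np.load("gfs_segment_points.npy")       # (N_pts, 3)

d_seg,  f_seg  = distances_to_skeleton(dmfs_midpoints, gfs_points)  # coincidence by distance
d_crit, f_crit = distances_to_skeleton(dmfs_critical,  gfs_points)  # coverage of critical points

The arrays d_seg and d_crit correspond to the two bimodal distance distributions discussed here and below, and f_seg and f_crit to the quoted matching fractions.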
The first peak is located at less than 0.1 Mpc/h for the vast majority of the cubes considered, indicating that the critical points of the DMFS are very close to the segments of the GFS (closer than the characteristic distance for filaments identified by both skeletons, which is 0.4 Mpc/h according to the coincidence by distance between skeletons). The second peak is located at 9 Mpc/h. As for the previous method, we assume that this peak includes DMFS critical points that are not covered by GFS segments. If we use the 2 Mpc/h division to determine the fraction of critical points of one FS that are captured by the other skeleton, then Fig. <ref> shows that only about 52±8% of all critical points of the DMFS are matched by GFS segments in each cube. If, instead of taking into account all the critical points, we consider only those corresponding to the density peaks (maxima), we can estimate how well the GFS captures the density peaks of the DMFS, rather than assessing the level of overlap between the two skeletons. In this case, we find that the median proportion of DMFS maxima that have a GFS segment closer than 2 Mpc/h is 38±12% for all cubes. This value is strongly dependent on the number of massive haloes in the cube: it rises to 45±14% when considering only cubes with at least 12 haloes with M_h > 10^14 M_⊙, and drops to 33±3% when considering only cubes with less than 6 such massive haloes. Results do not change when considering the GFS maxima captured by the integral DMFS lines: 38±11% of galaxy density peaks are recovered. To summarize, we find that coincidence by distance between skeletons and coincidence by coverage of DMFS critical points provide consistent results, highlighting that a non-negligible fraction (about half) of the filaments obtained from the distribution of galaxies is not traced by overlapping dark matter filaments. §.§ Connectivity Filaments can be seen as bridges connecting haloes <cit.>. Another way to quantify how well the different filament structures overlap is to count the number of unique filaments that cross the virial radius of haloes more massive than 10^13.5 M_⊙. This parameter is called connectivity and is a commonly used characterization of the cosmic web <cit.>. To compute the connectivity, we do not require that the filament necessarily begins inside the halo, as we assume that a simple crossing of the virial radius can already impact the evolution of the halo. About 69 percent of haloes with M_h > 10^13.5 M_⊙ in all cubes are crossed by at least one filament, regardless of filament type (DMFS or GFS). The fraction rises to 90% for massive haloes (M_h > 10^14 M_⊙). However, some haloes can be intersected by several different filaments, and this number can differ depending on the tracer considered. For example, in Fig. <ref> we show the regions around two haloes with mass M_h = 1.6 · 10^15 M_⊙ (Coma-like, top) and M_h = 4.9 · 10^14 M_⊙ (Virgo-like, bottom), with the DMFS highlighted on the left and the GFS on the right. It is immediately clear that the network of filaments crossing the virial radius varies depending on the tracer used, with a different number of filaments detected around different haloes. The distribution of the number of individual GFS and DMFS filaments that cross the virial radius of haloes with M_h > 10^13.5 M_⊙ is shown in Fig. <ref>. The haloes of all cubes are parsed into mass bins and, for each bin, we calculate the mean and the variance.
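The counting behind this connectivity measure can be sketched as follows; halo positions, virial radii and the list of filament point chains are placeholders, and periodic boundaries are ignored for simplicity.

import numpy as np

def connectivity(halo_pos, halo_rvir, filaments):
    """Number of unique filaments crossing the virial radius of each halo.
    `filaments` is a list of (N_i, 3) arrays with the points sampling each
    filament; positions and radii are in Mpc/h."""
    counts = np.zeros(len(halo_pos), dtype=int)
    for fil in filaments:
        for h, (pos, rvir) in enumerate(zip(halo_pos, halo_rvir)):
            # a filament crosses the halo if any of its points lies within R_vir
            if np.any(np.linalg.norm(fil - pos, axis=1) < rvir):
                counts[h] += 1
    return counts

# e.g. mean connectivity in halo-mass bins, separately for the two tracers:
# k_gfs  = connectivity(halo_pos, halo_rvir, gfs_filaments)
# k_dmfs = connectivity(halo_pos, halo_rvir, dmfs_filaments)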
Regardless of the adopted tracer, the connectivity depends on the mass of the halo: the more massive the structure, the larger the number of filaments crossing its virial radius. Haloes less massive than Virgo are typically associated with one or no filament. Virgo-like haloes have one to three connected filaments, depending on the type of tracer used, and Coma-like haloes have even larger number of crossing filaments. For haloes with mass > 3 · 10^14 M_⊙, a larger number of dark matter filaments are detected than galaxy filaments. On average, the GFS connectivity is systematically lower than the DMFS connectivity by ∼1. Similar results, confirming the increase of connectivity with increasing mass of the halo, have been found by other studies <cit.>, albeit the values we find tend to be lower than those published in earlier work. This may be ascribed to the adopted threshold for considering filaments and detailing of FS. The connectivity we find for the Coma-like clusters is in agreement with <cit.>, who find a median value of the connectivity of 2.5. § DEPENDENCE OF THE GALAXY FILAMENT STRUCTURE ON THE PROPERTIES OF GALAXIES USED AS TRACERS In the previous sections, we have compared filaments extracted from the distribution of dark matter and from the distribution of galaxies with masses M_* > 10^9 M_⊙. Different choices could lead to significantly different results for what concerns the filament length, coincidence and connectivity. In what follows, we will inspect how the filament structures and some of the properties discussed above depend on the use of different tracers, and if/how much this affects the differences between GFS and DMFS that we have quantified in the previous section. Specifically, we use the GAEA semi-analytic model to test how results depend on galaxy stellar mass and what is the influence of `orphan' galaxies, i.e. those galaxies that are no longer associated with distinct dark matter substructures. In fact, the naive expectation is that, by using only galaxies that are either centrals of their own dark matter halo or associated with a distinct subhalo, we should obtain a better overlap with the filamentary structure that is traced by dark matter particles. §.§ Varying the stellar mass of the galaxy tracers It is well known that galaxies represent biased tracers of dark matter density, and it is now well established that galaxy bias depends on galaxy properties, such as stellar mass (for an overview, see e.g., ). Recent work has also highlighted that massive galaxies tend to concentrate around filaments, while less massive ones are found both in dense regions and in the `field' <cit.>. In this section, we investigate how the GFS skeleton changes when taking into account only galaxies above a given stellar mass. In particular, we inspect whether the overlap between the DMFS and GFS improves when considering galaxies with different stellar masses. Specifically, we consider four different thresholds for the stellar mass (M_* > 10^7 M_⊙, M_* > 10^8 M_⊙, M_* > 10^9 M_⊙ (GFS), and M_* > 10^10 M_⊙)[Using galaxies M_∗ < 10^9 M_⊙ is done for illustrative purposes, as the corresponding samples in GAEA are not fully resolved]. 
For each of the 27 cubes, and for each additional stellar mass cut considered, we run DisPerSE again and extract the corresponding filamentary structure, so that our condition on the total filament length is satisfied (the filament length - i.e., the sum of the lengths of all filaments - within each box should be approximately equal to that of the corresponding DMFS). Examples of regions around a Coma-like halo, a Virgo-like halo, and a region without any massive halo at the centre are shown in Fig. <ref>. The figure shows that the FS extracted only using the most massive galaxies (M_* > 10^10 M_⊙) traces only the `strongest' filaments in the dark matter distribution, for all three cases considered. Lowering the mass threshold to M_* > 10^9 M_⊙ adds a significant number of galaxies, which also tend to aggregate around the densest dark matter regions. The corresponding GFS branches out significantly around massive clusters, getting much closer to the DMFS (see Sec. <ref> for a quantitative comparison). Considering galaxies with lower stellar mass (10^8 M_⊙ < M_* < 10^9 M_⊙) entails a change in the topology of the galaxy distribution within the cubes. Indeed, such galaxies are distributed both in filaments and in the general field. As a consequence, we both detect new, fainter filaments that are missed in the GFS based only on more massive galaxies, and observe a change in the detailed shape of the filaments with respect to those identified using only more massive galaxies. Going further down in stellar mass does not introduce any additional significant change. This is due to the small number (only ∼ 23% of the galaxies with M_∗ > 10^7 M_⊙) of galaxies with stellar mass 10^7 M_⊙ < M_∗ < 10^8 M_⊙ in the cubes, because this stellar mass is well below the resolution limit of our model applied to the Millennium Simulation. This simple visual comparison shows that the stellar mass completeness of the observational sample plays an important role. Including galaxies with stellar mass lower than 10^9 M_⊙ improves the overlap between the DM and galaxy filamentary structures identified, but only slightly. In the following, we quantify the visual comparison just discussed. §.§.§ Coincidence by distance between skeletons In order to support the conclusions based on the visual inspection of Fig. <ref>, we quantify how much the coincidence by distance between the DMFS and the GFS varies when including galaxies of different mass. Fig. <ref> shows the probability distribution function of the distance of the GFS obtained using different stellar mass cuts from the DMFS; the reference case, corresponding to the GFS obtained using all galaxies more massive than M_∗ = 10^9 M_⊙, is shown by the dark green line. Similarly to what was discussed in Section 3, the distances between the GFS and DMFS follow a bimodal distribution, for all samples considered. The right peak corresponds to distances of about 9-10 Mpc/h, and represents the `non-coincident' parts of the skeletons. There is no significant improvement of the agreement between the GFS and the DMFS when including lower-mass galaxies. The fraction of DMFS segments that overlap with a GFS segment is always about 60 percent, with differences of around 1-3% when considering different samples of tracers.
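In practice, the mass-threshold comparison just described amounts to the loop sketched below. This reuses the hypothetical helpers introduced in the earlier sketches (here imported from a placeholder module, filament_tools); the file names, the column layout of the galaxy catalogue and the skeleton point parser are also placeholders.

import numpy as np
from filament_tools import extract_skeleton, distances_to_skeleton  # hypothetical helpers sketched above

galaxies = np.load("gaea_galaxies_cube01.npy")          # columns assumed: x, y, z, stellar mass [M_sun]
dmfs_midpoints = np.load("dmfs_segment_midpoints.npy")

coincidence = {}
for mcut in [1e7, 1e8, 1e9, 1e10]:
    sel = galaxies[galaxies[:, 3] > mcut, :3]
    np.savetxt(f"gal_mcut{mcut:.0e}.ascii", sel)
    # the persistence cut would be re-tuned for each tracer sample so that the
    # total skeleton length stays close to that of the DMFS (see Sect. 2.2)
    skl = extract_skeleton(f"gal_mcut{mcut:.0e}.ascii", persistence_cut=1e4, smooth=5)
    gfs_points = np.loadtxt(skl + ".points")            # placeholder for the skeleton point parser
    _, coincidence[mcut] = distances_to_skeleton(dmfs_midpoints, gfs_points)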
It is noteworthy that the left peak of the distribution of distances obtained when including lower-mass galaxies is shifted towards lower values, indicating that lower-mass galaxies allow a slightly better tracing of the dark matter filamentary structure. This is connected to the fact that, when low-mass galaxies are taken into account, the filaments contain a larger number of galaxies, which allows the galaxy density distribution to be defined with higher accuracy, and therefore a more accurate tracing of the filament axes. §.§.§ Connectivity Next, we can also inspect how the inclusion of lower-mass galaxies affects connectivity, as shown in Fig. <ref>. For haloes that are less massive than Virgo, we do not find significant differences between the DMFS and all FSs that are extracted using galaxies less massive than 10^10 M_⊙: one or no filament typically crosses such haloes. The probability for a halo to be crossed by a filament increases with halo mass, and reaches 1 for haloes with mass 3 · 10^14 M_⊙ (10^14.48 M_⊙). The increase in connectivity for larger halo masses is not significant when considering the filamentary structure identified using only the most massive galaxies, confirming the visual impression of Fig. <ref>. Therefore, massive galaxies trace groups and/or clusters well, but not filaments. Evaluating the connectivity of massive haloes from the distribution of only massive galaxies would therefore underestimate it. Fig. <ref> shows that the inclusion of lower-mass galaxies increases the values of the connectivity for all haloes more massive than ∼ 10^14.4 M_⊙, bringing it closer to the connectivity that is estimated from the DMFS. We note, however, that this does not significantly improve the spatial coincidence of the filaments, as discussed in the previous section. §.§ Centrals and satellites as tracers Model galaxies in the GAEA simulation are classified into three types: centrals, satellites, and orphans. Centrals and satellites are associated with distinct dark matter subhaloes (centrals are associated with the `main halo', i.e., the bound part, of a friend-of-friend group), while orphan galaxies correspond to those whose parent subhalo has been stripped below the resolution limit of the simulation. The model assumes that, being more concentrated than dark matter, galaxies can survive longer, and assigns them a residual merging time that is parametrized using a variation of the Chandrasekhar dynamical friction formula <cit.>. In GAEA, positions and velocities of orphan galaxies are traced by following the most bound particles of the subhaloes at the last time they could be identified. As mentioned above, the naive expectation is that a better overlap with the DMFS can be obtained when considering only centrals and satellites, as they are associated with dense regions of the DM distribution. Moreover, it should be noted that orphan galaxies represent 26±2% of the total number of galaxies in our simulated cubes, so that uncertainties in the positions of orphan galaxies could significantly affect the identification of filaments. To quantify the expectations just discussed, and the impact of orphan galaxies on the filamentary structure extracted, we consider two additional runs of DisPerSE obtained by using only central and satellite galaxies, or only centrals (with M_*^cen > 10^9 M_⊙). As in all the other runs, the DisPerSE parameters are set by imposing that the total filament length is close to that obtained by considering the GFS based on the sample including all galaxies. Fig.
<ref> shows an example of the 3D distribution and of the filamentary structure obtained when considering these different samples. §.§.§ Influence of `orphan' galaxies Orphan galaxies tend to be found in large numbers in massive haloes, as can be appreciated by comparing the second and third columns of Fig. <ref>. In general, we do not find significant differences in the filamentary structure identified when excluding orphan galaxies. This means that the central galaxies and their satellites (those associated with a distinct subhalo) identify filaments quite well: the coincidence value estimated in the previous sections (59± 6 percent) reduces to 55± 8 percent when we exclude orphans. In Fig. <ref> we quantify the overlap between the GFS and the DMFS when including/excluding orphans. The GFS obtained considering centrals+satellites traces the DMFS as well as the GFS obtained by including all model galaxies. The overall small impact of orphan galaxies on the FS identified is confirmed by the very small changes obtained for the connectivity value, as shown in Fig. <ref>. Only around the most massive haloes, where orphans become more important, does one find larger variations (still not very significant within the scatter) of the connectivity. Given that the FS is not significantly modified when the orphan galaxies are removed, we can conclude that centrals and satellites trace the distribution of dark matter as well as all galaxies do. Additionally, the uncertainty on the positions of orphan galaxies does not affect the extraction of filaments, even though orphans represent about one quarter of the total galaxy sample. We attribute this to the fact that orphan galaxies are not randomly distributed, but over-abundant in dense regions like filaments and clusters. §.§.§ Centrals as tracers Figures <ref> and <ref> (first and third columns) show that centrals can trace the filamentary structures outside the virial radius of massive haloes, but cannot trace (by construction) how filaments extend within the virial radius of haloes. Therefore, they cannot be used to define the connectivity. We note that dark matter filaments contain enough central galaxies to be identified as filaments in (70 Mpc/h)^3 volumes. The formal coincidence-by-distance metric is shown in Fig. <ref>. Around 58±7% of the GFS (centrals) segments can be found near (closer than 2 Mpc/h) the segments of the DMFS. Despite the fact that the absolute value of the coincidence has not changed significantly, we note a worsening of the tracing of the DMFS axis: the peak of the distance distribution is shifted from 0.4 Mpc/h to 1 Mpc/h. § SUMMARY AND DISCUSSION In this work, we have applied the DisPerSE code to the outputs of the GAEA semi-analytic model to investigate how the filaments extracted from the distribution of galaxies match the filament structure obtained using dark matter particles. This analysis is relevant for interpreting observational studies that do not have access to the distribution of dark matter. To investigate how a biased galaxy distribution affects the identification of filaments at z=0, we used 27 simulated volumes: 9 are centred around Virgo-like haloes, 9 around Coma-like haloes, and 9 correspond to regions of `average' density, which have been randomly selected from a much larger simulated volume. For each of these cubes, we extracted the FS separately for the dark matter distribution and the galaxies.
We fine-tuned the parameters of DisPerSE to obtain the same total length of the FS, and cleaned the final structures by excluding the 10% shortest filaments. We introduced several metrics to quantify the overlap between the two structures: filament length, coincidence by distance between skeletons, coincidence by coverage of the DMFS critical points, and connectivity. Overall, DM filaments are systematically longer than galaxy filaments. The analysis of the coincidence showed that around 60% of dark matter filaments have a corresponding filament in the structure identified by the distribution of galaxies with stellar mass M_∗ > 10^9 M_⊙. This result depends on the number of massive (M_h > 10^14 M_⊙) haloes in the volume considered: for volumes containing less than 5 massive haloes, the overlap between the FSs traced by dark matter particles and galaxies gets worse. We have also shown that the coincidence is not significantly affected by the positional uncertainties of orphan galaxies (about a quarter of the galaxy sample), nor by using only central galaxies. Furthermore, the coincidence between the DMFS and GFS cannot be improved by including low-mass galaxies (M_∗ < 10^9 M_⊙) or by using only massive (M_∗ > 10^10 M_⊙) galaxies. To further support this result, we measured the distance from galaxies of different mass to the DMFS (Fig. <ref>). More than 70 percent of galaxies with M_∗>10^11 M_⊙ are at a distance of less than 2 Mpc/h from the DMFS (i.e. they belong to the filaments). In contrast, only 22 percent of galaxies with 10^7 M_⊙ < M_∗ < 10^8 M_⊙ (and 36% with 10^8 M_⊙ < M_∗ < 10^9 M_⊙) are that close to filaments. Hence, it appears that massive galaxies are preferentially found not only in clusters and groups, but also in filaments, at least in the local Universe, in agreement with previous results <cit.>. In contrast, low-mass galaxies are located not only near the filaments, but also in regions with lower average density, introducing noise in the filament extraction. These results should be interpreted with caution, given that the galaxy catalogues we use in our study become incomplete for galaxy masses lower than M_∗∼ 10^9 M_⊙. Considering connectivity, we found that overall 65% of all haloes more massive than 10^13.5 M_⊙ are crossed by at least one filament, regardless of the adopted tracer and of the galaxy stellar mass cut adopted to extract filaments. The connectivity, though, depends on the mass of the halo: the more massive the structure, the larger the number of filaments crossing its virial radius. One important result of our analysis is that, regardless of the galaxy stellar mass threshold adopted, about 41% (Sec. <ref>) of the dark matter filaments do not contain enough galaxies to be detected when using galaxies as tracers. An example of this case is shown in panel A of Fig. <ref>. One possible explanation is that there was not enough gas in these filaments to form enough galaxies of a given mass to allow the detection of a galaxy filament, given our adopted parameters in DisPerSE (Sec. <ref>). Another possible explanation can be related to the filament extraction itself. For instance, lowering the adopted thresholds helps to detect some of these `missing' filaments (panels B and C of Fig. <ref>), as well as to detect additional `faint' filaments. Nevertheless, even adopting a low threshold (panel C), some dark matter filaments do not have counterparts in the GFS (and vice versa). We see this as a natural reflection of the existence of galaxy bias.
We refer to Appendix <ref> for a more in-depth analysis of the influence of the adopted thresholds on the skeleton determination in 3D. Thus, Fig. <ref> shows that `missing' filaments are a consequence of both physical causes and `operational' problems related to the definition and identification of filaments. The contribution of each cause cannot be reduced by fine-tuning the other. We note that the FS also depends, to some extent, on the properties of the galaxy sample used as tracers (e.g. on the galaxy stellar mass limit adopted). This is expected, given that galaxy bias depends on galaxy properties <cit.>. Our conclusions are in agreement with the work by <cit.>, who compare the COSMOS2015 <cit.> galaxy sample (photometric and spectroscopic data) and the hydrodynamic simulation HORIZON-AGN to investigate the difference between skeletons obtained from spectroscopic data, photometric data and the corresponding dark matter data from HORIZON-AGN. §.§ Virgo- and Coma-like clusters Our selection of volumes centred on Virgo- and Coma-like haloes is motivated by the fact that there are good observational datasets available for these clusters in our local Universe. While we defer to a future work a more careful comparison between available data and predictions from the theoretical model that we have used in our study, we have carried out some basic comparisons with published results. The size of our cubes (70 Mpc/h ≫ R_vir∼ 2 Mpc/h) allows us to explore not only the region around these clusters, but the entire filamentary structure that is expected to flow into these clusters. As discussed in Sec. <ref>, we do not find a better coincidence between the DMFS and GFS in cubes containing Virgo- or Coma-like haloes. In general, this is due to the fact that the volume considered is 30-35 times larger than the size of the haloes. However, on smaller scales (for example, inside cubes with a side of 14 Mpc/h, as in Fig. <ref>), the coincidence of the DMFS and GFS (M_∗ > 10^9 M_⊙) near Virgo- and Coma-like haloes improves to 73±10% and 78±7%, respectively. Accounting for low-mass galaxies (M_∗ < 10^9 M_⊙) does not significantly affect the results. Thus, the recovery of the cosmic web from galaxies around massive haloes is better than for regions of more average density in the Universe. With our adopted parameters and in the typical regions we have considered, we find on average 3.3 (from 1 to 7) dark matter filaments for Coma-like haloes and 2.8 for Virgo-like haloes (between 1 and 4). When considering the corresponding FS traced by galaxies with M_∗ > 10^9 M_⊙, we find 2.6 filaments for Coma-like haloes (between 1 and 4) and 1.9 filaments for Virgo-like haloes (between 1 and 3). Our results are consistent with an analysis based on SDSS7 data in a region around the Coma cluster of size similar to that of the volumes we have considered: <cit.> estimate a median connectivity for the Coma cluster of about 2.5. On average, approximately one dark matter filament flowing into clusters like Virgo and Coma cannot be traced by the distribution of galaxies more massive than 10^9 M_⊙. The results we obtain for Virgo- and Coma-like haloes depend significantly on the criteria adopted to detect filaments: we have chosen a large scale and a high level of persistence, which leaves us with only the `strongest' (highest contrast) filaments. This should be taken into account when comparing with other studies. For example, <cit.> found 13 filaments around the Virgo cluster.
By lowering the threshold we adopt for filament identification, as for example in panel C (Fig. <ref>), a larger number of small filaments can be found around Virgo-like clusters, bringing our results closer to those by <cit.>. To summarize, the work presented in this paper shows that, while the filament extraction strongly depends on the tracers used and on the adopted parameters, the `strongest' filaments can be robustly identified. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. § ACKNOWLEDGMENT We thank the referee for the comments, which helped to improve the presentation of the results. We are indebted to Volker Springel for making the snapshots of the Millennium Simulation available to us. DZ and BV acknowledge support from the INAF Mini Grant 2022 “Tracing filaments through cosmic time” (PI Vulcani). The authors thank the International Space Science Institute (ISSI) in Bern (Switzerland) for its hospitality. Regular group meetings at this institute allowed the authors to make substantial progress on the project and finalize the present work. MH acknowledges funding from the Swiss National Science Foundation (SNF) via a PRIMA Grant PR00P2 193577 “From cosmic dawn to high noon: the role of black holes for young galaxies”. § THE EFFECT OF CHOOSING DIFFERENT PERSISTENCE THRESHOLDS IN DISPERSE As discussed in the Introduction, filament identification is strongly linked to the adopted method and assumptions. The level of detail in the resulting structure and the smallest filaments that are resolved can vary significantly from work to work. In DisPerSE, which is the tool used in this work, it is possible to tune parameters like the persistence threshold: all structures below the adopted value are not taken into account for calculating the FS. In this appendix we explore how the results presented in the paper depend on the choice of the threshold, and how the overlap between the DMFS and GFS depends on it. For simplicity, we consider only one cube here and run DisPerSE with lowered persistence thresholds. In the first run, we adopt lower values than those used in the main analysis, and in the second run we lower them further. Specifically, we adopt values of 5 · 10^8 and 3 · 10^8 for the dark matter, and 10^3 and 10 for the galaxies, in the first and second runs, respectively. This choice ensures that we always obtain a DMFS and GFS of approximately the same total length. Results are presented in Fig. <ref>. The DMFS and GFS used in our work are shown in the left column (corresponding to panel A in Fig. <ref>). The central (corresponding to panel B) and right (panel C) columns show the consequences of lowering the persistence thresholds from 10^9 to 5 · 10^8 (3 · 10^8) for the DMFS and from 10^4 to 10^3 (10) for the GFS. When decreasing the threshold level, new filaments appear in both the DMFS and GFS: as shown in Fig. <ref>, some dark matter filaments acquire counterparts in the GFS (and vice versa). As a consequence, the coincidence by distance between skeletons also improves (Fig. <ref>) and goes from ∼60% (Sec. <ref>) up to 76% and 79% for panels B and C of Fig. <ref>, respectively. However, we note that about 20 percent of the dark matter filaments still do not have a GFS counterpart (within a radius of 2 Mpc/h) for each of the selected thresholds.
We associate the increase in coincidence not only with the actual increase in the matching of the two skeletons, but also with the fact that, by construction, two larger structures coincide more strongly than smaller structures. To sum up, simply lowering the threshold does not lead to a guaranteed coincidence of the skeletons, but rather adds a lot of new elements, some of which intersect. § SUBSAMPLING EFFECTS The number of tracers in the DM and galaxy cubes differs by a factor of ∼1000 for each extracted cube. In this Appendix, we briefly discuss the impact of having different sample sizes, by artificially reducing the number of DM particles in a cube. We have checked that the results presented below are valid when considering any of the 27 cubes we have at our disposal. From a cube containing ∼3 · 10^7 DM particles, we randomly extracted 6 different subsamples, each with a different number of particles: 10^7, 5· 10^6, 10^6, 10^5, 2· 10^4 and 10^4. The last two cases mimic the typical number of galaxies with M_∗ > 10^9 M_⊙ over a similar region. For each subsample, we extracted the DMFS, as explained in Sec. <ref>, fine-tuning the parameters to obtain a similar filament length in all realisations. Fig. <ref> shows 5 of the 7 extracted FS[We excluded two FS (5· 10^6 and 2· 10^4) for the sake of clarity in the plot.]. By visually comparing the different extractions, we find that in all cases the main features of the filament structure are captured, even though some differences emerge. We attribute some of these to the changes in topology due to the decrease in the number of particles used to obtain the FS. To quantify the differences, we estimate the coincidence between skeletons by distance, as done in Sec. <ref>. We use the original cube as reference and plot the distribution of the distance between the different skeletons in Fig. <ref>. If the entire original skeleton were fully recovered, we would obtain a unimodal peaked distribution, whose width would represent minor discrepancies in the position of the skeleton. In contrast, the distribution is bimodal, highlighting how some of the DM filaments do not have sufficient density, when decreasing the number of particles, to be detected. We also note the shift of the first peak to the right with decreasing number of particles in the cube. This means that the samples with a lower number of particles reproduce the filament axis less accurately. The typical distance between the coinciding filaments (the position of the first peak) of the full DMFS and the DMFS extracted using 10^7 particles differs by 0.1 Mpc/h, while the difference between the full DMFS and the 10^4 DMFS is 0.5 Mpc/h. All the other peaks are bracketed between these two values. We find that using a different number of particles does not affect the average number of filaments detected: the number varies from 10 to 19, with a mean value of 15. We also measured the median filament length in each cube, and contrasted it with the number of DM particles in each cube (or, likewise, with the number density of the particles in each cube) in Fig. <ref>. Overall, the relation is rather flat, suggesting that the subsampling does not have an effect on the length of individual filaments.
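A minimal sketch of the subsampling step is given below; the particle file name is a placeholder, and each subsample would then be passed through the same DisPerSE pipeline sketched in Sect. 2.2, with the persistence cut re-tuned so that the total skeleton length matches that of the full-resolution DMFS.

import numpy as np

rng = np.random.default_rng(42)
dm_pos = np.load("dm_particles_cube01.npy")              # (N, 3) positions, with N ~ 3e7

subsample_sizes = [10**7, 5 * 10**6, 10**6, 10**5, 2 * 10**4, 10**4]
subsamples = {}
for n in subsample_sizes:
    idx = rng.choice(len(dm_pos), size=n, replace=False)  # random subsampling without replacement
    subsamples[n] = dm_pos[idx]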
Social inequalities that matter for contact patterns, vaccination, and the spread of epidemics Adriana Manna^1, Júlia Koltai^2,3,4, Márton Karsai^1,4,5* ^1Central European University, Quellenstraße 51, 1100 Vienna, Austria ^2Computational Social Science - Research Center for Educational and Network Studies, Centre for Social Sciences, Tóth Kálmán utca 4, Budapest, 1097, Hungary ^3Department of Social Research Methodology, Faculty of Social Sciences, Eötvös Loránd University, Pázmány Péter sétány 1/A, Budapest, 1117, Hungary. ^4National Laboratory for Health Security, Hungary. ^5Rényi Institute of Mathematics, Reáltanoda utca 13-15, Budapest, 1053, Hungary. ^*Corresponding author: [email protected] Individuals' socio-demographic and economic characteristics crucially shape the spread of an epidemic by largely determining the level of exposure to the virus and the severity of the disease for those who get infected. While the complex interplay between individual characteristics and epidemic dynamics is widely recognized, traditional mathematical models often overlook these factors. In this study, we examine two important aspects of human behavior relevant to epidemics: contact patterns and vaccination uptake. Using data collected during the Covid-19 pandemic in Hungary, we first identify the dimensions along which individuals exhibit the greatest variation in their contact patterns and vaccination attitudes. We find that, in general, privileged groups of the population have a higher number of contacts and a higher vaccination uptake than disadvantaged groups. Subsequently, we propose a data-driven epidemiological model that incorporates these behavioral differences. Finally, we apply our model to analyze the fourth wave of Covid-19 in Hungary, providing valuable insights into real-world scenarios. By bridging the gap between individual characteristics and epidemic spread, our research contributes to a more comprehensive understanding of disease dynamics and informs effective public health strategies. keywords: social inequalities, epidemics, human behaviour, mathematical models, contact matrices § INTRODUCTION Individuals' socio-demographic and economic characteristics are among the most significant factors that shape the dynamics of epidemic spreading processes. They not only influence the epidemic outcome in the host population but largely determine the course and severity of the disease for those who get infected <cit.>. There is widespread agreement that pandemics disproportionately affect certain population groups more than others <cit.>. Health-related inequalities in the burden of an epidemic partly arise from differences in the level of exposure to viruses and bacteria.
These are related to differences in social interactions, mobility patterns and work-related conditions, which are aggravated by disparities in the ability to be compliant with non-pharmaceutical interventions (NPIs), such as self-isolation, home-office and avoiding crowded places <cit.>. At the same time, inequalities in the severity and fatality of a disease can be accounted for the heterogeneity in preexisting individual health conditions, protection attitudes and access to medical care, which are themselves related to socio-demographic and economic factors <cit.>. Although it is widely recognized that socioeconomic inequalities play a crucial role in the transmission dynamics of diseases, traditional mathematical approaches have often overlooked these factors. Indeed, the state-of-the-art framework of modelling infectious diseases incorporates stratification of the population according to age groups <cit.>, while discarding other potential relevant heterogeneities between groups of individuals belonging to different socioeconomic strata. They commonly ignore the mechanisms through which these heterogeneities come into play - both directly and indirectly, in the different phases of an epidemic process. In traditional epidemiological models, contact patterns are usually represented in the aggregate form of an age contact matrix (C_ij), which encodes information on the average number of contacts that individuals of different age groups have with each other <cit.>. Moreover, not only the description of contact patterns is limited to an age structure but also other epidemiological-relevant factors, such as vaccination uptake, infection fatality rates <cit.> or susceptibility <cit.> are usually described only by considering differences between different age groups. While age is unarguably one of the most important determinant of these characters, the current literature falls short to understand the role of other social, demographics, and economic factors in shaping human behaviour that are relevant to the epidemic spreading. In recent years, researchers have advocated including social aspects in infectious disease modelling, arguing that the epidemic modelling community is lacking a deep understanding of the mechanisms through which the socioeconomic divide translates into heterogeneities in the spread of infectious diseases <cit.>. With this in mind, we aim to shed light on these mechanisms to address the following interrogatives: which are the most important individual characters and corresponding sub groups of the population that differentiate the most their epidemic-relevant behaviours; and, how these differences translate into epidemiological outcomes? We address these questions by analyzing a large survey dataset coming from the MASZK study carried out in Hungary during the Covid-19 pandemic <cit.>. This data collects information on individuals' face-to-face interaction patterns in different contexts and other epidemiological-related behavioural patterns and opinions such as travel habits, vaccination attitude, or mask-wearing. The MASZK study consists of 26 cross-sectional representative surveys carried out in each month between 2020/04 and 2022/06 (for more details on the data see MM). By considering the course of the pandemic in the country, we aggregate the data in six periods covering four epidemic waves (Ws) and two interim periods (IPs) as demonstrated in Fig. <ref>a. 
Throughout this study we are mainly interested in the dynamics and most influential determinants of social contacts, which were recorded in the data as reported proxy interactions between pairs of individuals who spent at least 15 minutes within 2 meters of each other on a given day. Outside of home, we distinguish between two contexts where social interactions may evolve. We differentiate between work contacts, which emerge at the workplace (or at school) of respondents (or their minors), and community contacts, which evolve elsewhere than at home or work. Meanwhile, we do not take into account household contacts in our study, as we assume they do not change significantly during the different phases of the pandemic. Through the analysis of contact patterns, our aim is to show significant differences among sub-groups with different socio-demographic characteristics, when accounting for the effect of age. Particularly, we demonstrate that dimensions such as employment situation and education level play a crucial role in determining contact numbers and vaccination uptake during a pandemic. Additionally, by proposing a new data-driven mathematical framework, which explicitly considers further social dimensions other than age, we analyze the impact of such differences in terms of epidemic outcomes. Finally, by focusing on the Hungarian Covid-19 pandemic scenario, we reveal the unequal impact of the pandemic on individuals belonging to different socio-economic strata, where we differentiate individuals by their employment situation and income level. Note that although all the models have been completed for each pandemic period, for the demonstration of our findings we exclusively show results for the 4th wave in the main text, while reporting our findings concerning other periods in the Supplementary Information. § RESULTS §.§ The main determinants of human contact patterns Human contact patterns represent the routes of infectious disease spreading by shaping the underlying transmission chain among susceptible individuals. During the Covid-19 pandemic, many aspects of human behaviour have experienced drastic interruptions in most countries worldwide. This was largely due to the implementation of non-pharmaceutical interventions (NPIs) that were introduced to mitigate the spreading and other effects of the pandemic. They aimed at controlling the number of contacts, as well as influencing individual attitudes, to change the ways humans meet and interact with each other <cit.>. Their effects are evident from Fig. <ref>a, where the average number of daily contacts in Hungary is shown. These numbers increase during interim periods (IPs), when the numbers of daily infection cases are low, and decrease during the epidemic waves (Ws), when infection risk is high, this way sensitively reflecting the adaptive behaviour of people throughout the pandemic. Although at the aggregate level these patterns are clear, there are non-trivial disparities at the level of individuals that may result in diverse contact patterns for given sub-groups of the population. To explore these effects, in our statistical analysis we focus on several socioeconomic dimensions that, interacting with age, may significantly affect the number of contacts that individuals have. We consider various socio-demographic variables such as individuals' education, employment, income, gender, settlement type, actual chronic or acute disease, or smoking habits (for more details and definitions see MM and SI).
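The exact specification of this interaction analysis is given in MM; purely as an illustration of how such a screening could be set up, the sketch below fits a count regression of the number of contacts on age interacted with one candidate dimension, to be repeated for each variable in turn. This is a generic example, not the authors' actual model, and the file and column names are hypothetical.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical microdata: one row per respondent, `n_contacts` = reported daily contacts.
df = pd.read_csv("maszk_period.csv")

# Count regression with an age interaction for one candidate dimension (education);
# the same specification would be fitted for each socio-demographic variable in turn.
model = smf.glm("n_contacts ~ C(age_group) * C(education)",
                data=df,
                family=sm.families.NegativeBinomial()).fit()
print(model.summary())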
As a first observation, in Fig. <ref>b we show the distribution of the maximum confidence level of the effects of these variables on the number of contacts, in interaction with age, during each period (for the definition see MM). In these distributions, a higher value indicates that a given variable better explains differences in the number of contacts, given the age of individuals. Based on these results, employment, education and income level were found to be the three most important dimensions in determining the number of contacts. This observation stands if we consider the overall number of contacts, including both work and community relationships, and it holds as well if we only consider community contacts (with results shown in the SI). To further investigate the ways individuals with different characteristics adapt their number of contacts to the actual epidemiological situation, in Fig <ref> we show the average number of contacts over time decoupled by education level (Fig <ref> a,c) and employment situation (Fig <ref> b,d) for adult individuals older than 15 years old (for the corresponding plot decoupled by age groups see SI). Results in panel (a) suggest that high- and mid-educated individuals have a consistently higher number of contacts in the community layer throughout the observed period. In addition, these groups are able to better adapt to the epidemiological situation and NPIs by decreasing their contact numbers during epidemic waves and increasing them again during interim periods. At the same time, low-educated individuals maintain a lower number of contacts over time with smaller variation, reflecting their limited social environment and adaptation capacities. Looking at the contact dynamics at workplaces, it is evident that only highly educated individuals were able to adapt to the epidemiological situation, while the low- and mid-educated people presented similar dynamics and had less flexibility to adapt during the different pandemic periods. Interestingly, mid-educated individuals reported a higher number of contacts at work, particularly during the 2nd and the 3rd epidemic waves. When we group people by their employment situation, it makes sense to compare groups in the community layer. From Fig <ref>b it is evident that employed people maintain more contacts even outside of their workplace as compared to not-employed individuals, which is a clear sign of behavioural differences between these two groups. Meanwhile, employed individuals follow somewhat similar contact dynamics as high-educated people (see Fig. <ref>d), signalling some correlation between these two groups. From an epidemic modelling perspective, the most convenient way to code interaction patterns between different groups is via contact matrices that represent a social network at an aggregate level. Contact matrices allow models to depart from the homogeneous mixing assumption, i.e., the assumption that all individuals meet each other with the same probability. Instead, they allow the introduction of non-homogeneous mixing patterns between different groups, while keeping the model computationally more feasible as compared to contact-network-based simulations. Conventionally, epidemic models incorporate C_i,j age contact matrices that code the average number of contacts between people from different age groups (for a formal definition see MM). Nevertheless, age contact matrices can be further stratified by other socio-demographic characteristics that influence the contact numbers of individuals.
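A minimal sketch of how the C_i,j matrix and a decoupled counterpart could be estimated from survey microdata is shown below. The file and column names are hypothetical, and survey weights and the reciprocity correction that are normally applied to contact matrices are ignored here for brevity.

import numpy as np
import pandas as pd

# Hypothetical survey microdata: one row per respondent, with an age group, a
# socio-demographic group label and the reported number of contacts per
# contacted age group (columns c_0, ..., c_{K-1}).
df = pd.read_csv("maszk_wave4.csv")
age_groups = sorted(df["age_group"].unique())
K = len(age_groups)
contact_cols = [f"c_{j}" for j in range(K)]

def contact_matrix(sub):
    """C[i, j]: average number of daily contacts a respondent in age group i
    reports with members of age group j."""
    C = np.zeros((K, K))
    for i, a in enumerate(age_groups):
        C[i, :] = sub.loc[sub["age_group"] == a, contact_cols].mean().values
    return C

C_age = contact_matrix(df)                                 # conventional C_{i,j}
C_dec = {d: contact_matrix(df[df["education"] == d])       # decoupled C_{d,i,j}
         for d in df["education"].unique()}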
In Fig <ref>e-i we show the age contact matrices decoupled by education level and employment situation (C_d̅,i,j) for the 4th epidemic wave for the adult population (see MM for more details and SI for the corresponding matrices including children). These matrices have been computed by considering community, work and household contacts together. The emerging large differences between these matrices demonstrate clearly that, beyond age, the identified variables, i.e. education and employment status, induce significant differences in the contact patterns of people. Although these variables may not be independent of the age of people, the observed distinct patterns suggest more complex mechanisms controlling contact patterns among sub-groups than can be explained by age alone. §.§.§ Beyond age stratification We demonstrated that social inequalities significantly influence human contact patterns, thereby shaping the network of proxy social interactions. This is critically important for the propagation of diseases, as these interactions determine the transmission chain of an infection spreading among a susceptible population. Consequently, incorporating the contact pattern differences among individuals of different socioeconomic background into epidemiological models is crucial. This could help to understand the unequal spread and the uneven burden that an epidemic could impose on the different socio-demographic groups of a society. To this end, we propose a simple mathematical framework based on the extension of a conventional age-structured SEIR compartmental model <cit.>, which we call the extended SEIR model. The conventional SEIR model assumes that each individual in a population is in one of the mutually exclusive states of Susceptible (S), Exposed (E), Infected (I) or Recovered (R). Transitions of an individual between these states are controlled by rates (S→E, E→I, I→R), with the S→E rate λ influenced by the frequencies of interactions between age groups coded in a C_i,j age-contact matrix. The proposed extended model instead incorporates C_d̅,i,j age contact matrices, which are decoupled along important socio-demographic dimensions d̅ to model epidemic spreading in different sub-groups of the population (see MM for further details). Particularly, we analyze the impact of decoupled age-contact matrices along four dimensions: employment situation, education level, settlement, and income level (for exact definitions and possible variable values see MM). Taking the decoupled contact matrices as input, we simulate the spread of an infectious disease among an entirely susceptible population using both the conventional SEIR and the extended SEIR models. Having fixed the epidemiological parameters, such as the transmission rates and the seeding strategy, the other input parameters, like the population distributions and contact matrices, have been estimated from data, as we explain in the SI in more detail. The proposed model allows us to investigate how differences in contact patterns along diverse social groups translate into an unequal burden of the epidemic. To quantify these differences in the epidemic outcome, we measure the attack rate, defined as the fraction of individuals from a given group who contracted the infection, normalised by the size of that group. To follow the distribution of the people along the investigated dimensions, we show their population fractions in the different age groups in Fig. <ref>a-d. 
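For concreteness, with the compartmental notation introduced in the Materials and Methods, the attack rate of age group i and sub-group d̅ can be written as the cumulative fraction of that group that has left the susceptible compartment by the end of the simulation (our notation, since the definition above is given only in words): AR_d̅,i = (N_d̅,i - S_d̅,i(t_end)) / N_d̅,i , where N_d̅,i is the size of the group and t_end is the final simulated time; the population-level attack rate is the corresponding ratio computed on the sums over all groups.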
Meanwhile, in Fig <ref>e-h we depict the attack rates calculated using the extended SEIR models for different age and socio-demographic groups (and as reference only for age - see grey solid lines). Results are shown for the cases when we decouple each age-group along the four dimensions analyzed. As anticipated by the statistical analysis, employment, and education produce the largest differences between groups in terms of attack rate by age. Interestingly, the group of employed people happened to be the most infected group in all age groups, while mid and high educated individuals are more infected among those, who are 45-60 years old. When decoupling age contact matrices by settlement and income, although differences appear smaller between groups, high-income individuals and the ones living in the capital are more infected, particularly elderly ones with age 60+. These modelling results suggest the interesting overall conclusion that privileged people of the population report a higher attack rate, thus they are typically the most infected group relative to their population size. These results also demonstrate that the extended SEIR model is able to capture differences introduced by the considered socio-demographic variable and, in this way, to model the epidemic impact on the different groups of the population. These differences are also visible at the population level. In Fig <ref>i we show the differences between the attack rates predicted by the conventional and theextended SEIR models, for each age group and overall too. It is evident from these results that models using contacts only stratified by age may underestimate (negative difference) the size of the epidemic in different age groups or in the whole population. For example, our simulations based on data from the 4th wave demonstrate that the conventional SEIR model could predict higher attack rates for each age group with respect to the extended SEIR, which considers differences in contact numbers along the employment situation or the education level too. It is important to highlight that the uneven age distribution within the different subgroups sometimes reduces or annul the effect of difference in the contact patterns when we are computing aggregate quantities at the population level. This explains why, even if there is a significant difference in contact patterns, the difference in the attack rates only spans a small range between the two models. §.§ Vaccination uptake and contact number differences Beyond the crucial role played by the network of face-to-face interactions, individual vaccination attitudes may also substantially affect epidemiological outcomes by decreasing the morbidity and mortality. By applying the same pipeline of statistical analysis as we explained above, we identified the dimensions along which individuals made different decisions in terms of vaccination, given their age. In this case, the interaction with age is particularly important given that the Covid-19 immunization strategy implemented in Hungary followed an age-stratified outreach by prioritizing elderly individuals <cit.>. Interestingly, the statistical analysis in this case indicated income as the most important dimension along which individuals made different vaccination decisions (See SI for the results of the statistical analysis). Fig <ref>a-d show the percentage of vaccinated individuals by age and the investigated dimensions, during the 4th wave of the pandemic. 
Although by the 4th epidemic wave the vaccination uptake had saturated in Hungary, the effects of the age-dependent vaccination policy are clearly visible. More strikingly, we find that privileged groups of the population were more likely to get vaccinated against the Covid-19 virus. Convincingly, this observation is valid for all age groups and periods considered in the analysis (see SI for the corresponding figures for the other periods). To account for these observations, we model the vaccination uptake in the extended SEIR framework. More precisely, we define the probability of getting vaccinated (i.e. immune, or recovered from the point of view of the infection) to be dependent in this case on both the age and the subgroup of the population considered. Using this extended SEIR model, we are able to compare the effects of vaccination uptake, while keeping the structure of contacts fixed. Fig <ref>e-h shows the averted attack rate due to vaccination with respect to the non-vaccination scenario. We consider the probability of getting vaccinated along the four different investigated dimensions separately. In all of these scenarios the gain in averted infections is strongly dependent on the subgroup membership. As expected, the groups with higher vaccination uptake are the ones that reduce their attack rate the most in the vaccination scenario. However, this pattern is not linear. For example, among individuals aged 60+, although the not-employed people report a higher vaccination uptake, they are the ones that gain the least in terms of averted infections. This is because these individuals, having a low number of contacts, are already protected from exposure to the virus and thus gain less from vaccination. It should be noted that our focus is solely on the number of infected individuals. It is important to acknowledge that alternative conclusions may arise if the number of averted deaths is taken into consideration. In any case, these results clearly show that, although several other factors affect the outcome of an epidemic, by neglecting differences in vaccination uptake and the effects of vaccination campaigns among subgroups in modelling, we miss an important determinant which significantly influences the final outcome of an epidemic. §.§ Stratified modelling of the Hungarian scenario To provide an example of how the proposed mathematical framework can be applied to a real case scenario, we model the 4th Covid-19 wave in Hungary between 09/2021 and 01/2022. As the statistical analysis showed that employment and income are the dimensions along which, respectively, contact patterns and vaccination uptake change the most, here we divide the population into subgroups by considering simultaneously these two additional dimensions other than age. In addition, we introduce a new compartment D to our SEIR model, which represents a dead state that infected individuals may enter with a given transition rate (I→D). To simulate the SEIRD model for the 4th Covid-19 wave in Hungary, we calibrate our model using the Approximate Bayesian Computation (ABC) method  <cit.> on the total number of daily deaths from 09/2021 to 01/2022 <cit.>. Details about the fitting method and the calibrated results are summarised in the SI. The results of the simulated model are presented in Figure <ref>, which shows the daily fraction of newly infected (panels (a)-(c)) and newly dead (panels (d)-(f)) cases for different employment, income, and age groups. 
As expected, these curves suggest that the group of employed people contracted the infection at the highest rate compared to not-employed individuals. At the same time, in terms of socioeconomic status and age, more affluent and younger people got infected more during the simulated epidemic wave. On the other hand, strikingly, the contrary trend is suggested in terms of mortality rates. From the simulations we found that, although not-employed, low-income and older individuals appeared with the lowest infection rates, they suffered the highest mortality rate compared to other groups. Considering that the fatality rate of infected individuals depends on their age, these observations can be largely attributed to the fact that not-employed and low-income individuals are also the oldest ones in the population. To further explore the correlation between these dimensions, we examine the attack rates and mortality rates by age group separately in each of the subgroups of the population stratified by income level and employment status (Fig. <ref>g-h). These results confirm an overall decreasing infection rate with age and that not-employed individuals experience the lowest attack rate in each age group. Further stratification of this group by income level reveals a clear pattern, with high-income people exhibiting a higher infection rate compared to mid-income and low-income individuals. In contrast, the infection pattern among income levels of employed individuals is age-dependent. For young employed individuals in age group 15-30, high- and mid-income individuals register a higher share of infections, while among those older than 30, low-income individuals exhibit a higher infection rate compared to mid- and high-income individuals. An exception is represented by the age group 60-70, for which the most infected group is still the high-income one. In terms of mortality we find an increasing trend with age, but otherwise we can conclude a similar pattern. Once we account for the correlations among the dimensions of interest and age, our results show that employed people die at a higher rate, while in terms of income, except for the age groups 15-30 and 60-70, lower-income people suffered more deaths according to our simulations. The extreme decrease of the mortality rate for the employed high-income group is due to data sparsity in the survey data, which records only a few data points in this category. § DISCUSSION There are several particular factors that may determine how an infection would turn out for a given person. Some of them are coded genetically or determined by physiological conditions, but many of them are environmental and correlate with one's socio-demographic characteristics. In any society people show uneven patterns along numerous social, demographic, and economic characteristics, like age, income or employment status. These characteristics not only induce medical disparities between people (as in immunity, overall health conditions, or chronic diseases) but also naturally translate into differences in adaptation capacities and other behavioural patterns, leaving certain groups more exposed to infection. The simultaneous action of all these factors leads to observable inequalities in terms of epidemic burden between different groups at the population level. This study highlights the significant impact that social determinants have on human behaviours that are relevant to epidemic transmission. 
Specifically, exploiting the data of the MASZK study <cit.>, we show that contact patterns and vaccination uptake are influenced by socioeconomic factors. Our findings suggest that contact patterns are shaped by social factors not only in their absolute values but also in the extent to which they fluctuate in response to extraordinary events, such as a lockdown or curfew interventions. Specifically, our statistical analysis shows that socioeconomic factors such as employment situation and education level played a significant role in determining contact numbers and vaccination uptake during the COVID-19 pandemic in Hungary. Additionally, we find that privileged groups tend to have a higher number of a contact and are the ones able to better adapt to the epidemiological situation and NPIs by adjusting their number of contacts. Contrarily, less privileged groups maintain a lower number of contacts over time with smaller oscillations. We also find that privileged groups, such as those with higher education, income, and employment status, were more likely to get vaccinated. We propose a mathematical framework that extends the well-known age-stratified approach to model infectious diseases by explicitly accounting for differences in contact patterns and vaccination uptake for specific sub-groups of the population. This method allows us to better understand the mechanisms underlying the emergence of inequalities in epidemiological outcomes. Results demonstrate that traditional epidemiological models, that only consider age, could overlook crucial heterogeneities along other social and demographic aspects that may impact the spreading of an epidemic. By simulating a pandemic period in Hungary, we reveal the unequal health-related impact of the COVID-19 pandemic along individuals belonging to different socioeconomic groups. Although the higher number of contacts translates into higher attack rates for privileged individuals, the age structure and the vaccination decision of such groups translate into lower mortality rates for these individuals, while disadvantaged groups are the one suffering higher mortality. These results are in line with the empirical findings of <cit.> for the 2nd and 3rd Covid-19 waves in Hungary. Due to the limitation of the survey collection methodology, contact patterns of individuals can be differentiated only by the characteristics of participants. Indeed, the only information we know about the contacted peers is their age, while their other characteristics remain unknown. Thus, our extended SEIR model can only account for age-contact matrices that are decoupled along other social dimension of the participants (ego). In other words, although our model incorporates additional social dimensions, given the sub-group the ego belongs to, it still only considers the average number of contacts stratified by the age group of the contacted (alter). In order to introduce a generalised matrix <cit.> stratified along multiple socio-demographic dimensions of the contactee, we would need information about such dimensions. Such information can be collected via detailed contact diaries <cit.>, which are based on the reports of the respondents about the peers, and commonly suffer from recall biased and other limitations. By shedding light on the complex interplay between social, demographic and economic factors and disease transmission dynamics, our findings underline the need for a new mathematical framework for epidemic modelling that accounts for multidimensional inequalities. 
This would help us to better understand the socially stratified consequences of an epidemic and to highlight non-negligible inequalities between different socio-demographic groups. Additionally, incorporating social factors into epidemiological models will provide a valuable tool to design and evaluate targeted NPIs to cope more efficiently with the spread of an infectious disease. § MATERIALS AND METHODS §.§ Data description The data used in this study comes from the MASZK survey study <cit.>, a large data collection effort on social mixing patterns made during the COVID-19 pandemic. It was carried out in Hungary from April 2020 to July 2022 on a monthly basis. The data was collected via cross-sectional anonymous representative phone surveys using CATI methodology and involved a 1000 large nationally representative sample each month. During the data collection participants were not asked information that could be used for their re-identification. The data collection was fully complying with the actual European and Hungarian privacy data regulations and was approved by the Hungarian National Authority for Data Protection and Freedom of Information <cit.>, and also by the Health Science Council Scientific and Research Ethics Committee (resolution number IV/3073- 1 /2021/EKU). The primary goal of the data collection effort was to follow how people changed their social contact patterns during the different intervention periods of the pandemic. Relevant to this study, the questionnaires recorded information about the proxy social contacts, defined as interactions where the respondent and a peer stayed within 2 meters for more than 15 minutes <cit.> at least one of them not wearing mask. Approximate contact numbers were recorded between the respondents and their peers from different age groups of 0–4, 5–14, 15–29, 30–44, 45–59, 60–69, 70–79, and 80+. Contact number data about underage children were collected by asking legal guardians to estimate daily contact patterns. Beyond information on contacts before and during the pandemic, the MASZK dataset provided us with an extensive set of information on social-demographic characteristics (gender, education level etc.), health condition (chronic and acute illness etc.), financial and working situation (income, employment status, home office etc.), and attitude towards Covid-19 related measures and recommendations (attitude towards vaccination, mask-wearing etc.) of the participants. In order to study different stages of the pandemic, we consider six epidemiological periods including three epidemic waves (Ws) and three interim periods (IPs) (see Fig <ref>a). On the collected data a multi-step, proportionally stratified, probabilistic sampling procedure was elaborated and implemented by the survey research company using a database that contained both landline and mobile phone numbers. The survey response rate was 49 percent, which is expressly higher than the average response rate (being between 15-20 percent) of telephone surveys in Hungary. The sample is representative for the Hungarian population aged 18 or older by gender, age, education and domicile. Sampling errors were corrected using iterative proportional post-stratification weights. After data collection, only the anonymised and hashed data was shared with people involved in the project after signing non-disclosure agreements. 
§.§ Sociodemographic dimensions The sociodemographic dimensions that we analyze are the following: (i) education level, which can have three possible levels: low, mid and high; (ii) employment situation, which can be either employed or not-employed, the latter including students and retired individuals; (iii) income level, which can have three possible levels: low, mid and high; (iv) gender, which refers to the biological gender and can be either female or male; (v) settlement, which refers to the area where individuals live and can be either capital, rural or urban; (vi) chronic disease, a boolean dimension indicating if an individual is affected by any chronic disease; (vii) acute disease, a boolean dimension indicating if an individual is affected by any acute disease; and (viii) smoking, a boolean dimension indicating if an individual is a smoker or not. A detailed explanation of these variables is provided in the SI. §.§ Statistical analysis In order to build an epidemiological model that explicitly takes into account social inequalities, we need to identify the main dimensions that, interacting with age, affect contact patterns the most. To identify these dimensions, we model the expected number of contacts of respondent i using a negative binomial regression <cit.> as defined in Eq. <ref>: μ_i = α + β_1 age_group_i + β_2 X_i + β_3 age_group_i* X_i + ϵ_i, where age_group_i is the age class of i; X_i is the variable of interest (e.g., education, income etc.), age_group_i*X_i is the interaction term of age group and the variable of interest, and ϵ_i is the error term. Given μ_i, we define λ_i=exp(μ_i) to be the expected number of contacts for respondent i. Then we model the reported number of contacts for respondent i, y_i, as y_i ∼ Neg-Bin(λ_i, ϕ), where ϕ∈ [1, ∞) is a shape parameter that is inversely related to over-dispersion: the higher ϕ is estimated to be, the closer y_i’s distribution is to a Poisson distribution with rate parameter λ_i. We build a model <ref> for each variable of interest (X). In particular, the interaction term between age_group_i and the variable of interest allows us to examine whether there are differences in the effect of X_i on the number of contacts in the different age groups. To be able to provide a meaningful description of the interactions, we analyse the average marginal effect (AME) <cit.> of X_i on the number of contacts for different age groups, defined as: [ AME_X_i = 1/n∑_i=1^n∂μ_i/∂ age_group_i; ; ∂μ_i/∂ age_group_i = β_1 + β_3 X_i. ] Working mostly with categorical variables (e.g., education level or employment situation), we can calculate different AMEs for all categories of the categorical variables in each age_group. For each age group, we consider the maximum confidence level at which the AMEs of two categories of the categorical variable show a statistically significant difference. Finally, to summarize the results, we consider the median of these maximum confidence levels. Therefore, we have one value for each variable of interest, which characterises the strength of the interaction between age and the variable of interest on the number of contacts. By following this procedure for each of the variables of interest, we are able to rank the variables according to their importance in driving differences in contact patterns in addition to age, in different periods of the Covid-19 pandemic. The pipeline of these analyses is illustrated in Fig. <ref>. 
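As an illustration of this pipeline, the Python sketch below fits one such interaction model and computes age-group-specific average marginal effects by finite differences between two categories. It is not the authors' code: the column names (n_contacts, age_group, weight) are assumptions made for the example, and statsmodels' GLM is used with a fixed negative binomial dispersion, whereas the analysis described above estimates the shape parameter ϕ from the data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_interaction_model(df: pd.DataFrame, var: str):
    # Negative binomial GLM with an age_group x variable-of-interest interaction,
    # weighted by the post-stratification survey weights.
    model = smf.glm(
        formula=f"n_contacts ~ C(age_group) * C({var})",
        data=df,
        family=sm.families.NegativeBinomial(alpha=1.0),  # dispersion fixed for this sketch
        freq_weights=np.asarray(df["weight"]),
    )
    return model.fit()

def ame_by_age(result, df: pd.DataFrame, var: str, level_a: str, level_b: str):
    # Average marginal effect of switching `var` from level_a to level_b on the
    # expected number of contacts, computed separately within each age group.
    effects = {}
    for age, sub in df.groupby("age_group"):
        counterfactual_a, counterfactual_b = sub.copy(), sub.copy()
        counterfactual_a[var] = level_a
        counterfactual_b[var] = level_b
        effects[age] = float(np.mean(result.predict(counterfactual_b) - result.predict(counterfactual_a)))
    return effects

Repeating this for every variable of interest and comparing the confidence levels at which the group-specific effects differ reproduces, in spirit, the ranking shown in Fig. <ref>b.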
Following the same methodology, we investigate the dimensions that - in interaction with age - most affect the probability of getting vaccinated against Covid-19. In this case, we model this probability using a logistic regression model instead of a negative binomial one, as the dependent variable was binary and not a count. In the Supplementary Information we provide the details of this model and report the corresponding results. §.§ Decoupled contact matrices Conventionally, to compute age-contact matrices C_ij, we divide a population into sub-groups according to their age and calculate the average number of contacts that individuals in age class i have with individuals in age class j <cit.>. Here, instead, we further stratify individuals from each age class i according to various dimensions, like employment status, settlement or education level. In detail, we decouple the conventional age contact matrix C_ij into D matrices, one for each of the sub-groups of the dimension that we want to take into account. More precisely, let d̅ be the sub-group of the dimension considered and let d̅∈{1,..., D}. We can write C_d̅,i,j = CT_d̅,i,j/N_d̅,i, where CT_d̅,i,j is the total number of contacts that individuals of age class i belonging to sub-group d̅ have with individuals in age class j, regardless of the sub-group to which the contacted individuals belong; and N_d̅,i is the total number of individuals in age class i and sub-group d̅. For example, to differentiate between employed and not-employed individuals, we compute two age contact matrices: C_employed,i,j and C_not-employed,i,j. Note that here we consider each of these dimensions (e.g., employment, education level, settlement) separately and only include one dimension in addition to age. However, this framework can be extended to any number of dimensions considered simultaneously; in this case, the length of the d̅ vector will correspond to the number of combinations of the levels of the dimensions considered. §.§ The epidemiological model In order to investigate the effect of the decoupled contact matrices on the dynamics of infectious disease transmission, we propose a simple mathematical framework as an extension of the conventional age-structured SEIRD compartmental model <cit.>. The conventional SEIRD model is defined on a population where individuals are assigned to five compartments based on their actual state: susceptible (S), exposed (E), infected (I), recovered (R) and dead (D). The model further defines the transition rates of individuals from one compartment to another by incorporating, for each age class, a given force of infection, which includes the average number of contacts with all the other age classes. The model proposed here extends this definition by taking into account not only the age structure of the contacts in the population but also their differences along a set of other dimensions d̅, such as education level, income level and employment situation. The model can be described by a set of coupled ordinary differential equations, as presented in Eq. 
<ref>: [ Ṡ_d̅,i = -λ_d̅,i S_d̅,i; Ė_d̅,i = λ_d̅,i S_d̅,i -ϵ E_d̅,i; İ_d̅,i = ϵ E_d̅,i - μ I_d̅,i; Ṙ_d̅,i = μ (1-IFR_i) I_d̅,i; Ḋ_d̅,i = μ IFR_i I_d̅,i. ] Here i indicates the age group of the ego, j indicates the age group of the peer, d̅ represents a vector of dimensions to which the ego belongs, β is the probability of transmission given a contact, ϵ is the rate at which individuals become infectious, μ is the recovery rate, IFR_i is the infection fatality rate, and C_d̅ is the age contact matrix corresponding to dimensions d̅. In this equation system we rely on the concept of force of infection, which is defined as: [ λ_d̅,i(t) = β∑_j C_d̅,i,j/N_j I_j ] Further, we rely on the definition of the infection fatality rate (IFR_i), defined as the fraction of infected individuals who died. See the Supplementary Information for the details on the implementation of the numerical simulations; a schematic numerical implementation of this system is also sketched below, after the list of socio-demographic dimensions. § ACKNOWLEDGMENT The authors gratefully thank Alessandro Vespignani, Eszter Bokáni, Alessia Melegaro and Filippo Trentini for useful discussions. A.M. and M.K. were supported by the Accelnet-Multinet NSF grant. J.K. and M.K. acknowledge funding from the National Laboratory for Health Security (RRF-2.3.1-21-2022-00006). M.K. acknowledges support from the ANR project DATAREDUX (ANR-19-CE46-0008); the SoBigData++ H2020-871042; the EMOMAP CIVICA projects. § SUPPLEMENTARY INFORMATION § DATA PRE-PROCESSING All the analyses on the number of contacts have been performed after removing the outliers above the 99th percentile with respect to the period of interest. All the results presented in this work have been computed by weighting each participant according to their representative survey weight. The weights have been provided by the survey company as described in the MM section of the main text. § SOCIO-DEMOGRAPHIC DIMENSIONS The MASZK dataset provided us with an extensive set of information on the socio-demographic characteristics of the participants. In this section we provide a detailed explanation of all the variables used in this work. * Education level, which can have three possible levels: low, mid and high. We considered as low educated individuals those with any primary school degree, as mid educated those holding any diploma or certificate from professional schools, and as high educated those with a university education or above (e.g. BSc, MSc, PhD). * Employment situation, which can be either employed or not-employed. In the not-employed category we include students and retired individuals. * Income level, which can have three possible levels: low, mid and high. In particular, individuals were asked to report their perceived income with respect to the average using a scale from 1 to 10. We consider as low income those who answered from 1 to 4, as mid income those who answered 5 or 6, and as high income those who answered 7 or above. * Gender, which refers to the biological gender and can be either female or male. * Settlement, which refers to the area where individuals live and can be either capital, rural or urban. * Chronic disease, a boolean dimension indicating if an individual is affected by any chronic disease or not. * Acute disease, a boolean dimension indicating if an individual is affected by any acute disease or not. * Smoking, a boolean dimension indicating if an individual is a smoker or not. We consider as non-smokers individuals who declared themselves as not smoking or who have stopped, while we consider as smokers individuals who smoke frequently or occasionally. 
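To make the extended model concrete, the following Python sketch integrates the system of Eq. <ref> above with simple forward-Euler steps, using decoupled contact matrices indexed by sub-group and age. It is a minimal illustration under stated assumptions rather than the simulation code used in the paper (which is stochastic; see the Epidemic simulations section of this SI): the array shapes, the seeding and the parameter values are placeholders.

import numpy as np

def run_extended_seird(C, N, beta, eps, mu, ifr, E0, I0, days=300, dt=0.5):
    # C: (n_subgroups, n_ages, n_ages) array with the decoupled matrices C_{d,i,j};
    # N: (n_subgroups, n_ages) array of group sizes; ifr: (n_ages,) array of fatality rates;
    # E0, I0: initial seeds, arrays of the same shape as N (or scalars).
    S = N.astype(float) - E0 - I0
    E = np.zeros(N.shape) + E0
    I = np.zeros(N.shape) + I0
    R = np.zeros(N.shape)
    D = np.zeros(N.shape)
    N_age = N.sum(axis=0)  # contacted peers are only known by their age group
    for _ in range(int(days / dt)):
        I_age = I.sum(axis=0)
        lam = beta * np.einsum("dij,j->di", C, I_age / N_age)  # force of infection lambda_{d,i}
        dS = -lam * S
        dE = lam * S - eps * E
        dI = eps * E - mu * I
        dR = mu * (1.0 - ifr) * I
        dD = mu * ifr * I
        S, E, I = S + dt * dS, E + dt * dE, I + dt * dI
        R, D = R + dt * dR, D + dt * dD
    attack_rate = (N - S) / N  # fraction of each (sub-group, age) group ever infected
    return attack_rate, D

For a chosen reproduction number, beta can be set as described later in this SI via beta = R_0 * mu / ρ(C_ij), with ρ the spectral radius of the aggregate age contact matrix.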
§ STATISTICAL MODEL In this section we provide all the results of the additional statistical analysis that we performed in order to support the findings presented in the Result section of the main text. §.§ Model on contacts §.§.§ Accounting for the high number of zeros Due to the implementation of NPIs (lockdowns and curfews) throughout the pandemic, people were forced, when possible, to reduce or completely reset their number of contacts. Thus, when analysing the distribution of the number of contacts we found a high presence of zeros. To test the robustness of the results, coming from the negative binomial regression model (nb), to this zero-inflated mechanism we implemented two additional models: * After having excluded the observations where the number of contacts were zero we re-run the negative binomial regression model on the non-zero number of contacts (nb_n contacts>0). μ_i = α + β_1 age_group_i + β_2 X_i + β_3 age_group_i* X_i + ϵ_i * We modelled the probability to have at least a contact using a logistic regression model (logit). log P_i(any contact=1) = α + β_1 age_group_i + β_2 X_i + β_3 age_group_i* X_i + ϵ_i where age_group_i is the age class of i; X_i is the variable of interest (e.g., education, income etc.), age_group_i* X_i is the interaction term of age group and the variable of interest, and ϵ_i is the error term. In model <ref> given μ_i, we define λ_i=exp(μ_i) to be the expected number of contacts for respondent i. While in model <ref> P_i(any contact=1) indicate the probability for respondent i to have at least one contact. By applying the same methodology as explained in the Method section we computed the max significance level by age for each of the variables and period considered in the analysis for model <ref> and <ref>. Fig <ref> shows the results of the three models that we implemented. Interestingly, we can see that the same qualitative patterns result from both the negative binomial models, that is if we include the observations where the number of contacts is 0 (nb), o we discard it (nb_n contacts>0) (Fig. <ref>a,b). In particular, although the maximum confidence level computed with these models differ in terms of variability, education, employment and income seem to remain the most significant dimensions in terms of explaining differences in contact numbers among subgroups of the population. The logistic regression model (logit) (Fig. <ref>c), shows similar results regarding the education and employment situation while it indicates that the other variable analyzed are not actually determining if individuals actually have contacts or not. §.§.§ Community contacts Furthermore, we also investigated the contacts happening exclusively in the community layer, excluding the ones happening at work. We show here the results of the statistical analysis on the number of contacts in the community layer. Particularly in Fig. <ref> we show the results of the three models we are implementing: 1. negative binomial regression (nb), 2. negative binomial regression only on positive observations of the number of contacts (nb_n contacts>0), and 3. logistic regression to model the probability of having at least one contact (logit). In all these models the dependent variable (y_i) refers to the community-level contacts of the individuals. Results are similar to the one run considering all the types of contacts together. 
Indeed, also in this case the results indicate that education, employment and income are the most significant dimensions in terms of explaining differences in contact numbers in the community layer among subgroups of the population. The logistic regression model (logit) (Fig. <ref>c) also shows similar results, indicating that only education and employment are significant in determining whether individuals actually have contacts or not in the community layer. §.§ Model on vaccination We model the probability of getting vaccinated against Covid-19 using a logistic regression model instead of a negative binomial one, as the dependent variable was binary. Namely, we model the probability of getting vaccinated for respondent i using logistic regression as defined in Eq. <ref>: log P_i(vax=1) = α + β_1 age_group_i + β_2 X_i + β_3 age_group_i* X_i + ϵ_i where age_group_i is the age class of i; X_i is the variable of interest (e.g., education, income etc.), age_group_i*X_i is the interaction term of age group and the variable of interest, and ϵ_i is the error term. By applying the same methodology as explained in the Methods section, we computed the maximum significance level by age for each of the variables and periods considered in the analysis (Fig. <ref>). § CONTACTS §.§ Average number of contacts in the community and in the work layer by sub-groups In the main text, we show the evolution of contacts over time decoupled by education level and employment. For completeness, here we report the same figures for income level (Fig <ref>a,c) and settlement (Fig <ref>b,d). Looking at the community contacts, we can observe that there is a clear ranking among the income levels in their number of contacts, with high-income individuals having the highest number of contacts and low-income individuals having the lowest. The same holds for individuals living in the capital, who appear to be the most active in the community layer, while individuals living in rural areas are the least active. As far as contacts at work are concerned, we can clearly observe that high-income individuals were the ones who could better adapt to the epidemiological situation, while mid- and low-income individuals maintained a fairly stable number of contacts over time at the workplace. In particular, the latter reported a higher number of contacts at the workplace during the Covid-19 waves. A similar conclusion can be drawn when we decouple individuals according to their settlement. In this case, individuals living in rural areas appear to be the most active in the workplace, while the ones living in the capital tend to have a lower number of interactions in the workplace. §.§ Average number of contacts in the community layer by sub-groups and age groups To show the robustness of our findings across different age groups, in Fig <ref> we report the evolution of community contacts over time decoupled by education level, employment, income level and settlement. While the correlation with age influences the magnitude of the differences among the examined sub-groups, the conclusions discussed in the main text remain valid. §.§ Contact Matrices For each of the periods considered in this study, we computed the age contact matrix C_ij considering the whole population, as shown in Fig <ref>. These are the matrices that have been fed to the conventional SEIR model. 
Instead for the extended SEIR model, we computed the decoupled contact matrices (C_di,j) considering different dimensions as well: (i) employment situation (Fig <ref>), (ii) education level (Fig <ref>),(iii) settlement (Fig <ref>), and (iv) income level (Fig <ref>).All the matrices have been computed considering the contact at work, in the community and in the household. Detailed explanation of the computation of such matrices is provided in the Method section. § VACCINATION UPTAKE Here we show the probability of getting vaccinated against COVID-19 given age and another dimension of interest. Namely, we consider (i) employment situation, (ii) education level,(iii) settlement, and (iv) income level. From Fig. <ref> we can observe that privilege groups of the population tend to have higher vaccination uptake across all age groups. This fining is consistent over the four different periods considered in the analysis. § EPIDEMIC MODELS In this section, we report the ODE's equation of the (i) conventional age-stratified SEIRD model and, (i) extended SEIRD where beyond the age stratification we can differentiate the population along others dimension of interest (d̅). Specifically, the reported equations for the extended SEIRD model account also for vaccination. §.§ Conventional age-stratified SEIRD Let's consider an infectious disease that can be described with a Susceptible-Exposed-Infected-Recovered-Death model <cit.>. The epidemic dynamic is encoded in the set of differential equations in Eq. <ref>. Where, i indicates the age group of the ego, j indicates the age group of the alter, β is the probability of transmission given a contact, ϵ is the rate at which individuals become infectious, μ is the recovery rate, C_ij is the age contact matrix, and IFR_i is the infection fatality rate by age group. [ Ṡ_i = -λ_i S_i; Ė_i = λ S_i -ϵ E_i; İ_i = ϵ E_i-μ I_i; Ṙ_i = μ I_i; Ḋ_i = μ IFR_i I_i ] The force of infection is defined as in eq. <ref> λ_i= β∑_jC_ij/N_j I_j §.§ Extended SEIRD with vaccination We extend the conventional SEIRD model to account for different vaccination uptake of different groups of the population. Namely, each of the compartments is now considered separately for vaccinated and unvaccinated individuals. We define as g_1 and g_2 the efficiency of the vaccination respectively against infection and against death. In addition, we consider the delay in the official registrations of deaths by adding a new compartment Da and a delay of Δ^-1 days. The equations of the model are presented in equation <ref>. [ Ṡ_d̅,i = -λ_d̅,i S_d̅,i; Ṡv̇_d̅,i = -(1-g_1)λ_d̅,i Sv_d̅,i; Ė_d̅,i = λ_d̅,i S_d̅,i -ϵ E_d̅,i; Ėv̇_d̅,i = (1-g_1)λ_d̅,i Sv_d̅,i-ϵ Ev_d̅,i; İ_d̅,i = ϵ E_d̅,i - μ I_d̅,i; İv̇_d̅,i = ϵ Ev_d̅,i- μ Iv_d̅,i; Ṙ_d̅,i = μ (1-IFR_i) I_d̅,i; Ṙv̇_d̅,i = μ (1-(1-g_2)IFR_i) Iv_d̅,i; Ḋ_d̅,i = μ IFR_iI_d̅,i; Ḋv̇_d̅,i = μ (1-g_2) IFR_i Iv_d̅,i; Ḋ_̇ȧ_d̅,i = Δ^-1Ḋ_d̅,i; Ḋv̇_̇ȧ_d̅,i = Δ^-1Ḋv̇_d̅,i; ] The force of infection is defined as in eq. <ref> [ λ_d̅,i(t)= β∑_jC_d̅,ij/N_j [I_j+Iv_j] ] § EPIDEMIC SIMULATIONS We developed stochastic, discrete-time, compartmental models using chain binomial processes to simulate the transitions among compartments. Specifically, at each time step t, the model samples the number of individuals in group (d̅,i) and compartment X transitioning to compartment Y from PrBin(X_d̅,i(t),p_X_d̅,iY_d̅,i(t)). Here, p_X_d̅,iY_d̅,i(t) represents the transition probability. 
To illustrate this, let's consider the number of individuals in the group (d̅, i) and compartment S that at time t become exposed, transitioning to compartment E. The number of individuals in S_d̅, i(t) getting exposed is thus extracted from PrBin(S_d̅, i(t), λ_d̅, i(t)), where λ_d̅, i(t) is the force of infection. The model has been initialized by computing the population distributions from the MASZK data, while we set the Hungarian population size to 9,750,000. The decoupled contact matrices have been computed considering the contacts happening at work, in the community and with family members. The epidemiological parameters are set to realistic values retrieved from the literature, to closely simulate the characteristics of Covid-19. In particular, ϵ is set to 2.4; γ is set to 1/6.6 <cit.>. The transmission rate β is computed in each of the periods using the Next Generation Matrix approach <cit.> on the aggregate age-contact matrices corresponding to the periods analysed. We fixed R_0 = 2.5 and derived β using Eq. <ref>: R_0 = β/μρ(C_ij), where ρ(C_ij) is the spectral radius of the age contact matrix. In the simulations in which we introduce vaccination, we set g_1 = 0.6 and g_2 = 0.8, respectively <cit.>. The initial size of the epidemic is set to 5. All the results in the main text refer to the median over 500 simulations of the model. §.§ Impact of different contact patterns In this section, we show the results of the extended SEIR model when differences in age contact matrices are considered for different sub-groups of the population. Particularly, we model age contact interactions differentiating individuals along their employment situation, education level, settlement and income level. For each of the dimensions considered, we run the extended SEIR model. We look at (i) how the prediction of this model differs from the conventional SEIR and (ii) how the attack rate differs for different subgroups as a result of their differences in contact patterns. Specifically, in Fig. <ref> we show the difference between the attack rate by age group as predicted by the extended SEIR model and by the conventional SEIR model. The results are shown for each of the periods considered in the analysis. Again, as demonstrated in the main text, the conventional SEIR model tends to overestimate the attack rate by age group with respect to the extended SEIR model, particularly when the employment situation and education are taken into account. In Fig. <ref> we show the output of each of the extended SEIR models in terms of attack rate by age, differentiating along the subgroups taken into account. Results are shown for each of the periods considered in the analysis. As shown in the main text, the analysis over the other periods confirms that employed and highly educated individuals happened to be the most infected groups in all age groups. When decoupling age contact matrices by settlement and income, although differences appear smaller between groups, high-income individuals and the ones living in the capital are more infected, particularly elderly ones with age 60+. §.§ Impact of different vaccination uptake In order to show the impact of different vaccination uptake, here we show the averted attack rate by age due to vaccination (Fig. <ref>). 
Specifically, we show the difference among the attack rate by age, for the different subgroups as predicted by the extended SEIR in the non-vaccination scenario with respect to the one in which individuals get vaccinated according to their age and subgroup- as shown in Fig. <ref>. The findings from the additional periods support the observations discussed in the main text. Indeed, Fig. <ref> clearly shows that vaccination benefits are disproportionately advantageous for more privileged population groups. § MODEL CALIBRATION We calibrate a SEIRD model with vaccination by modelling differences in contacts patterns and vaccination uptake among employed and not employed individuals belonging to different education levels. In particular, we calibrate the free parameters of the model using an Approximate Bayesian Computation (ABC) technique <cit.> . First, we define the prior distributions of the free parameters P(θ), a number of accepted sets N, an error metric m(E,E'), and a tolerance δ. We start sampling a set of parameters θ from P(θ), and generate an instance of the model using these parameters. Then, using the chosen error metric we compare an output quantity E' of the model with the corresponding real quantity E: if m(E,E') < δ then we accept the set θ, otherwise we reject it. We repeat this accept/reject step until N parameter sets are accepted. The empirical distribution of the accepted sets is an approximation of their real posterior distribution. Finally, we generate an ensemble of possible epidemic trajectories sampling parameter sets from the posteriors distributions. In this work, we consider the following free parameters and prior distributions: * the transmission rate parameter β: the prior distribution is set to U(0.02,0.15) * the delay in reporting deaths Δ: the prior distribution is set to U(5,20) * the initial recovered population R: the prior distribution is set to U(700K,2800K) * the initial exposed population E: the prior distribution is set to U(100,3K) * the initial infected population I: the prior distribution is set to U(200,9K) We calibrate our model on the aggregate number of daily deaths from 09/2021 to 01/2022. For simplicity, as the percentage of those who were vaccinated was quite stable <cit.> in the period considered we assume that the population got vaccinated at time 0. As an error metric, we use the Median Absolute Percentage Error (MdAPE). We also set the number of accepted sets N = 3000 and the tolerance δ = 0.3 In Fig <ref> are shown the posterior distributions of the parameters calibrated through the ABC rejection algorithm. The fixed parameters of the model have been informed from the literature. In particular: * the efficacy of the vaccine against infection and against death, is modelled as a normal distribution with mean respectively g_1 = 0.7, g_2 = 0.8, and standard deviation 0.05 <cit.>. This choice has been made to account for the variability of the efficacy of the different vaccination types and against the different variants. * the recovery rate μ = 1/2.5 <cit.> * the incubation period ϵ = 1/4 <cit.> * the infection fatality rate by age IFR_i is set as in Table <ref> <cit.> In Fig. <ref> we report the number of daily real and simulated deaths (median and 95% CI). 10 marmot2008closing Michael Marmot, Sharon Friel, Ruth Bell, Tanja AJ Houweling, and Sebastian Taylor. Closing the gap in a generation: health equity through action on the social determinants of health. The lancet, 372(9650):1661–1669, 2008. 
mamelund2021association Svenn-Erik Mamelund, Clare Shelley-Egan, and Ole Rogeberg. The association between socioeconomic status and pandemic influenza: systematic review and meta-analysis. PLoS One, 16(9):e0244346, 2021. kikuti2015spatial Mariana Kikuti, Geraldo M Cunha, Igor AD Paploski, Amelia M Kasper, Monaise MO Silva, Aline S Tavares, Jaqueline S Cruz, Tássia L Queiroz, Moreno S Rodrigues, Perla M Santana, et al. Spatial distribution of dengue in a brazilian urban slum setting: role of socioeconomic gradient in disease risk. PLoS neglected tropical diseases, 9(7):e0003937, 2015. mena2021 Gonzalo E. Mena, Pamela P. Martinez, Ayesha S. Mahmud, Pablo A. Marquet, Caroline O. Buckee, and Mauricio Santillana. Socioeconomic status determines covid-19 incidence and related mortality in santiago, chile. Science, 372(6545):eabg5298, 2021. Burstrom2020 Bo Burström and Wenjing Tao. Social determinants of health and inequalities in COVID-19. European Journal of Public Health, 30(4):617–618, 07 2020. paul2021socio Ayan Paul, Philipp Englert, and Melinda Varga. Socio-economic disparities and covid-19 in the usa. Journal of Physics: Complexity, 2(3):035017, 2021. zhao_harris_ellis_pebody_2015 H. Zhao, R. J. Harris, J. Ellis, and R. G. Pebody. Ethnicity, deprivation and mortality due to 2009 pandemic influenza a(h1n1) in england during the 2009/2010 pandemic and the first post-pandemic season. Epidemiology and Infection, 143(16):3375–3383, 2015. Gozzi2020 Nicolò Gozzi, Michele Tizzoni, Matteo Chinazzi, Leo Ferres, Alessandro Vespignani, and Nicola Perra. Estimating the effect of social inequalities in the mitigation of covid-19 across communities in santiago de chile. medRxiv, 2020. Jay2020 Jonathan Jay, Jacob Bor, Elaine O. Nsoesie, Sarah K. Lipson, David K. Jones, Sandro Galea, and Julia Raifman. Neighbourhood income and physical distancing during the covid-19 pandemic in the united states. Nature Human Behaviour, 12, 12 2020. valdano2021highlighting Eugenio Valdano, Jonggul Lee, Shweta Bansal, Stefania Rubrichi, and Vittoria Colizza. Highlighting socio-economic constraints on mobility reductions during covid-19 restrictions in france can inform effective and equitable pandemic response. Journal of travel medicine, 28(4):taab045, 2021. pullano2020evaluating Giulia Pullano, Eugenio Valdano, Nicola Scarpa, Stefania Rubrichi, and Vittoria Colizza. Evaluating the effect of demographic factors, socioeconomic factors, and risk aversion on mobility during the covid-19 epidemic in france under lockdown: a population-based study. The Lancet Digital Health, 2(12):e638–e649, 2020. bonaccorsi2020economic Giovanni Bonaccorsi, Francesco Pierri, Matteo Cinelli, Andrea Flori, Alessandro Galeazzi, Francesco Porcelli, Ana Lucia Schmidt, Carlo Michele Valensise, Antonio Scala, Walter Quattrociocchi, et al. Economic and social consequences of human mobility restrictions under covid-19. Proceedings of the National Academy of Sciences, 117(27):15530–15535, 2020. Sommer2015 Isolde Sommer, Ursula Griebler, Peter Mahlknecht, Kylie Thaler, Kathryn Bouskill, Gerald Gartlehner, and Shanti Mendis. Socioeconomic inequalities in non-communicable diseases and their risk factors: an overview of systematic reviews. BMC Public Health, 15, 09 2015. Bambra964 Clare Bambra, Ryan Riordan, John Ford, and Fiona Matthews. The covid-19 pandemic and health inequalities. Journal of Epidemiology & Community Health, 74(11):964–968, 2020. anderson1991infectious Roy M Anderson and Robert M May. Infectious diseases of humans: dynamics and control. 
http://arxiv.org/abs/2307.07594v1
20230714193712
Non-factorisable effects in the decays $\bar B_{s}^0 \to D_{s}^+ \pi^-$ and $\bar B^0 \to D^+ K^-$ from LCSR
[ "Maria Laura Piscopo", "Aleksey Rusov" ]
hep-ph
[ "hep-ph", "hep-ex" ]
§ INTRODUCTION The study of B-meson decays plays a crucial role in testing the Standard Model (SM), as well as in searching or constraining possible New Physics (NP) scenarios. Among these decays, those involving only non-leptonic final states are notoriously the most challenging to be described, due to the complicated underlying hadronic structure. Their investigation, however, can allow one to test the different QCD based methods designed specifically for the study of these processes. In the present work, we focus on two particularly interesting examples of non-leptonic two-body B-meson decays, namely B̅^0 → D^+ K^- and B̅_s^0 → D_s^+ π^-. In fact, as the flavour of all the quarks in the final state is different, see Figure <ref>, neither the penguin nor the weak-annihilation topologies can contribute, and these are generally considered to be among the theoretically cleanest non-leptonic B-meson decays. By now these modes are determined experimentally quite precisely, and the Particle Data Group (PDG) quotes the following values <cit.> Br (B^0_s → D^-_s π^+ )|_ exp. = (2.98 ± 0.14) × 10^-3 , Br (B^0 → D^- K^+)|_ exp. = (2.05 ± 0.08) × 10^-4 , based on measurements by the LHCb, Belle and CDF collaborations <cit.>. On the theoretical side, the amplitude for the decays B̅_(s)^0 → D_(s)^+ L^-, with L = {π, K}, can be computed by introducing the effective Hamiltonian H_ eff, governing the tree-level non-leptonic b-quark transition b → c u̅ q, with q ={d, s }. This reads <cit.> H_ eff = G_F/√(2) V_cb V_uq^* (C_1 O_1^q + C_2 O_2^q ) + h. c. , where G_F is the Fermi constant, V_q_1q_2 denote the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, and the Δ B = 1 current-current operators O_1,2^q are defined in the Chetyrkin-Misiak-Münz (CMM) basis respectively as <cit.>[Note that the notation in eq. (<ref>) is opposite to the one used in Refs. <cit.>.] O_1^q = (c̅Γ^μ b ) (q̅Γ_μ u) , O_2^q = (c̅Γ^μ t^a b ) (q̅Γ_μ t^a u), with Γ_μ = γ_μ (1 - γ_5), and t^a being the SU(3)_c generators in the fundamental representation. In eq. (<ref>), the Wilson coefficients C_1,2 are evaluated at the renormalisation scale μ_b ∼ m_b, and are currently known up to NNLO logarithmic accuracy <cit.>.
The amplitude then takes the form A (B̅_(s)^0 → D_(s)^+ L^-) = - G_F/√(2) V_cb V_uq^* (C_1 ⟨ O_1^q ⟩ + C_2 ⟨ O_2^q ⟩) , where we have introduced the shorthand notation ⟨ O_i^q ⟩≡⟨ D^+_(s) L^-| O_i^q | B̅_(s)^0 ⟩. The simplest approach to determine the two matrix elements appearing in eq. (<ref>) is naive QCD factorisation (NQCDF). Within this approximation, the matrix element of the colour-singlet operator factorises into the product of the light meson decay constant f_L and of the scalar form factor f_0^B_(s)D_(s)(q^2), parameterising the B_(s)→ D_(s) transition, whereas the matrix element of the colour-octet operator vanishes, namely ⟨ O_1^q ⟩|_ NQCDF= i f_L (m_B_(s)^2 - m_D_(s)^2) f_0^B_(s)D_(s) (m_L^2) , and ⟨ O_2^q ⟩|_ NQCDF= 0 . Because of eq. (<ref>), in the literature, ⟨ O_1 ⟩ and ⟨ O_2 ⟩ are commonly referred to as the factorisable and non-factorisable matrix elements. Also in the present work we follow this notation, however with the remark that the distinction applies strictly only to LO-QCD, since, by including perturbative gluon corrections, both the matrix elements receive factorisable and non-factorisable contributions. We stress in fact that the accuracy of our study is limited to LO-QCD and, unless explicitly stated, ⟨ O_1 ⟩ and ⟨ O_2 ⟩ should always be understood as the corresponding tree-level matrix elements. A first estimate of the non-factorisable matrix element beyond the NQCDF approximation was obtained by Blok and Shifman in 1992 <cit.>. Using the framework of QCD sum rule (QCDSR) <cit.> with a two-point correlation function, the authors found positive non-factorisable corrections, of the order of a few percent, to the amplitude in eq. (<ref>). Specifically, with the NLO values C_1 = 1.01 and C_2 = -0.32, their result leads to [Using the LO values for the Wilson coefficients C_1= 1.03 and C_2 = - 0.53, which corresponds to the accuracy of Ref. <cit.>, yields instead C_2 ⟨ O_2^d ⟩ / C_1 ⟨ O_1^d ⟩∼ 13 %. ] C_2 ⟨ O_2^d ⟩/C_1 ⟨ O_1^d⟩|_ QCDSR∼ 8 % , B̅_s^0 → D_s^+ π^- . It is worthwhile pointing out that a later study, of the theoretically less clean mode B̅^0 → D^0 π^0, was performed in Ref. <cit.>, using the light-cone sum rule (LCSR) method <cit.> with pion light-cone distribution amplitudes (LCDAs). Also in the latter work, estimates of the non-factorisable contribution gave a sizeable and positive result, in consistency with Ref. <cit.>. However, a more recent analysis of the same decay B̅^0 → D^0 π^0 performed in Ref. <cit.>, again with the LCSR framework and pion LCDAs, but starting from a three-point correlation function, closely following the approach introduced in Ref. <cit.>, found the non-factorisable contribution to be sizeable, but negative. At the end of the '90s, a new framework for the computation of several non-leptonic two-body B-meson decays was developed in Refs. <cit.>, the QCD factorisation (QCDF) method. Within QCDF, the matrix elements ⟨ O_i^q ⟩ in eq. (<ref>) can be computed respectively as ⟨ O_i^q ⟩|_ QCDF = ∑_j f_j^B_(s) D_(s) (m_L^2) ∫_0^1 d u T_ij (u) φ_L (u) + O(Λ_ QCD/m_b), where T_ij (u) are the corresponding hard-scattering kernels, which can be calculated perturbatively in QCD, ϕ_L (u) denotes the L-meson LCDA, and f_j^B_(s) D_(s) (q^2) are the form factors parametrising the B_(s)→ D_(s) transition. The latter two inputs are related to the hadronic structure of the mesons considered and therefore must be determined using some non-perturbative technique like Lattice QCD or QCD sum rule. In some cases, they could also be extracted from data.
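For orientation, the difference between the 8 % quoted above and the 13 % quoted in the accompanying footnote is driven entirely by the LO → NLO change of the ratio C_2/C_1, at fixed ⟨ O_2 ⟩/⟨ O_1 ⟩. A minimal arithmetic sketch (Python), using only the Wilson-coefficient values quoted above:

```python
# Rescaling of C2<O2>/(C1<O1>) between NLO and LO Wilson coefficients,
# keeping the ratio of matrix elements <O2>/<O1> fixed.
C1_NLO, C2_NLO = 1.01, -0.32   # NLO values quoted in the text
C1_LO,  C2_LO  = 1.03, -0.53   # LO values quoted in the footnote

ratio_NLO = 0.08                         # Blok-Shifman estimate with NLO coefficients
r = ratio_NLO * C1_NLO / C2_NLO          # implied <O2>/<O1> (negative, since C2 < 0)
print(f"LO coefficients give C2<O2>/(C1<O1>) ~ {C2_LO / C1_LO * r:.2f}")   # ~0.13
```

The shift is thus a statement about C_2/C_1 alone, not about the sum-rule estimate of the matrix elements themselves.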
It is important to emphasize that since the factorisation formula in eq. (<ref>) holds up to power corrections of the order of Λ_ QCD/m_b, the QCDF framework allows one to systematically compute only the leading power contribution to the amplitude, however, to higher order in α_s. Furthermore, the matrix element , vanishing at LO-QCD, constitutes at this order a purely next-to-leading power effect i.e. = O(Λ_ QCD/m_b) + O(α_s). The QCDF method was proven to be a very powerful tool for the computation of several non-leptonic B-meson decays. Remarkably, the hard-scattering kernels T_ij (u) are known up to NNLO-QCD corrections <cit.>. However, a systematic study of these processes beyond the leading power becomes challenging. A recent analysis of the decays B̅_(s)^0 → D_(s)^(*)+ L^- within QCDF was performed in Ref. <cit.>. The authors have included NNLO-QCD corrections for the hard-scattering kernels from Ref. <cit.>, and the B_(s)→ D_(s)^(*) form factors from Ref. <cit.>, where the latter were obtained fitting the corresponding Isgur-Wise functions up to corrections of the order O(Λ_ QCD^2/m_c^2) in the Heavy Quark Expansion, by combining both Lattice QCD data <cit.> and QCDSR results <cit.>. In addition, they have also obtained a first estimate of the next-to-leading power corrections, by computing, within LCSR, the corresponding hadronic matrix element emerging in QCDF <cit.>. This effect was found to be very small, of the order of sub-percent, namely <cit.> A (B̅_(s)^0 → D_(s)^+ L^-)|_ NLP/ A (B̅_(s)^0 → D_(s)^+ L^-)|_ LP≃ - [0.06, 0.6] % , leading all together to very precise predictions for the branching fractions, which resulted to be significantly above the corresponding experimental data. Specifically, the authors of Ref. <cit.> have obtained Br (B̅^0_s → D^+_s π^- )|_ QCDF = (4.42 ± 0.21) × 10^-3 , Br (B̅^0 → D^+ K^-)|_ QCDF = (3.26 ± 0.15) × 10^-4 , in clear tension with the values shown in eqs. (<ref>), (<ref>) [Note that in the SM, the direct CP-asymmetry in these decays is negligible, therefore Br (B̅_(s)^0 → D_(s)^+ L^-) = Br (B_(s)^0 → D_(s)^- L^+). However, this might not necessarily hold in the presence of NP. In this respect, a clear experimental test was suggested in Refs. <cit.>.]. Finally, a later study of the same decays within QCDF, however only at leading power, was performed in Ref. <cit.>. The conclusions obtained were similar to those in Ref. <cit.> and also their analysis revealed a large discrepancy with the data. This puzzling pattern has attracted significant attention in the recent literature, and has led to further investigations of these decays, both within the SM and beyond <cit.>. A conclusive explanation is, however, still missing. The current status of the non-leptonic decays B̅_(s)→ D^+_(s) L^- represents a strong motivation to revisit the estimates of the non-factorisable contribution due to . Given the two very different results shown in eqs. (<ref>), (<ref>), we present a new determination of the matrix element within LCSR, starting from a three-point correlation function with B-meson LCDAs, partially following the method suggested in Ref. <cit.>. Moreover, we also compute for the first time within LCSR, the factorisable matrix element , including both two- and three-particle LCDAs, and thus obtain predictions for the corresponding branching fractions entirely within the same framework. The paper is organised as follows. In section <ref>, we describe the computation of the non-factorisable matrix element within LCSR. 
More precisely, a detailed derivation of the operator product expansion for the three-point correlator is presented in section <ref>, the light-cone dominance of the correlation function and the problem associated with the lack of generalised B-meson quark-gluon-quark matrix elements with non-aligned fields, are discussed in section <ref>, while the hadronic dispersion relations are derived in section <ref>. In section <ref>, we briefly discuss the computation of the factorisable matrix element within LCSR. Our numerical analysis is presented in section <ref>. In particular, a detailed discussion of the inputs used in the analysis can be found in section <ref>, while our results are shown in section <ref>. Finally, in section <ref>, we present our conclusions, as well as a comprehensive outlook for future improvements. § DETERMINATION OF ⟨ O_2 ⟩ FROM LCSR §.§ Derivation of the OPE for the correlator To compute the hadronic matrix element ⟨ O_2 ⟩ [Unless explicitly stated, we assume, for definiteness, the mode B̅_s → D_s^+ π^-, and often drop, for the sake of a cleaner notation, all labels. The discussion presented here, in fact, straightforwardly extends to the mode B̅^0 → D^+ K^-, once the proper replacements are taken into account.] within the framework of LCSR, we start by introducing the following three-point correlation function F^ O_2_μ (p, q) = i^2 ∫ d^4 x e^i p · x∫ d^4 y e^i q · y ⟨ 0| T{j^D_5 (x), O_2 (0), j^π_μ (y) } | B̅ (p + q) ⟩ , where j^D_5 (x) = i m_c s̅γ_5 c and j^π_μ (y) = u̅γ_μγ_5 d are suitable interpolating currents with the quantum numbers of the D^+_s- and π^--mesons, and momenta p^μ and q^μ, respectively. We consider eq. (<ref>) in the kinematical domain P^2 ≡ -p^2 ≫Λ^2, and Q^2 ≡ -q^2 ≫Λ^2, with Λ denoting a small hadronic scale of the order of Λ_ QCD. With this choice, as discussed further in section <ref>, the dominant contribution to the correlator originates from the region in which both x^μ and y^μ are approximately light-like and aligned along different light-cone (LC) directions, i.e. x^2 ∼ 0 , y^2 ∼ 0 , (x-y)^2 ≁0 . A double LC expansion, however, currently can not be consistently performed due to the lack of the proper hadronic input functions, that is of the B-meson three-particle non-local matrix element with the gluon and the spectator quark aligned on different LC directions. For this reason, in the following, we consider the specific case of LC-local dominance, which is also compatible with the present kinematics, see section <ref>, and expand the time-ordered product in eq. (<ref>) around x^2∼ 0 but y^μ∼ 0 [In principle, also the opposite choice i.e. expanding around y^2∼ 0 and x^μ∼ 0 could be considered. We leave the investigation of this alternative scenario for a future study.]. In this way, in fact, the relevant hadronic matrix element can be derived from the expression for aligned fields given e.g. in Ref. <cit.>, by setting the LC coordinate of the gluon field to zero. We return to this point later on. Expanding the time-ordered product in eq. (<ref>), we thus obtain F^ O_2_μ(p,q) = - i m_c ∫ d^4 x ∫ d^4 y e^i p· x e^i q · y ⟨ 0| s̅^i(x)γ_5 i S^(c)_ij(x,0) γ_ρ (1-γ_5) × i S^(u)_mn(0,y) γ_μγ_5 i S_nl^(d)(y,0) γ^ρ (1-γ_5) b^k (0) |B̅(p+q) ⟩ t^a_jk t^a_lm , where S_ij(x,y) denotes the corresponding quark propagator, with the specific quark indicated in the superscript. In deriving eq. (<ref>), the operator O_2 has been Fierz-transformed to avoid the computation of traces involving γ_5 in dimensional regularisation. 
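As an aside, the role of the colour factor t^a_jk t^a_lm appearing in eq. (<ref>) can be made explicit numerically: since the generators are traceless, closing the light-quark loop with the free (colour-diagonal) propagators gives ∑_a tr(t^a) t^a = 0, so at least one gluon must be emitted from the loop, while the standard SU(3) completeness relation fixes the colour structure of the Fierzed operator. A small numerical check (Python with numpy; purely illustrative):

```python
import numpy as np

# SU(3) generators t^a = lambda^a / 2 (Gell-Mann matrices).
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3); lam[7][2, 2] = -2 / np.sqrt(3)
t = lam / 2

# Tracelessness: the gluon-less contraction of the light-quark loop vanishes.
print(np.allclose(sum(np.trace(ta) * ta for ta in t), 0))          # True

# Completeness: sum_a t^a_{jk} t^a_{lm} = (1/2)(d_{jm} d_{lk} - d_{jk} d_{lm}/3).
lhs = np.einsum('ajk,alm->jklm', t, t)
d = np.eye(3)
rhs = 0.5 * (np.einsum('jm,lk->jklm', d, d) - np.einsum('jk,lm->jklm', d, d) / 3)
print(np.allclose(lhs, rhs))                                       # True
```

Returning to the Fierz rearrangement of O_2 used above: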
Note that this can be consistently done since, with the present choice of the operator basis, the Fierz symmetry is respected also at the one-loop order, see e.g. Ref. <cit.>. Owing to the colour structure of eq. (<ref>), the first non-vanishing contribution corresponds to the emission of one gluon from either the u- or d-quark propagators, as shown in Figure <ref>. In the Fock-Schwinger gauge, the local expansion of the quark propagator in an external background gluon field, including the leading one-gluon corrections, can be found e.g. in Refs. <cit.>. The corresponding expression, in the case of massless quark, i.e. for q={u,d}, takes the following form S^(q)_ij(x,y) = ∫d^4 k/(2 π)^4 e^-i k (x-y)[ δ_ij k /k^2 + i ε - G^a_μν t^a_ij/4( k σ^μν + σ^μν k)/(k^2 + iε)^2] + … , where G_μν is the gluon field strength tensor evaluated at the origin, σ_μν = (i/2)[γ_μ, γ_ν], and the ellipses denote terms of higher order neglected at the current accuracy. Substituting eq. (<ref>) into eq. (<ref>), the integral over y^μ can be easily calculated [Manipulations involving the Dirac algebra are performed using the Mathematica package 𝙵𝚎𝚢𝚗𝙲𝚊𝚕𝚌 <cit.>.]. This yields F^ O_2_μ (p,q) = - m_c/2∫ d^4 x e^i p · x ⟨ 0| s̅^i (x) γ_5 S_0^(c) (x) G_νρ^a (0) t^a_ij I^ νρ_μ(q) b^j (0)| B̅ (p+q)⟩, with S_0^(c)(x) denoting the free charm-quark propagator, and the tensor I_μνρ being I_μνρ (q) = i/4 π^2 (q^2 + i ε)( q_ν q^λϵ_μρτλ- q_μ q^λϵ_νρτλ - q^2 ϵ_μνρτ) γ^τ (1 - γ_5) . The result in eq. (<ref>) has been obtained by computing the corresponding loop integral in naive dimensional regularisation (NDR) with d = 4 - 2 ϵ and anticommuting prescription for γ_5. We note that the divergent 1/ϵ contributions exactly cancel when considering the gluon emission from both the u- and d-quark propagators, leading to a finite expression, in consistency with Refs. <cit.>. In addition, we have also performed the computation of the loop function in eq. (<ref>) using the explicit coordinate representation of the local expansion of the propagator, details of which are given in appendix <ref>. To proceed with the calculation of F^ O_2_μ, we must evaluate the non-local matrix element appearing in eq. (<ref>). At leading order in the heavy quark effective theory (HQET), the non-local vacuum-to-B three-particle matrix element with the gluon and the spectator quark aligned on the same light-cone direction can be parametrised in terms of eight LCDAs <cit.>. The matrix element in eq. (<ref>) corresponds to the specific configuration in which the gluon field is fixed at the origin, and its parametrisation can be derived by taking the local limit of the result given e.g. in Ref. <cit.>. We present below only the final expression and refer to appendix <ref> for the intermediate steps. At leading order in HQET, we then have ⟨ 0| s̅_α (x) G_μν(0) b_β(0)|B̅(p+q) ⟩ = 1/2 F_B(μ) √(m_B)∫_0^∞ d ω_1 e^- i ω_1 v · x ×{P_+ [(v_μγ_ν - v_νγ_μ) (ψ̂_A - ψ̂_V) - i σ_μνψ̂_V - i (x_μ v_ν - x_ν v_μ) ψ̅̂̅_X_A + i (x_μγ_ν - x_νγ_μ) (ψ̅̂̅_W + ψ̅̂̅_Y_A) - ϵ_μνητ x^η v^τγ_5 ψ̅̂̅_X̃_A + ϵ_μνητ x^ηγ^τγ_5 ψ̅̂̅_Ỹ_A + (x_μ v_ν - x_ν v_μ) x ψ̅̅̅̂̅̅̅_W - (x_μγ_ν - x_νγ_μ) x ψ̅̅̅̂̅̅̅_ Z ] γ_5 }_βα(ω_1 ; μ) , where α, β, are spinor indices, v^μ =( p^μ + q^μ)/m_B is the velocity of the B meson, F_B(μ) is the HQET decay constant, and P_+ = (1 + v)/2. Three comments are in order with respect to Ref. <cit.>. 
First, the terms proportional to ϵ_μνητ appear with an opposite sign because of the different convention adopted in our work for the Levi-Civita tensor, namely ε^0123 = +1, see also appendix <ref>. Second, we have relabelled some LCDAs to make the notation throughout the paper more transparent. Third, the extra mass factor in eq. (<ref>) follows from the conversion from HQET to QCD for the B-meson state. Moreover, we have also introduced the notation [The μ-dependence of the LCDAs is often omitted, however it should always be understood.] ψ̅̂̅(ω_1) = ∫_0^ω_1 d η ψ̂(η) , ψ̅̅̅̂̅̅̅(ω_1) = ∫_0^ω_1 d η∫_0^η d η^' ψ̂(η^') . Given the explicit x-dependence of eq. (<ref>), the integration over x^μ in eq. (<ref>) can be now performed. To this end, it appears to be more convenient to use the coordinate representation of the free charm-quark propagator, which reads S^(c)_0 (x) = - i m_c^2/4 π^2[ K_1 (m_c √(-x^2))/√(-x^2) - i x/x^2 K_2(m_c √(-x^2)) ], with K_n (z) being the modified Bessel function of the second kind of order n. Taking into account eqs. (<ref>), (<ref>), we are then left with the evaluation of tensor integrals of the type ∫ d^4 x e^i p̃· x K_1(m_c √(-x^2))/√(- x^2) { 1, x^μ, x^μ x^ν, …} , ∫ d^4 x e^i p̃· x K_2(m_c √(-x^2))/x^2 {x^μ, x^μ x^ν, x^μ x^ν x^ρ, …} , where, for simplicity, we have introduced the compact notation p̃^μ = p^μ - ω_1 v^μ. The result for the inverse Fourier transform of Bessel functions in eqs. (<ref>), (<ref>), is explicitly given in appendix <ref>. Using eqs. (<ref>)-(<ref>), we then arrive at the final form of the correlator in eq. (<ref>), that is F^ O_2_μ(p,q) = ((p · q) q_μ - q^2 p_μ) F^ O_2 (p^2, q^2) , with F^ O_2 (p^2, q^2) denoting the corresponding Lorentz invariant amplitude. On this point, an important remark is in order. The result for the correlation function in eq. (<ref>) is transversal with respect to the momentum of the light-quark current q^μ, as expected, since in the limit of massless u- and d-quark, the axial-vector current j_μ^π must be conserved [Since we neglect the mass of the strange quark in the loop, the same argument applies also to the decay B̅^0 → D^+ K^-.]. However, when trying to compute the correlator in eq. (<ref>) by expanding the time-ordered product around x^2 ∼ 0, y^2 ∼ 0, and by using the expression for the B-meson three-particle matrix element with both the gluon and the spectator quark aligned on the same light-cone direction, i.e. implicitly assuming that also (x-y)^2 ∼ 0, we obtain an expression for F^ O_2_μ which is not transversal [Specifically, we find that the transversality of the correlator is violated by terms proportional to u ω_2/m_B, with ω_2 being the momentum of the gluon field and u ∈ [0,1] a LC parameter. We have also explicitly checked that these terms do not vanish in the final result, i.e. after integration over u and ω_2.]. In this respect, we also note that in the case of charm loop with photon coupling studied e.g. in Refs. <cit.>, the expression of the non-local amplitude due to soft gluon emission appears actually to be not transversal with respect to the photon momentum. Surprisingly, this has not been pointed out in the above references, nor, to our best knowledge, elsewhere in the literature. 
Further investigations of this issue would clearly be of utmost importance not only to improve the current estimate of the non-factorisable amplitude in non-leptonic B-meson decays, but also in light of the impact that a better understanding of these non-local effects could have on the present status of the B anomalies, see e.g. the reviews <cit.>. Returning to eq. (<ref>), we isolate the coefficients of the two Lorentz structures and rewrite F^ O_2_μ(p,q) = F^ O_2_q (p^2,q^2) q_μ + F^ O_2_p (p^2,q^2) p_μ, where the LC-local operator product expansion (OPE) for the amplitude F^ O_2_q (p^2,q^2), relevant for the hadronic dispersion relations, see section <ref>, can be expressed as [F^ O_2_q(p^2,q^2)]_ OPE= F_B √(m_B) m_c ∫_0^∞ d ω_1 ∑_ψ̂ψ̂(ω_1) ∑_n= 1^3c_n^ψ̂(ω_1, q^2)/(q^2+ i ε) [ s̃(ω_1, q^2) - p^2 - i ε]^n . In the above equation ψ̂= ψ̂_A, ψ̂_V,…, and for later convenience, the coefficients of the LCDAs have been suitably manipulated so that in eq. (<ref>) the dependence on p^2 is contained exclusively in the denominators. Finally, the function s̃(ω_1, q^2) reads s̃(ω_1, q^2) = ( m_B/m_B- ω_1) [ m_c^2 + ω_1 m_B - q^2 ω_1/m_B- ω_1^2 ] , while the analytic expressions of the OPE coefficients c_n^ψ̂(ω_1, q^2) can be found in appendix <ref>. §.§ Light-cone dominance of the correlator In this section we investigate the conditions for the LC dominance of the correlation function in eq. (<ref>) and discuss the corresponding kinematics. The correlator F_μ^ O_2, in fact, describes the decay of a heavy B meson into two currents with momenta p^μ and q^μ, namely m_B v^μ = p^μ + q^μ , where v^μ = p^μ_B/m_B is the B-meson velocity. In order to be far away from hadronic thresholds originating from the two interpolating currents, we consider the kinematical region in which Q^2 ∼ P^2 ∼ m_BΛ, P^2 ≡ - p^2 , Q^2 ≡ - q^2 , with Λ being a small non-perturbative scale of the order of Λ_ QCD. Hence, both p^2 and q^2 are assumed to be space-like and large, leading to the following power counting m_B^2 ≫ Q^2 ∼ P^2 ≫Λ^2 . It is convenient to study eq. (<ref>) in the rest frame of the B meson, i.e. v^μ = (1, 0⃗ ), aligning, for simplicity, the z-axis along the direction of the decay. Furthermore, we introduce the two light-cone vectors n_±^μ, with n_+^2 = n_-^2 = 0, such that v^μ = (n_+^μ + n_-^μ)/2. Specifically n_+^μ = (1, 0, 0 , 1) , n_-^μ = (1, 0, 0, -1) , (n_+ · n_-) = 2 . A solution for p^μ and q^μ, up to corrections of the order P^4/m_B^4 and Q^4/m_B^4, is given by [Eq. (<ref>) admits also a second solution obtained by exchanging the coefficients of n_-^μ and n_+^μ. Without loss of generality, however, we parametrise the momenta according to eq. (<ref>).] {p^μ = ( m_B^2 + Q^2/2m_B) n_+^μ + (-P^2/2 m_B) n_-^μ , q^μ = (-Q^2/2 m_B) n_+^μ + ( m_B^2 + P^2/2m_B) n_-^μ , . where, due to our choice of the coordinate system, the components transversal to the light-cone vectors vanish, namely p^μ_⊥ = q^μ_⊥ =0. From eqs. (<ref>), (<ref>), it then follows that whereas p^μ has a large component along n_+^μ and a small component along n_-^μ, since the two coefficients respectively scale as (p · n_-) ∼ m_B, (p · n_+) ∼ - Λ, the behaviour is opposite for the two components of q^μ, i.e. (q · n_-) ∼ - Λ, (q · n_+) ∼ m_B. Having fixed the kinematics, we can turn to discuss the structure of the correlation function F_μ^O_2. The integrals in eq. (<ref>) are dominated by the values of x^μ and y^μ in correspondence of which the argument of the exponentials is not large [This follows from the Riemann-Lebesgue theorem.]. 
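Before turning to the oscillation bounds, the parametrisation of p^μ and q^μ given above can be checked numerically. A short sketch (Python with numpy), assuming the representative values P^2 = Q^2 = m_B Λ with Λ = 0.5 GeV and the mostly-minus metric (+,-,-,-):

```python
import numpy as np

mB, Lam = 5.367, 0.5                  # GeV; m_{B_s} and a hadronic scale of order Lambda_QCD
P2 = Q2 = mB * Lam

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b
n_p = np.array([1.0, 0.0, 0.0,  1.0])   # n_+
n_m = np.array([1.0, 0.0, 0.0, -1.0])   # n_-

p = (mB**2 + Q2) / (2 * mB) * n_p + (-P2 / (2 * mB)) * n_m
q = (-Q2 / (2 * mB)) * n_p + (mB**2 + P2) / (2 * mB) * n_m

print(dot(n_p, n_m))                   # 2
print(dot(p + q, p + q) - mB**2)       # 0: (p+q)^2 = m_B^2 exactly
print(dot(p, p) + P2, dot(q, q) + Q2)  # O(Lambda^2) = O(P^4/m_B^2) remainders
print(dot(p, n_m), dot(p, n_p))        # ~ m_B and ~ -Lambda, as in the power counting above
```

In particular, the large and small light-cone components of p^μ and q^μ come out as anticipated, while (p+q)^2 = m_B^2 holds exactly in this parametrisation.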
With the choice of momenta in eq. (<ref>), the absence of fast oscillations, see also e.g. Refs. <cit.> for details, is ensured given that {[ exp{i p · x}≃exp{i m_B x_0/2_≲ O(1)} exp{- i (m_B + 2 Λ) x_3/2_≲ O(1)} ,; exp{i q · y}≃exp{i m_B y_0/2_≲ O(1)} exp{i (m_B + 2 Λ) y_3/2_≲ O(1)} , ]. yielding respectively the bounds { |x_0| ≲2/m_B , |x_3| ≲2/m_B + 2 Λ , |y_0| ≲2/m_B , |y_3| ≲2/m_B + 2 Λ . . From eq. (<ref>) it then follows that [The lower bound for x^2 and y^2 follows from the causality property of correlation functions, see e.g. Refs. <cit.>.] { x_0^2 - x^2_3 ≲4/m_B^2≤4/m_B^2 + x_1^2 + x_2^2 , y_0^2 - y^2_3 ≲4/m_B^2≤4/m_B^2 + y_1^2 + y_2^2 , . ⇒ { 0 ≤ x^2 ≲4/m_B^2 , 0 ≤ y^2 ≲4/m_B^2 , . showing that the region in which the time-ordered product in eq. (<ref>) dominates, corresponds to both x^μ and y^μ being approximately on the light-cone, i.e. x^2 ∼ 0 , y^2 ∼ 0 . On the other side, expressing the integrals in terms of light-cone coordinates, the exponentials in eq. (<ref>) read {exp{i p · x}≃exp{-i Λ (x · n_-)/2_≲ O(1)} exp{i (m_B + Λ) (x · n_+)/2_≲ O(1)} , exp{i q · y}≃exp{ i (m_B + Λ) (y · n_-)/2_≲ O(1)} exp{ - i Λ (y · n_+)/2_≲ O(1)} , . and the absence of fast oscillations now leads to the conditions {|x · n_-|/2≲1/Λ , |x · n_+|/2≲1/m_B + Λ , |y · n_+|/2≲1/Λ , |y · n_-|/2≲1/m_B + Λ . . Eq. (<ref>) thus shows that whereas the x-component along n^μ_- is strongly suppressed, the behaviour is opposite for y^μ, meaning that the integrals in eq. (<ref>) are actually dominated by the region in which x^μ and y^μ are approximately aligned along different light-cone directions, namely[From eqs. (<ref>), (<ref>), it also follows that |x · n_+| ≪ |x_⊥| ≪ |x · n_-| and |y · n_-| ≪ |y_⊥| ≪ |y · n_+|, where we have introduced the notation a_⊥^μ≡ a_⊥ n_⊥^μ with n_⊥^2 = - 1. ] x^μ∼(x · n_-)/2 n^μ_+ , y^μ∼(y · n_+)/2 n^μ_- , (x-y)^2 ≁0 . Had we used the light-cone expansion of the propagator, instead of its local limit given in eq. (<ref>), the resulting matrix element would be ⟨ 0 | s̅_α(x) G_μν(uy) b_β(0)| B̅(p+q) ⟩. Due to eq. (<ref>), the computation of the time-ordered product in eq. (<ref>) in terms of a double LC expansion would thus require the knowledge of the B-meson quark-gluon-quark matrix element with non-aligned fields, which, as already stressed, is not yet available in the literature for generic Dirac structures [Non-local B-meson matrix elements with non-aligned fields have been investigated in e.g. Refs. <cit.>. In particular, vacuum-to-B three-particle matrix elements with the gluon and the light spectator quark aligned on different light-cone directions have been discussed in Ref. <cit.>. In the latter reference, the authors have also proposed a parameterisation for the novel soft function corresponding to the matrix element ⟨ 0 | q̅ (z_1 n_+) G_μν (z_2 n_-) n_-^ν n_+ γ_⊥^μγ_5 h_v(0)| B̅⟩.]. In this connection, we note that by using the B-meson three-particle matrix element with aligned fields, as previously done in similar computations, see e.g. Refs. <cit.>, one might miss the actual dominant contributions and obtain potentially incomplete results. This issue was also recently pointed out in Refs. <cit.>. Hence, since the local limit y^μ∼ 0 is also compatible with the present kinematics, as it follows from eq. (<ref>) [We note that e.g. 
in the first study by Blok and Shifman of the non-leptonic decays here considered <cit.>, or in the determinations of the pion decay constant from QCDSR <cit.>, the local expansion of the light-quark propagator is used in correspondence of a typical scale of Q^2 ∼ 1 ^2, which is consistent with our kinematics, cf. eq. (<ref>).] , we have chosen to perform instead a LC-local expansion, which, albeit less accurate than a double LC expansion, allows us to circumvent the problem associated with the lack of the corresponding matrix element and to compute the correlation function in terms of known hadronic input functions without incurring potential inconsistencies. §.§ Hadronic dispersion relations and sum rule The OPE result in eq. (<ref>) must now be linked to ⟨ O_2 ⟩, the matrix element we aim to estimate. To this end, we proceed with the derivation of the hadronic dispersion relations for the correlator F_μ^ O_2. Starting with the p^2-channel, we insert into eq. (<ref>) a complete set of intermediate states with the D_s^+-meson quantum numbers. This gives F_μ^ O_2(p, q) = m_D^2 f_D/m_D^2 - p^2F̂_μ^ O_2(p_D, q) + q_μ∫_s_h^(D)^∞ d s ρ_h (s, q^2)/s - p^2 + … , where the decay constant of the D_s meson is defined as ⟨ 0 |j_5^D| D ⟩= m_D^2 f_D, and we have introduced the two-point correlation function F̂_μ^ O_2(p_D, q), describing the transition of a B̅_s-meson into a D_s^+-meson and a current j_μ^π, namely F̂_μ^ O_2(p_D, q) = i ∫ d^4 y e^i q · y ⟨ D(p_D)| T{O_2(0), j_μ^π(y)} |B̅(p_D+q)⟩ , with p_D^2 = m_D^2. In eq. (<ref>), the spectral density ρ_h(s,q^2) accounts for the contribution of excited states and of the continuum in the p^2-channel, with s_h^(D) indicating the lowest hadronic threshold. Note that, taking into account the Lorentz decomposition shown in eq. (<ref>), we have already isolated the coefficient of q_μ, which is the only one relevant for the final result, and that the ellipses in eq. (<ref>) denote the remaining contribution proportional to p_μ. As the complicated structure of the spectral density is in general difficult to determine, the integral on the r.h.s. of eq. (<ref>) is often estimated by recurring to the principle of quark-hadron duality (QHD), see e.g. Ref. <cit.>. By analytically continuing the function [F^ O_2_q(p^2,q^2)]_ OPE in eq. (<ref>) in the complex p^2-plane, we can express it in the form of a dispersive integral as [F^ O_2_q(p^2,q^2)]_ OPE = 1/π∫_m_c^2^∞ ds Im_s[F^ O_2_q(s,q^2)]_ OPE/s - p^2 , with m_c^2 being the fist pole on the real axis p^2 = s. Using QHD, we thus approximate ∫_s_h^(D)^∞ d s ρ_h (s, q^2)/s - p^2 = 1/π∫_s_0^D^∞ ds Im_s[F^ O_2_q(s,q^2)]_ OPE/s - p^2 , valid at sufficiently large and negative values of p^2. Here, s_0^D is an effective threshold parameter to be determined. Finally, we perform a Borel transform with respect to the variable p^2. This leads to F̂_μ^ O_2(p_D, q) = q_μ/m_D^2 f_Dπ∫_m_c^2^s_0^D ds e^(m_D^2 - s)/M^2 Im_s[F^ O_2_q(s,q^2)]_ OPE , where M^2 is the corresponding Borel parameter. Proceeding in a similar way with the two-point correlator F̂_μ^ O_2(p_D, q), we can derive the corresponding dispersion relations in the q^2-channel. Inserting into eq. (<ref>) a complete set of states with the quantum number of the π^- meson, yields F̂_μ^ O_2(p_D, q) = i f_π q_μ/m_π^2 - q^2⟨ D(p_D) π(p_π)| O_2| B̅(p_D+p_π)⟩ + q_μ∫_s_h^'^' (π)^∞ d s^' ρ_h^' (s^')/s^' - q^2 + … , with p_π^2 = m_π^2 and (p_D + p_π)^2 = m_B^2. In eq. 
(<ref>), the pion decay constant is defined as ⟨ 0| j_μ^π| π(q)⟩ = i f_π q_μ, while the spectral density ρ_h^'(s^') describes the contribution of excited states and of the continuum in the q^2-channel. Note that in writing the integral on the r.h.s. of eq. (<ref>), we have again taken into account that the correlation function F̂_μ^ O_2(p_D, q) admits the Lorentz decomposition analogous to the one in eq. (<ref>), however now with coefficients which can depend only on the variable q^2 since the first invariant is fixed. The matrix element we aim to determine is now on the r.h.s. of eq. (<ref>). Combining the latter with eq. (<ref>), we obtain i f_π⟨ O_2 ⟩/m_π^2 - q^2 = 1/m_D^2 f_D π∫_m_c^2^s_0^D ds e^(m_D^2 - s)/M^2 Im_s[F^ O_2_q(s,q^2)]_ OPE - ∫_s_h^'^' (π)^∞ d s^' ρ_h^' (s^')/s^' - q^2 . The matrix element ⟨ O_2 ⟩ could in principle be extracted by fitting the r.h.s. of eq. (<ref>). In this case, one could further isolate the next resonance due to the a_1-meson state and employ an ansatz, usually polynomial, to parametrise the remaining contribution due to the continuum. However, this turns out to be practically not feasible, given that the current size of the theoretical uncertainties, strongly affected by the limited accuracy of many input parameters, see section <ref>, makes the disentanglement of the pion state, of the a_1 state and of the continuum extremely challenging. On the other hand, taking into account the approximate 1/q^2 behaviour of the OPE result in eq. (<ref>), which almost perfectly matches the dominant contribution due to the pion pole on the l.h.s. of eq. (<ref>), one can already obtain a good estimate of the matrix element ⟨ O_2 ⟩, by considering only the first term on the r.h.s. of eq. (<ref>). Alternatively, expressing the OPE result on the r.h.s. of eq. (<ref>) as a dispersive integral in the complex q^2-plane, with the first pole being on the real axis s^' =0, and recurring again to QHD, we can approximate ∫_s_h^'^' (π)^∞ d s^' ρ_h^' (s^')/s^' - q^2 = 1/m_D^2 f_D π^2∫_s_0^π^∞ d s^'∫_m_c^2^s_0^D ds e^(m_D^2 - s)/M^2 Im_s^' Im_s[F^ O_2_q(s,s^')]_ OPE/s^' - q^2 , with s_0^π denoting the effective threshold parameter in the π channel. From eqs. (<ref>), (<ref>), after applying a Borel transform with respect to the variable q^2, we arrive at the following sum rule for the non-factorisable matrix element i ⟨ O_2 ⟩ = - e^m^2_π/M^' 2/ f_π f_D m_D^2∫_m_c^2^s_0^D ds ∫_0^∞ d ω_1 ∑_ψ̂ψ̂(ω_1) ∑_n=1^3c_n^ψ̂(ω_1, 0)/(n-1)! e^(m_D^2 - s)/M^2δ_s^(n-1)( s̃(ω_1,0)- s ) , where M^' 2 denotes the corresponding Borel parameter in the q^2-channel and the expression for Im_s^' Im_s[F^ O_2_q(s,s^')]_ OPE follows from using eq. (<ref>), with δ^(n-1)_x(f(x)) indicating the (n-1)-derivative of the delta function with respect to the variable x. Finally, note that also in this way, in consistency with what discussed below eq. (<ref>), because of the 1/q^2 structure of the OPE result, only the contribution due to the pion pole enters eq. (<ref>) and the sum rule becomes independent of s_0^π. § DETERMINATION OF ⟨ O_1 ⟩ FROM LCSR The computation of the factorisable part of the amplitude ⟨ O_1 ⟩ within the framework of LCSR proceeds in a very similar manner to that discussed in the previous section. Therefore, here we limit ourselves to describing only the key steps. 
The starting point is now the following three point correlation function F^ O_1_μ (p, q) = i^2 ∫ d^4 x e^i p · x∫ d^4 y e^i q · y ⟨ 0| T{j^D_5 (x), O_1 (0), j^π_μ (y) } | B̅ (p + q) ⟩ , where the two interpolating currents coincide with those in eq. (<ref>). The kinematics is also chosen to be the same, i.e. P^2 ≡ - p^2 ≫Λ^2 and Q^2 ≡ -q^2 ≫Λ^2, so that the time-ordered product in eq. (<ref>) is again calculated around x^2 ∼ 0 and y^μ∼ 0. Specifically, from eq. (<ref>) we obtain F^ O_1_μ(p,q) = - i N_c m_c ∫ d^4 x ∫ d^4 y e^i p· x e^i q · y ⟨ 0| s̅^i(x)γ_5 i S^(c)_ij(x,0) γ_ρ (1-γ_5) × i S^(u)_0(-y) γ_μγ_5 i S_0^(d)(y) γ^ρ (1-γ_5) b^j (0)|B̅(p+q) ⟩ , with S^(c) (x,0) denoting the charm-quark propagator expanded near the light-cone. Including the leading one gluon corrections, this reads <cit.> S_ij^(c)(x,0) = - i m_c^2 δ_ij/4 π^2[ K_1 (m_c √(-x^2))/√(-x^2) + i x/-x^2 K_2(m_c √(-x^2)) ] - i t^a_ij/16 π^2∫_0^1 d u [ m_c K_0 (m_c √(-x^2)) G_μν^a(u x) σ^μν + i m_c/√(-x^2) K_1 (m_c √(-x^2)) [ u̅ x G^a_μν(u x) σ^μν + u G^a_μν(u x) σ^μν x ] ] + … , where the first line corresponds to the free-quark propagator already introduced in eq. (<ref>), and the ellipses indicate subleading corrections with at least one additional covariant derivative of the gluon field strength tensor; note also that in writing eq. (<ref>) we have already taken into account that the colour structure now forbids the emission of one gluon from the light-quark loop and we have thus replaced the two propagators with the corresponding free quark ones, see Figure <ref>. The integral over y^μ in eq. (<ref>) can be easily performed. In dimensional regularisation it yields the standard massless one-loop two-point function, and, as expected, the result is transversal with respect to the momentum of the light-quark current q^μ. On the other hand, the integration over x^μ can be computed once a parametrisation for the corresponding two- and three-particle B-meson matrix elements is implemented. Using the results given in appendix <ref>, again in the HQET limit, we have respectively ⟨ 0| s̅_α (x) b_β(0)|B̅(p+q) ⟩ = - F_B(μ) √(m_B)∫_0^∞ d ω e^- i ω v · x{i/2(ϕ_+ + x^2 g_+) P_+ γ_5 + 1/4[(ϕ̅_+ - ϕ̅_-) + x^2 (g̅_+ - g̅_-)] P_+ x γ_5 }_βα(ω ; μ) , and ⟨ 0| s̅_α (x) G_μν(ux) b_β(0)|B̅(p+q) ⟩ = 1/2 F_B(μ) √(m_B)∫_0^∞ d ω_1 ∫_0^∞ d ω_2 e^- i (ω_1+ u ω_2) v · x ×{P_+ [ - i σ_μνψ_V + (v_μγ_ν - v_νγ_μ)(ψ_A - ψ_V) - i (x_μ v_ν - x_ν v_μ) ψ̅_X_A + i (x_μγ_ν - x_νγ_μ) (ψ̅_W + ψ̅_Y_A) - ϵ_μνητ x^η v^τγ_5 ψ̅_X̃_A + ϵ_μνητ x^ηγ^τγ_5 ψ̅_Ỹ_A + (x_μ v_ν - x_ν v_μ) x ψ̅̅̅_W - (x_μγ_ν - x_νγ_μ) x ψ̅̅̅_ Z ] γ_5 }_βα(ω_1, ω_2;μ) , with the notation introduced in eqs. (<ref>), (<ref>). Substituting eq. (<ref>) into eq. (<ref>) and using eqs. (<ref>), (<ref>), we are left with the evaluation of the same type of tensor integrals as those in eqs. (<ref>), (<ref>), together with the following one ∫ d^4 x e^i p̃· x K_0(m_c √(-x^2)) {1, x^μ, x^μ x^ν, …} , with p̃^μ = p^μ - ω v^μ, and p̃^μ = p^μ - (ω_1 + u ω_2) v^μ, respectively for the two- and three-particle contributions. Using the expressions for the inverse Fourier transforms of Bessel functions collected in appendix <ref>, we arrive at the final form of the three-point correlator in eq. 
(<ref>), that is F^ O_1_μ(p,q) = F^ O_1_q (p^2,q^2) q_μ + F^ O_1_p (p^2,q^2) p_μ , where the contributions to the invariant amplitude F^ O_1_q (p^2,q^2) due to the two- and three-particle matrix elements are written in terms of a LC OPE, respectively, as [F^ O_1_q (p^2,q^2)]_ OPE, 2p = F_B √(m_B) m_c ∫_0^∞ d ω∑_ϕϕ(ω) ∑_n= 1^4c^ϕ_n (ω, q^2)/[s̃(ω, q^2) - p^2 - i ε]^nln(- q^2/μ^2) , with ϕ = ϕ_+, g̅_+, …, and [F^ O_1_q (p^2,q^2)]_ OPE, 3p = F_B √(m_B) m_c ∫_0^1 d u ∫_0^∞ d ω_2 ∫_u ω_2^∞ d ω∑_ψψ(u, ω_2,ω) ×∑_n=1^4c^ψ_n (u, ω, q^2)/[s̃(ω, q^2) - p^2 - i ε]^nln(- q^2/μ^2), with ψ = ψ_A, ψ_V, …. The function s̃ (ω, q^2) in eqs. (<ref>), (<ref>), is defined as in eq. (<ref>), while the analytic expressions of the OPE coefficients c_n^ϕ(ω, q^2), c_n^ψ(u,ω, q^2) can be found in appendix <ref>. Note that both the divergent 1/ϵ piece and the remaining constant term originating from the light-quark loop have been omitted, as only the coefficient proportional to log(-q^2/μ^2) is relevant for the derivation of the dispersion relations. To this end, we follow the same procedure as done in the previous section, employing QHD as well as applying a Borel transform in both the p^2- and q^2-channels. The final result can be compactly presented as i ⟨ O_1 ⟩ = 1/π^2 f_π f_D m_D^2∫_0^s_0^π d s^'∫_m_c^2^s_0^D d s e^(m_D^2 - s)/M^2 e^(m_π^2 - s^')/M^' 2 Im_s^' Im_s[F^ O_1_q(s,s^')]_ OPE , where [F^ O_1_q(s,s^')]_ OPE includes both the two- and three-particle contributions given in eqs. (<ref>), (<ref>), and the corresponding imaginary part can be easily obtained from the identities given in eqs. (<ref>), (<ref>). § NUMERICAL ANALYSIS §.§ Discussion of the inputs Below we discuss the numerical value of the inputs used in our analysis [In this section the notation B^0 and B_d is used interchangeably.]. Following Ref. <cit.>, the eight LCDAs, arising in the parametrisation of the three-particle B-meson matrix element in eq. (<ref>), are decomposed in terms of DAs of definite collinear twist, see eq. (<ref>). These non-perturbative inputs can then be estimated by constructing specific model-dependent parametrisations, all satisfying the same normalisation conditions and asymptotic behaviour for small value of the arguments <cit.>. In our analysis, we employ the exponential model. Specifically, we follow Refs. <cit.> for the twist-3 and twist-4 LCDAs and use, respectively ϕ_3 (ω_1, ω_2) = λ_E^2 - λ^2_H/6 ω_0^5ω_1 ω_2^2 e^-(ω_1+ω_2)/ω_0 , ϕ_4 (ω_1, ω_2) = λ_E^2 + λ^2_H/6 ω_0^4ω_2^2 e^-(ω_1+ω_2)/ω_0 , ψ_4 (ω_1, ω_2) = λ_E^2/3 ω_0^4ω_1 ω_2 e^-(ω_1+ω_2)/ω_0 , ψ̃_4 (ω_1, ω_2) = λ^2_H/3 ω_0^4ω_1 ω_2 e^-(ω_1+ω_2)/ω_0 , whereas for the twist-5 and twist-6 LCDAs we use the parametrisation proposed in Ref. <cit.>, namely ϕ̃_5 (ω_1, ω_2) = λ_E^2 + λ^2_H/3 ω_0^3ω_1 e^-(ω_1+ω_2)/ω_0 , ψ_5 (ω_1, ω_2) = - λ_E^2/3 ω_0^3ω_2 e^-(ω_1+ω_2)/ω_0 , ψ̃_5 (ω_1, ω_2) = - λ_H^2/3 ω_0^3ω_2 e^-(ω_1+ω_2)/ω_0 , ϕ_6 (ω_1, ω_2) = λ^2_E - λ_H^2/3 ω_0^2 e^-(ω_1+ω_2)/ω_0 . In the studies performed e.g. in Refs. <cit.>, the expansion has been truncated at twist-4 so that the DAs in eqs. (<ref>)-(<ref>) were neglected. In fact, the LCDAs of twist-5 and twist-6 were not expected to contribute at the current accuracy of O(1/m_B) <cit.>, and in addition, the four DAs in eqs. (<ref>)-(<ref>) would not be exhaustive for a complete description of the three-particle matrix element up to twist-6, since other LCDAs of the same order would still be missing <cit.>. However, we stress that the inclusion of the twist-5 and twist-6 DAs in eqs. 
(<ref>)-(<ref>) is actually necessary to ensure that eq. (<ref>) has the correct local limit [From the local limit of eq. (<ref>), cf. eq. (5.1) of Ref. <cit.>, it follows that Ψ_V(0,0)=(1/3)λ_H^2, Ψ_A(0,0)=(1/3)λ_E^2, and Ψ_X_A(0,0)= … = Ψ_Z(0,0) = 0. However, truncating at twist-4, i.e. neglecting Φ̃_5, …, Φ_6, in eq. (<ref>), leads instead to Ψ_Y_A(0,0) = (-1/6) λ_E^2 and Ψ_Ỹ_A(0,0) = (1/6) λ_H^2.], and therefore we refrain from truncating the expansion at twist-4. Moreover, as discussed in the next section and as shown in Table <ref>, we find that neglecting these higher-twist DAs leads to pronounced cancellations, mainly because, in this case, the contribution due to ψ_Ỹ_A is found to largely compensate the one due to ψ_V. On the other hand, when including also the twist-5 and twist-6 LCDAs, the coefficient of ψ_Ỹ_A becomes roughly one order of magnitude smaller and no cancellations between LCDAs arise. Turning to the two-particle DAs, we again adopt the exponential model [Several different models, mostly for the twist-2 LCDA ϕ_+, have been proposed and studied in the recent literature, see e.g. Refs. <cit.>.] and use, for the LCDAs up to twist-4, the parametrisation given in Ref. <cit.>, i.e. ϕ_+ (ω) = ω/ω_0^2 e^-ω/ω_0 , ϕ_- (ω) = e^-ω/ω_0/ω_0 - λ_E^2 - λ_H^2/9 ω_0^3 e^-ω/ω_0[1 - 2 ω/ω_0 + 1/2ω^2/ω_0^2], g_+ (ω) = ω^2/2 ω_0(1 - λ_E^2 - λ_H^2/36 ω_0^2) e^-ω/ω_0 - λ_E^2/6 ω_0^2[(ω - 2 ω_0 ) Ei(-ω/ω_0) + (ω + 2 ω_0 ) e^-ω/ω_0(lnω/ω_0 + γ_E ) - 2 ω e^-ω/ω_0], where Ei(z) is the exponential integral and γ_E is the Euler constant, while for the twist-5 LCDA we follow Ref. <cit.> and use g_- (ω) = ω[3/4 - λ_E^2-λ_H^2/12 ω_0^2(1 - ω/ω_0 + 1/3ω^2/ω_0^2)] e^-ω/ω_0 . The models in eqs. (<ref>)-(<ref>) depend on the parameters ω_0, λ_E^2, and λ_H^2. Within the exponential model and using EOM relations, it follows that ω_0 = λ_B <cit.>, with λ_B being the inverse moment of the two-particle B-meson distribution amplitude ϕ_+ (ω). The remaining two parameters λ_E^2 and λ_H^2 characterise the local vacuum-to-B-meson quark-gluon-quark matrix element. These inputs must be determined with some non-perturbative techniques, and are currently still quite poorly known. Specifically, for the parameter λ_B, there exist several determinations in the literature, obtained either with QCD sum rules <cit.>, OPE-based methods <cit.>[Very recently, a study of the strange quark mass effects has been preformed in Ref. <cit.>.], or from studies of the B →γℓν̅ decay <cit.>. In our analysis, we use the recent sum rule result from Ref. <cit.> where, for the first time, the complete SU(3)_F breaking effects due to the strange quark mass have been taken into account, hence providing estimates of the parameter λ_B for both the B mesons, i.e. λ_B_d(1 ) = (0.383 ± 0.153) , λ_B_s(1 ) = (0.438 ± 0.150) . As for the parameters λ_E^2 and λ_H^2, in the case of the B_d meson, several studies within the framework of QCD sum rule have been performed <cit.>. The first estimates, obtained in Ref. <cit.>, included only LO-QCD contributions up to dimension-five in the corresponding OPE, yielding respectively λ_E,B_d^2(1 ) = (0.11 ± 0.06) ^2 and λ_H,B_d^2(1 ) = (0.18 ± 0.07) ^2. Later, perturbative QCD corrections to the dimension-five contribution, as well as the LO-QCD dimension-six contributions were taken into account in Ref. <cit.>. These corrections improved the overall stability of the sum rule, leading to the smaller values λ_E, B_d^2 (1 ) = (0.03 ± 0.02) and λ_H, B_d^2 (1 )= (0.06 ± 0.03). 
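Since these model DAs are central to the numerical analysis, their basic properties can be checked explicitly. The sketch below (Python with scipy) verifies the normalisation of ϕ_± and the inverse-moment relation that fixes ω_0 = λ_B, and reproduces the local limits quoted in the footnote above; representative central values of λ_B, λ_E^2 and λ_H^2 are used, and the relations Ψ_A = (Φ_3 + Φ_4)/2, Ψ_V = (Φ_4 - Φ_3)/2 are assumed for the twist decomposition (they are not spelled out explicitly here):

```python
import numpy as np
from scipy.integrate import quad

w0, lE2, lH2 = 0.383, 0.03, 0.12      # lambda_B [GeV], lambda_E^2 and lambda_H^2 [GeV^2]

# Two-particle DAs of the exponential model quoted above.
phi_p = lambda w: w / w0**2 * np.exp(-w / w0)
phi_m = lambda w: (np.exp(-w / w0) / w0
                   - (lE2 - lH2) / (9 * w0**3) * np.exp(-w / w0)
                     * (1 - 2 * w / w0 + 0.5 * (w / w0)**2))

print(quad(phi_p, 0, np.inf)[0])                           # normalisation -> 1
print(quad(phi_m, 0, np.inf)[0])                           # normalisation -> 1
print(quad(lambda w: phi_p(w) / w, 0, np.inf)[0], 1 / w0)  # inverse moment -> 1/lambda_B

# Local limits of the three-particle twist-3/4 DAs. The exponential model factorises
# in omega_1 and omega_2, so the double integrals reduce to products of 1d integrals.
I0 = quad(lambda w: np.exp(-w / w0), 0, np.inf)[0]         # = w0
I1 = quad(lambda w: w * np.exp(-w / w0), 0, np.inf)[0]     # = w0^2
I2 = quad(lambda w: w**2 * np.exp(-w / w0), 0, np.inf)[0]  # = 2 w0^3

Phi3 = (lE2 - lH2) / (6 * w0**5) * I1 * I2                 # integral of phi_3
Phi4 = (lE2 + lH2) / (6 * w0**4) * I0 * I2                 # integral of phi_4
print((Phi3 + Phi4) / 2, lE2 / 3)                          # Psi_A(0,0) = lambda_E^2/3
print((Phi4 - Phi3) / 2, lH2 / 3)                          # Psi_V(0,0) = lambda_H^2/3
```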
Recently, a new study, performed using a different expression for the correlation functions and including dimension-seven contributions, has been carried out in Ref. <cit.>. The authors have obtained the values λ_E, B_d^2 (1 ) = (0.01 ± 0.01) and λ_H,B_d^2 (1 ) = (0.15 ± 0.05), where the former is consistent with the result of Ref. <cit.>, while the latter is considerably above. Therefore, to account for the spread in the two determinations, in our analysis we use the following intervals λ_E,B_d^2 (1 ) = (0.03 ± 0.03) , λ_H, B_d^2 (1 ) = (0.12 ± 0.09) , which cover the results of both Refs. <cit.>. On the other hand, since there are still no estimates of the parameters λ_E,B_s^2 and λ_H,B_s^2 available in the literature, we fix their central values to be the same as the corresponding ones for the B_d meson, adding an extra 20 % uncertainty to account for SU(3)_F breaking effects. This gives λ_E, B_s^2 (1 ) = (0.03 ± 0.04) , λ_H, B_s^2 (1 ) = (0.12 ± 0.11) . Another important ingredient of the computation is the choice of the sum rule parameters. For the threshold continuum s_0^D and the Borel parameter M^2 in the D_(s)-meson channel, we adopt the same intervals as used in the recent QCD sum rule studies of the form factors for the B → D <cit.> and B_s → D_s <cit.> transitions, see also Refs. <cit.>. We thus use respectively s_0^D^+ = (6.8 ± 1.0) , M_D^+^2 = (3 ± 1.5) , s_0^D_s^+ = (9.0 ± 2.1) , M_D_s^+^2 = (3 ± 1.5) , while, for the corresponding sum rule parameters in the π- and K-meson channels, we use the following values <cit.> s_0^π^- = (0.7 ± 0.1) , M_π^-^2 = (1.0 ± 0.5) , s_0^K^- = (1.05 ± 0.10) , M_K^-^2 = (1.0 ± 0.5) . The QCD decay constants are determined with high precision within Latice QCD, and for all the mesons considered we take the corresponding FLAG values <cit.>. As for the HQET decay constant F_B (μ), which enters eq. (<ref>), we use the one-loop relation to the QCD decay constant f_B, valid up to power corrections of the order of 1/m_b <cit.>, namely F_B (μ) = f_B √(m_B)[1 - C_F α_s(μ)/4 π(3 lnm_b/μ -2 )] + … , with C_F = 4/3. In our analysis, the central value of the renormalisation scale in eq. (<ref>) is set to μ =1 GeV, corresponding to the scale at which the inputs λ_B, λ_E^2, and λ_H^2, have been determined. Taking then into account the scale-dependence of the latter parameters <cit.>, the total uncertainty due to μ-variation is obtained varying this scale in the interval 1 ≤μ≤ 1.5. For the strong coupling α_s (μ), we include the five-loop running implemented in the Mathematica package 𝚁𝚞𝚗𝙳𝚎𝚌 <cit.> and use the most recent result <cit.> α_s (M_Z) = 0.1179 ± 0.0009 . For the quark masses, we use the corresponding values in the MS-scheme, i.e. m_b (m_b) = (4.18 ± 0.03) and m_c (m_c) = (1.27 ± 0.02) <cit.>. Values of the meson masses, known very precisely, are also taken from the PDG <cit.>. In order to obtain predictions for the branching fractions, we need in addition to fix the value of the Wilson coefficients, of the CKM matrix elements, and of the B_(s)^0-meson lifetime. For the former, we use the corresponding results at NLO accuracy, see e.g. Ref. <cit.>. The central value of the Wilson coefficients is obtained setting μ_b = m_b, and this scale is then varied in the interval m_b/2 ≤μ_b ≤ 2 m_b. 
We stress that the choice of using NLO results, despite the LO accuracy of the corresponding matrix elements, is motivated by the fact that there is a sizeable shift of ∼ -40 % in the value of C_2, when going from LO to NLO [The shift from NLO to NNLO can be instead neglected, given the current accuracy of our study.], which strongly affects the prediction of the non-factorisable part of the amplitude. The computation of the missing perturbative QCD corrections to the matrix elements would be clearly of utmost importance in order to assess the total size of NLO effects. For the CKM matrix elements, we use the best-fit values, obtained from a global fit, provided by the PDG <cit.>, i.e. |V_ud| = 0.97435 ± 0.00016, |V_us| = 0.22500 ± 0.00067, |V_cb| = 0.04182^+0.00085_-0.00074 . Finally, B_(s)^0-meson lifetimes are by now measured very precisely and their values are taken from Ref. <cit.>.[B_(s)^0-meson lifetimes can also be computed using the framework of the Heavy Quark Expansion. However, the current theoretical uncertainties are still much larger than the corresponding experimental ones <cit.>. ] For convenience, all the inputs used in our analysis are collected in Table <ref>. §.§ Results In this section we present our predictions, obtained within the framework of LCSR and at LO-QCD, of the factorisable and non-factorisable matrix elements for the non-leptonic decays B̅_s^0 → D_s^+ π^- and B̅^0 → D^+ K^-, as well as of the corresponding branching fractions. Let us start by discussing the predictions for the non-factorisable matrix element ⟨ O^q_2⟩, which represents the main result of the paper. The final sum rule is given in eq. (<ref>), and in order to illustrate the main sources of uncertainty, in Figure <ref> we show the dependence of i ⟨ O_2^d ⟩ on different inputs, fixing in each plot the remaining parameters to their central values. For easier comparison, all plots are displayed in the same interval, namely i ⟨ O_2^d ⟩∈ [0,0.50] GeV^3, and for brevity, we only show the mode B̅_s^0 → D_s^+ π^-, since the behaviour of the corresponding matrix element in the case of the B̅^0 → D^+ K^- decay is completely analogous. We find that the sum rule prediction for the non-factorisable matrix element is extremely sensitive to the value of the parameter λ_H^2 which, on the other hand, as discussed in the previous section, is still poorly known. The result is also quite sensitive to the size of λ_B, while the dependence on λ_E^2 appears softer. Clearly, a more precise determination of these non-perturbative inputs is essential in order to improve the accuracy of the present analysis. The sensitivity to the value of the threshold continuum s_0^D, and of the Borel parameters M_D^2, and M^2_π, is found to be quite mild, thus reflecting the overall stability of the sum rule. The dependence on the charm quark mass and on the renormalisation scale μ is also very moderate. The partial contribution to i ⟨ O_2^d ⟩, for each of the eight LCDAs entering the parametrisation of the three-particle B-meson matrix element in eq. (<ref>), is shown in the third column of Table <ref>, in correspondence of the central values of all the input parameters. We find that the function ψ_V gives the dominant contribution to the non-factorisable matrix element, while the remaining LCDAs lead all together to a small effect. 
As stated in the previous section, in our analysis we use the results for the LCDAs up to twist-six accuracy; however, for comparison, in the last column of Table <ref>, we also provide the corresponding partial contributions to i ⟨ O^d_2 ⟩ obtained neglecting the twist-5 and twist-6 LCDAs in eqs. (<ref>)-(<ref>). In this case, there is a strong cancellation between the coefficients of ψ_V and ψ_Ỹ_A, leading to a much smaller value for ⟨ O_2^d ⟩. Again, a similar picture is found in the case of ⟨ O_1^s ⟩ and we thus refrain from showing the corresponding results. Varying the input parameters within their intervals and combining all the corresponding uncertainties in quadrature, we obtain the following results for the matrix element i ⟨ O^q_2 ⟩, for both the modes considered, namely i ⟨ O_2^d ⟩ = (0.24_-0.22^+0.22) ^3 , B̅_s^0 → D_s^+ π^- , i ⟨ O_2^s ⟩ = (0.24_-0.19^+0.21) ^3 , B̅^0 → D^+ K^- , where, as already discussed, the total uncertainties are strongly dominated by the limited accuracy of the parameter λ_H^2. We can now turn to discuss our results for the factorisable matrix element ⟨ O_1^q ⟩. The final sum rule is given in eq. (<ref>) and includes the contribution of both the two- and three-particle LCDAs. The relative size of each of the LCDAs contributions is shown in the second column of Table <ref>. We find, as expected, that the dominant effect is due to ϕ_±, with the twist-4 and twist-5 LCDAs g_± yielding a smaller contribution. On the other hand, the three-particle LCDAs appear to be strongly suppressed, in consistency with what found e.g. in the LCSR study of the B → D form factors <cit.>. As for the uncertainty budget, the LCSR prediction is extremely sensitive to the value of the non-perturbative parameter λ_B, while the dependence on the sum rule inputs i.e. the threshold continuum and the Borel parameters is found to be mild, and that on the parameters λ_E^2 and λ_H^2 very small. Furthermore, also in this case, the uncertainty due to μ-variation is moderate. Varying all the input parameters within their intervals and again adding all individual uncertainties in quadrature, we obtain the following estimates of the factorisable matrix element i ⟨ O_1^q ⟩, for both the modes considered, i.e. i ⟨ O_1^d ⟩ = - (1.51^+0.66_-0.61) ^3 , B̅_s^0 → D_s^+ π^- , i ⟨ O_1^s ⟩ = - (2.03^+1.00_-0.75) ^3 , B̅^0 → D^+ K^- . Note that the above values are consistent with the QCDF results <cit.>, however the uncertainties are significantly larger. Before discussing our predictions for the branching fractions, two more remarks with respect to the error budget are in order. First, we emphasise that using other models for the LCDAs, like the local duality model, see Ref. <cit.> for a detailed discussion, and Ref. <cit.> for new parametrisations of the twist-5 and twist-6 LCDAs, does not lead to any significant difference, within the quoted uncertainties, in the values for both the factorisable and non-factorisable matrix elements. Moreover, additional sources of uncertainties, like missing 1/m_b corrections to the expression in eq. (<ref>), are also expected to be effectively covered by our large error ranges. Combining the above results with the corresponding Wilson coefficients, our estimates for the ratio of the non-factorisable over the factorisable parts of the amplitude for the B̅^0_s → D_s^+ π^- and B̅^0 → D^+ K^- decays read, respectively C_2 ⟨ O_2^d ⟩/C_1 ⟨ O_1^d ⟩ = 0.051^+0.059_-0.052 , B̅_s^0 → D_s^+ π^- , C_2 ⟨ O_2^s ⟩/C_1 ⟨ O_1^s ⟩ = 0.039^+0.042_-0.034 , B̅^0 → D^+ K^- . 
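As a consistency check of the quoted central values, both the ratios above and the branching fractions given in the next paragraph can be reconstructed directly from i⟨O_1,2^d⟩. The sketch below (Python) assumes the NLO Wilson coefficients C_1 = 1.01, C_2 = -0.32 cited earlier, the standard two-body decay-width formula with the amplitude normalisation of the introduction, and PDG-like masses and B_s lifetime; the latter inputs are not quoted in the text and are assumptions of this illustration:

```python
from math import pi, sqrt

GF, Vcb, Vud = 1.1664e-5, 0.04182, 0.97435   # G_F in GeV^-2; CKM values as quoted in the text
C1, C2 = 1.01, -0.32                         # assumed NLO Wilson coefficients at mu_b = m_b
iO1, iO2 = -1.51, 0.24                       # i<O_1^d>, i<O_2^d> central values in GeV^3

print(C2 * iO2 / (C1 * iO1))                 # ~0.050, cf. the quoted 0.051

# Bs -> Ds pi: |A| = (G_F/sqrt(2)) |Vcb Vud| |C1<O1> + C2<O2>|, then the standard
# two-body width Gamma = |p| |A|^2 / (8 pi m_B^2) and Br = Gamma * tau_Bs.
mB, mD, mpi = 5.367, 1.968, 0.140            # GeV (assumed PDG-like masses)
tau_Bs = 1.52e-12 / 6.582e-25                # assumed lifetime ~1.52 ps, converted to GeV^-1

amp = GF / sqrt(2) * Vcb * Vud * abs(C1 * iO1 + C2 * iO2)
p_cm = sqrt((mB**2 - (mD + mpi)**2) * (mB**2 - (mD - mpi)**2)) / (2 * mB)
Br = p_cm * amp**2 / (8 * pi * mB**2) * tau_Bs
print(Br)                                    # ~2.1e-3, cf. the central value quoted below
```

The agreement at the percent level simply reflects that the quoted branching fractions follow from these central values; the exercise mainly makes the normalisation of the amplitude and the size of the non-factorisable shift explicit.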
The non-factorisable matrix element ⟨ O_2^q⟩ is thus found to lead to a sizeable positive effect, of the order of few percent, to the total amplitude for both the non-leptonic decays considered. This is in perfect agreement with the first estimates of Ref. <cit.>, however in contrast with the results of Ref. <cit.>. On the other hand, the uncertainties in eqs. (<ref>), (<ref>) appear still very large, and are of the order of 100 %. It is worth pointing out that, despite computing the ratio of the two matrix elements within the same theoretical framework, we only obtain a minor reduction of the total uncertainty from the simultaneous variation of the common inputs. This follows from the large sensitivity of ⟨ O_2^q ⟩ and ⟨ O_1^q ⟩ on different non-pertubative parameters. That is, as already stressed, λ_H^2 for the former matrix element, and λ_B for the latter. Furthermore, due to the stronger scale dependence of the Wilson coefficient C_2 compared to that of C_1, also the ratio C_2/C_1 does not provide a significant reduction of the total uncertainty. We note in particular that, because of the additional variation of the scale μ_b, the relative uncertainty in the non-factorisable part of the amplitude becomes even larger. The results in eqs. (<ref>)-(<ref>) lead to the following predictions for the branching fractions Br (B̅^0_s → D^+_s π^- ) = (2.15^+2.14_-1.35) × 10^-3, Br (B̅^0 → D^+ K^-) = (2.04^+2.39_-1.20) × 10^-4, in agreement with the experimental data shown in eqs. (<ref>), (<ref>), and also consistent with the QCDF results in eqs. (<ref>), (<ref>), although again within very large uncertainties. On the other hand, our central values are considerably lower than the latter. Finally, note that naively combining our results in eqs. (<ref>), (<ref>), with the QCDF prediction of the leading power amplitude for the corresponding decays, actually leads to a reduction of the observed tension with the data, despite the positive shift, due to the increased size of the uncertainties. However, we would like to emphasise that one should be careful when combining LCSR and QCDF results because of different assumptions adopted, e.g. the different treatment of the charm quark. § CONCLUSION AND OUTLOOK In this work we have presented new determinations, obtained within the framework of LCSR, of the non-factorisable contributions to the amplitude of the non-leptonic decays B̅_s^0 → D_s^+ π^- and B̅^0 → D^+ K^-, due to soft gluon emission. The computation is based on the derivation of a LC-local OPE for a suitable three-point correlation function, and on the use of B-meson LCDAs. Our analysis, in particular, has raised several questions that have been overlooked in many previous similar studies, and that require further clarifications. First, the fact that performing a double LC expansion of the correlation function seems to actually lead to a result which is not transversal. Second, in this case, the dominant contribution to the correlator originates from generalised non-local three-particle B-meson matrix elements with non-aligned fields, which are still unknown in the literature for arbitrary Dirac structures. First steps in this direction have been taken in Ref. <cit.>. Third, we have found that truncating the expansion of the three-particle B-meson LCDAs at twist-4, i.e. neglecting the twist-5 and twist-6 DAs, seems to contradict the local limit of the corresponding non-local matrix element, and in addition lead to pronounced cancellations. 
In our work, the first two points have been circumvented by employing a LC-local OPE, which, albeit less accurate, has allowed us to consistently compute the correlator in terms of known hadronic input functions. As for the third point, we have included the contribution of the twist-5 and twist-6 LCDAs in our analysis, thus ensuring the correct local limit for the three-particle matrix element, and also the lifting of the apparent cancellations. However, in light of the above findings, further investigations are certainly needed in order to improve the current understanding of these decays, as well as shed more light on the size of non-local hadronic effects in rare semileptonic B-meson decays. Another important result of the paper is the computation of the factorisable matrix elements for the decays B̅_s^0 → D_s^+ π^- and B̅^0 → D^+ K^-, at LO-QCD accuracy, within LCSR, which represents the first determination using this framework. In this respect, it is important to stress that, despite so far the limited precision compared to QCDF at leading power, LCSR provides a well established method for the computation of the whole amplitude, including next-to-leading power effects, entirely within the same framework. Our predictions, shown in eqs. (<ref>)-(<ref>), indicate that the non-factorisable matrix element leads to a sizeable and positive contribution, of the order of few percent, to the amplitude for the decays B̅_s^0 → D_s^+ π^- and B̅^0 → D^+ K^-, in consistency with the first estimates by Blok and Shifman <cit.>, but in contrast with the findings of Ref. <cit.>. On the other hand, we emphasise that the total uncertainties are also found to be very large, mainly due to the limited accuracy of many non-perturbative inputs, particularly those entering the parametrisation of the two- and three-particle B-meson LCDAs, i.e. λ_B, and λ_H^2. Finally, combining our results for the factorisable and non-factorisable matrix elements, we have also obtained new estimates for the branching fractions of the B̅_s^0 → D_s^+ π^- and B̅^0 → D^+ K^- decays, shown in eqs. (<ref>), (<ref>), respectively. Our predictions appear to be in good agreement with the corresponding experimental data, however, given the very large uncertainties, and the LO accuracy of the current analysis, we refrain from drawing any conclusion on the status of these observables, in light of the discrepancies found in Ref. <cit.>. We consider instead to be more justified to conclude with a comprehensive outlook for future studies and investigations. Specifically, in order to improve the present analysis, one would require: ♢ More accurate determination of the parameter λ_B, for both the B̅^0_d and B̅^0_s mesons, either by improving the current QCD sum rule analyses <cit.> or by performing first Lattice QCD investigations. Alternatively, stronger constraints on the size of these inputs could also be derived by extending the OPE-based studies of Refs. <cit.>, or from the anticipated data by the Belle II collaboration on B →γℓν decays <cit.>. ♢ Improved determination of the parameters λ_E^2 and λ_H^2 either within QCD sum rules or Lattice QCD, as well as the computation of the corresponding SU(3)_F-breaking effects which are, so far, still missing in the literature. ♢ Study of the generalised three-particle B-meson non-local matrix elements, with the light spectator quark and the gluon aligned on different light-cone directions. As already stressed in Ref. 
<cit.>, the knowledge of these novel soft functions would also be crucial in order to improve the current analyses of the non-local soft-gluon contributions in rare semileptonic B-meson decays, and thus to shed more light on the apparent tensions in b → s ℓ^+ ℓ^- transitions. ♢ Further studies of higher-twist effects in the three-particle B-meson matrix elements, and of the corresponding LCDAs. The investigation of alternative models for the DAs would be important to reduce the corresponding model dependent uncertainty. ♢ Computation of NLO-QCD corrections in the OPE for both the factorisable and non-factorisable matrix elements, within LCSR. ♢ Alternative estimate of the factorisable and non-factorisable matrix elements using the LCSR framework with the light meson, i.e. the π- and K, LCDAs. This would in fact provide an important cross-check of our study, and allow one to circumvent the current challenges associated with the B-meson LCDAs. § ACKNOWLEDGEMENTS We are really grateful to Alexander Khodjamirian for many insightful discussions and valuable comments. We also thank Nico Gubernari, Thorsten Feldmann, Alexander Lenz, and Danny van Dyk, for helpful discussions. In addition, we would like to thank Alexander Khodjamirian, and Alexander Lenz, for their constant support throughout this work and for carefully proofreading the manuscript. We also acknowledge the TP1 group in Siegen for the useful feedback provided during the journal club, where this work has been informally presented. MLP wishes to thank Thorsten Feldmann, Peter Stangl, Javier Virto, and Roman Zwicky, for interesting discussions. The work of MLP was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 500314741. § CONVENTIONS AND DEFINITIONS In this appendix, we collect the main conventions and definitions adopted throughout the paper. For the Levi-Civita tensor we use ϵ^0123 = + 1 which, together with γ_5 = i γ^0 γ^1 γ^2 γ^3, leads to Tr[γ^μγ^νγ^ργ^σγ_5] = - 4 i ϵ^μνρσ . The SU(3)_c generators in the fundamental representation t^a_ij satisfy the completeness relation t^a_ijt^a_lm = 1/2(δ_imδ_jl - 1/N_cδ_ijδ_lm) , and are normalised as Tr[t^a t^b] = (1/2) δ^ab. The gluon field strength tensor is defined as G_μν = i [D_μ, D_ν], with the covariant derivative given by D_μ = ∂_μ - i A_μ(x). Note that the strong coupling g_s is absorbed in the definition of the gluon field A_μ(x) = A_μ^a(x) t^a. The matrix element of the axial and axial-vector currents j_5^D(x) = i m_c q̅γ_5 c, and j_μ^L(x) = u̅γ_μγ_5 q, with q = {d,s}, between the D- and the L-meson and the vacuum, are respectively defined as ⟨ 0 |j_5^D(x) | D (p) ⟩ = m_D^2 f_D e^- i p · x , ⟨ 0 |j_μ^L(x) | L (p) ⟩ = i f_L p_μ e^- i p· x , where f_D and f_L are the corresponding meson decay constants. Moreover, for the matrix element of the vector current j_μ(x) = c̅γ_μ b between a B- and a D-meson, we use the following parametrisation ⟨ D (p) | j_μ| B (p+q) ⟩ = f_+^BD (q^2) [2 p^μ + (1 - m_B^2 - m_D^2/q^2) q^μ] + f_0^BD (q^2) m_B^2 - m_D^2/q^2 q^μ , with f_+^BD and f_0^BD being, respectively, the vector and scalar form factors for the B → D transition. In order to write down the final sum rules, the following results for the imaginary part of the functions entering the OPE are used. From lim_ε→ 0^+ 1/(x ± i ε) = P(1/x) ∓ i πδ(x), we obtain Im1/(x - i ε)^n = π(-1)^n-1/(n-1)! δ^(n-1)( x ) , n ≥ 1 , where δ^(n-1)(x) denotes the (n-1)-derivative of the delta function with respect to its argument. 
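The conventions above are easy to check numerically; for instance, the following standalone snippet (not part of the original appendix) verifies the normalisation and the completeness relation of the SU(3)_c generators.

import numpy as np

s3 = 1 / np.sqrt(3)
lam = np.array([  # Gell-Mann matrices; the generators are t^a = lambda^a / 2
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
], dtype=complex)
t, Nc, d = lam / 2, 3, np.eye(3)

assert np.allclose(np.einsum('aij,bji->ab', t, t), np.eye(8) / 2)   # Tr[t^a t^b] = delta^{ab}/2
lhs = np.einsum('aij,alm->ijlm', t, t)                              # t^a_{ij} t^a_{lm}
rhs = 0.5 * (np.einsum('im,jl->ijlm', d, d) - np.einsum('ij,lm->ijlm', d, d) / Nc)
assert np.allclose(lhs, rhs)                                        # completeness relation
print("SU(3)_c normalisation and completeness relation verified")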
Furthermore, the analytic continuation of the logarithm function at negative values of the argument is defined as ln(- x) =ln|x| - i πθ(x) , so that the logarithms in eqs. (<ref>), (<ref>), develop an imaginary part for q^2>0 equal to - π. § B-MESON LCDAS The non-local matrix element corresponding to both the light spectator quark and the gluon in the B-meson aligned along the same light-cone direction n^μ [The notation follows Ref. <cit.> so, when comparing with section <ref>, it is n_+^μ≡ n^μ and n_-^μ≡n̅^μ.] in the rest frame of the heavy B-meson v^μ = (n^μ + n̅^μ)/2 = (1,0⃗ ), can be parameterised in terms of eight three-particles LCDAs as <cit.> [In the definition of the matrix element, the gauge link [x,y] = P exp{ i ∫_0^1 du (x-y)_μ A^μ(u x + u̅ y)}, with u̅ = 1 -u, is always implicitly assumed. We note however that in the Fock-Schwinger gauge, which we use in our computation, this factor equals to unity.] ⟨ 0 | q̅_α (n z_1) G_μν (n z_2) h_v,β (0) | B̅ (v) ⟩ = 1/2 F_B(μ) { P_+ [(v_μγ_ν - v_νγ_μ)(Ψ_A - Ψ_V) - i σ_μνΨ_V - (n_μ v_ν - n_ν v_μ) Ψ_X_A + (n_μγ_ν - n_νγ_μ) (Ψ_W + Ψ_Y_A) + i ϵ_μνητ n^η v^τγ_5 Ψ_X̃_A - i ϵ_μνητ n^ηγ^τ ×γ_5 Ψ_Ỹ_A - (n_μ v_ν - n_ν v_μ) n Ψ_W + (n_μγ_ν - n_νγ_μ) n Ψ_Z ] γ_5 }_βα(z_1, z_2; μ) , where α, β, denote spinor indices, h_v(x) = e^i m_b v · x b(x) + O(1/m_b) is the HQET field, see e.g. the review <cit.>, and the sign difference with respect to Ref. <cit.> in the coefficients of the Levi-Civita tensor follows from using the opposite convention for ϵ_μνρσ, cf. appendix <ref>. Moreover, we point out the change of notation for some of the DAs as compared to Ref. <cit.>, i.e. X_A →Ψ_X_A etc. In eq. (<ref>), the two parameters z_1, z_2, specify, respectively, the position of the light quark and of the gluon field on the light-cone vector n^μ. Performing a Fourier transform, each LCDA can be expressed in terms of the corresponding momentum space distributions as Ψ(z_1, z_2) = ∫_0^∞ d ω_1 ∫_0^∞ d ω_2 e^-i ω_1 z_1 -i ω_2 z_2 ψ(ω_1, ω_2) , Ψ = {Ψ_V, Ψ_A, …}. Note that following Ref. <cit.>, we also adopt the convention that DAs in coordinate space are written in upper case, whereas the lower case is used for the momentum space representations. In order to reorganise eq. (<ref>) in terms of its twist expansion rather than its Lorentz decomposition, in Ref. <cit.>, the DAs appearing in eq. (<ref>) have been recast in terms of LCDAs of definite collinear twist. Specifically <cit.> Ψ_A(z_1,z_2) = Φ_3 + Φ_4/2 , Ψ_V(z_1,z_2) = Φ_4 - Φ_3/2 , Ψ_X_A (z_1,z_2) = -Φ_3 - Φ_4 + 2 Ψ_4/2 , Ψ_X̃_A(z_1,z_2) = -Φ_3 + Φ_4 - 2 Ψ̃_4/2 , Ψ_Y_A (z_1,z_2) = -Φ_3 - Φ_4 + Ψ_4- Ψ_5/2 , Ψ_Ỹ_A(z_1,z_2) = -Φ_3 + Φ_4 - Ψ̃_4 + Ψ̃_5/2 , Ψ_W (z_1,z_2) = Φ_4 - Ψ_4 - Ψ̃_4 + Φ̃_5 + Ψ_5 + Ψ̃_5/2 , Ψ_Z (z_1,z_2) = -Φ_3 + Φ_4- 2 Ψ̃_4 + Φ̃_5 + 2 Ψ̃_5 - Φ_6/4 , where Φ_3, and Φ_4, Ψ_4, Ψ̃_4, are LCDAs of twist-3 and twist-4, whereas Φ̃_5, Ψ_5, Ψ̃_5, and Φ_6, are of twist-5 and twist-6, respectively. At leading order in HQET, the three-particle matrix elements required for the computation of both the factorisable and non-factorisable amplitudes can be then simply derived from eqs. (<ref>). In doing so, one obtains additional factors of the type (v · x)^-1 and (v · x)^-2, which can be simplified by introducing respectively the replacements ψ(ω_1, ω_2) → i (v · x) ψ̅(ω_1, ω_2) , ψ(ω_1, ω_2) → - (v · x)^2 ψ̅̅̅(ω_1, ω_2) , with ψ̅(ω_1, ω_2) ≡∫_0^ω_1 d η ψ(η, ω_2) , ψ̅̅̅(ω_1, ω_2) ≡∫_0^ω_1 d η∫_0^η d η^' ψ(η^', ω_2) . The results in eq. 
(<ref>), (<ref>), can be easily derived by using the identities ∫ d^4 x f(x) ∫_0^∞ d ω_1 d/d ω_1[ e^- i ω_1 v · x∫_0^ω_1 d η ψ(η, ω_2) ] =0 , ∫ d^4 x f(x) ∫_0^∞ d ω_1 d^2/d ω_1^2[ e^- i ω_1 v · x∫_0^ω_1 d η∫_0^η d η^' ψ(η^', ω_2) ] =0 , where f(x) absorbs the remaining x-dependence in the correlation function. Note that eqs. (<ref>), (<ref>), follow from the fact that the boundary terms are always zero. Specifically, they vanish when ω_1 → 0 due to the integration over η, and also when ω_1 →∞, because of the exponential suppression of the integral over x^μ, in accordance with the Riemann–Lebesgue theorem. In our computation of the non-factorisable amplitude, we actually need the corresponding three-particle matrix element with the gluon field fixed at the origin, cf. eq. (<ref>). Setting z_2 = 0 in eq. (<ref>), the integral over ω_2 in eq. (<ref>) can be readily performed. To this end, we introduce the compact notation ψ̂(ω_1) ≡∫_0^∞ d ω_2 ψ(ω_1,ω_2) . Finally, a new parametrisation of the two-particle B-meson matrix element including higher-twist DAs, at leading order in HQET, has been obtained in Ref. <cit.> ⟨ 0| q̅_α (x) h_v, β(0)|B̅(v) ⟩ = - i/2 F_B(μ) √(m_B)∫_0^∞ d ω e^- i ω v · x{(ϕ_+ + x^2 g_+) P_+ γ_5 - 1/2 (v · x)[(ϕ_+ - ϕ_-) + x^2 ( g_+ - g_-)] P_+ x γ_5 }_βα(ω ; μ) , where ϕ_+, ϕ_-, are LCDAs of twist-2 and twist-3, whereas g_+, g_-, are of twist-4 and twist-5, respectively. The expression in eq. (<ref>) immediately follows from eq. (<ref>), taking into account that ϕ(ω) → i (v · x) ϕ̅(ω) , with ϕ̅(ω) ≡∫_0^ω d η ϕ(η) , with ϕ= {ϕ_+, g_+, …}. § COMPUTATION OF THE LOOP INTEGRAL IN COORDINATE SPACE The one-loop integral in eq. (<ref>), can be also explicitly computed in coordinate space, since in the limit of massless quark, the expression of the corresponding propagator simplifies and does not contain Bessel functions. In fact, in dimensional regularisation, with d = 4 - 2 ϵ, the local expansion of a massless quark propagator, up to leading one-gluon contributions, reads <cit.> S^(q)_ij (x,y) = Γ (d/2)/2 π^2 x - y/[ -(x-y)^2]^d/2 δ_ij + Γ (d/2-1)/32 π^2( x - y) σ^μν + σ^μν ( x - y)/[ -(x-y)^2]^d/2-1 G_μν^a t_ij^a + … , where q ={u, d, s}, Γ(z) is the gamma function, and the ellipses denote higher order corrections with at least one covariant derivative of the gluon field strength tensor. Substituting eq. (<ref>) into eq. (<ref>), the integration over y^μ can be easily performed by taking into account the following results <cit.> ∫ d^d y e^i q · y1/(-y^2)^a = -i 2^d-2aπ^d/2 Γ(d/2 - a)/Γ(a) (-q^2)^a- d/2 , ∫ d^d y e^i q · yy^μ y^ν/(-y^2)^a = i 2^d-2 a + 1π^d/2 Γ(d/2 - a)/Γ(a) (a - d/2) (-q^2)^a - d/2 - 2 ×[ (a - d/2 - 1 ) 2 q^μ q^ν + q^2 g^μν] , where a = d - 1, and eq. (<ref>) has been obtained by differentiating twice both sides of eq. (<ref>) with respect to q^μ. Performing the calculation in NDR, the divergent 1/ϵ contributions cancel when considering the gluon emission from both the light-quark propagators, leading to the finite result shown in eq. (<ref>). Finally, the computation of the loop integral in eq. (<ref>) proceeds in an analogous way, although now only the first line of eq. (<ref>) contributes. § INVERSE FOURIER TRANSFORM OF BESSEL FUNCTIONS In this appendix, we list the results for the tensor integrals introduced in eqs. (<ref>), (<ref>), (<ref>). 
Starting with those of lowest rank ∫ d^4 x e^i p · x K_0 (m √(-x^2)) = - 8 π^2 i 1/(p^2 - m^2 + i ε)^2 , ∫ d^4 x e^i p · x K_1(m √(-x^2))/√(-x^2) = 4 π^2 i/m1/ p^2 - m^2 + i ε , ∫ d^4 x e^i p · x K_2(m √(-x^2))/x^2 x^μ = - 4 π^2/m^2 p^μ/ p^2 - m^2 + i ε , the remaining tensor integrals can be obtained by differentiating multiple time eqs. (<ref>)-(<ref>) with respect to the four-momentum p^μ. This gives ∫ d^4 x e^i p · x K_0 (m √(-x^2)) x^μ = 32 π^2 p^μ/(p^2 - m^2 + i ε )^3 , ∫ d^4 x e^i p · x K_0 (m √(-x^2)) x^μ x^ν = 32 π^2 i [6 p^μ p^ν - (p^2 - m^2) g^μν/(p^2 - m^2 + i ε )^4] , ∫ d^4 x e^i p · x K_1(m √(-x^2))/√(-x^2) x^μ = - 8 π^2/mp^μ/(p^2 - m^2 + i ε )^2 , ∫ d^4 x e^i p · x K_1(m √(-x^2))/√(-x^2) x^μ x^ν = 8 π^2 i/m[ ( p^2 - m^2) g^μν - 4 p^μ p^ν/( p^2 - m^2 + i ε)^3] , ∫ d^4 x e^i p · x K_1(m √(-x^2))/√(-x^2) x^2 x^μ = 192 π^2 m p^μ/(p^2 - m^2 + i ε)^4 , ∫ d^4 x e^i p · x K_2(m √(-x^2))/x^2 x^μ x^ν = 4 π^2 i/m^2[ ( p^2 - m^2) g^μν - 2 p^μ p^ν/( p^2 - m^2 + i ε)^2] , ∫ d^4 x e^i p · x K_2(m √(-x^2))/x^2 x^μ x^ν x^ρ = - 8 π^2/m^2[ (p^2 - m^2) g^{μν p^ρ} - 4 p^μ p^ν p^ρ/(p^2 - m^2 + i ε)^3] , ∫ d^4 x e^i p · x K_2(m √(-x^2)) x^μ x^ν = - 16 π^2 i/m^2[ (p^2 - 4 m^2) (4 p^μ p^ν - g^μν p^2) - 3 m^4 g^μν/(p^2 - m^2 + i ε)^4] , where the curly brackets in eq. (<ref>) denote the symmetrisation of the tensor g^μν p^ρ with respect to the three Lorentz indices. § RESULTS FOR THE OPE COEFFICIENTS The c_n^ψ̂ (ω_1, q^2) in eq. (<ref>) read respectively [Here and in the rest of the section, we only show the non-vanishing coefficients.] c_1^ψ̂_V(ω_1, q^2) = - 1/16 π ^2 (m_B - ω_1)^3(3 m_B^2 + 4 m_B m_c - 6 m_B ω_1 + m_c^2 - q^2 - 4 m_c ω_1 + 3 ω_1^2 ) (m_B^3 - 2 m_B^2 ω_1 - m_B (m_c^2 + q^2 - ω_1^2) + 2 q^2 ω_1 ) , c_1^ψ̂_A(ω_1, q^2) = - 1/16 π ^2 (m_B -ω_1)^3(m_B^5 - 4 m_B^4 ω_1 - 2 m_B^3 (m_c^2 - 3 ω_1^2) + 2 m_B^2 ω_1 (2 m_c^2 + q^2 - 2 ω_1^2) + m_B (m_c^4 - 2 m_c^2 ω_1^2 - q^4 - 4 q^2 ω_1^2 + ω_1^4) + 2 q^2 ω_1 (-m_c^2 + q^2 +ω_1^2) ) , c_1^ψ̅̂̅_Y_A(ω_1, q^2) = 1/4 π ^2 (m_B-ω_1)^2(m_B^3 - 2 m_B^2 ω_1 - m_B (m_c^2+q^2-ω_1^2) +3 q^2 ω_1 ) , c_2^ψ̅̂̅_Y_A(ω_1, q^2) = - m_B/8 π ^2 (m_B-ω_1)^3(m_B^2 - 2 m_B ω_1 - m_c^2 - q^2 + ω_1^2 ) ×(m_B^3 - 2 m_B^2 ω_1 - m_B (m_c^2 + q^2 - ω_1^2) + 2 q^2 ω_1 ) , c^ψ̅̂̅_X̃_A_1(ω_1, q^2) = - m_B + m_c - ω_1/4 π ^2 (m_B-ω_1)^3(m_B^2 m_c - m_B m_c (m_c + ω_1) + q^2 ω_1 ) , c^ψ̅̂̅_X̃_A_2(ω_1, q^2) = m_B (m_B+m_c-ω_1)/8 π^2 (m_B-ω _1)^4(m_B^2 - 2 m_B (m_c + ω_1) + m_c^2 - q^2 + 2 m_c ω_1 + ω_1^2) (m_B^3 - 2 m_B^2 ω_1 - m_B (m_c^2 + q^2 - ω_1^2) + 2 q^2 ω_1) , c^ψ̅̂̅_Ỹ_A_1(ω_1, q^2) = 1/4 π ^2 (m_B-ω _1)^2(m_B^3 + 2 m_B^2 (m_c-ω_1) + m_B (m_c^2 - 2 m_c ω_1 - q^2 + ω_1^2) + q^2 ω_1 ) , c^ψ̅̂̅_Ỹ_A_2(ω_1, q^2) = m_B/8 π ^2 (m_B-ω _1)^3(m_B^2 + 4 m_B m_c - 2 m_B ω_1 + 3 m_c^2 - q^2 - 4 m_c ω_1 + ω_1^2 ) (m_B^3 - 2 m_B^2 ω_1 - m_B (m_c^2 + q^2 - ω_1^2) + 2 q^2 ω_1 ) , c_1^ψ̅̂̅_W(ω_1, q^2) = 1/4 π ^2 (m_B - ω_1)^2(m_B^3 - 2 m_B^2 ω_1 - m_B (m_c^2 + q^2 -ω_1^2) + 3 q^2 ω_1 ) , c_2^ψ̅̂̅_W(ω_1, q^2) = - m_B/8 π ^2 (m_B-ω_1)^3(m_B^2 - 2 m_B ω_1 - m_c^2 - q^2 + ω_1^2 ) ×(m_B^3 - 2 m_B^2 ω_1 - m_B (m_c^2 + q^2 - ω_1^2) + 2 q^2 ω_1 ) , c^ψ̅̅̅̂̅̅̅_Z_1(ω_1, q^2) = m_B m_c/2 π ^2 (m_B-ω_1)^2 , c^ψ̅̅̅̂̅̅̅_Z_2(ω_1, q^2) =m_B m_c q^2 ω_1/π ^2 (m_B - ω_1)^3 , c^ψ̅̅̅̂̅̅̅_Z_3(ω_1, q^2) = - m_B^2 m_c/2 π ^2 (m_B-ω_1)^4(m_B^2 - 2 m_B ω_1 - m_c^2 - q^2 + ω_1^2 ) ×(m_B^3 - 2 m_B^2 ω_1 - m_B (m_c^2 + q^2 - ω_1^2) + 2 q^2 ω_1 ) . *** The coefficients c_n^ϕ(ω, q^2) in eq. 
(<ref>) read respectively c_1^ϕ_+(ω, q^2) = m_B+m_c-ω/8 π ^2 (m_B-ω)^2(m_B^3 -2 ω m_B^2 - m_B (m_c^2+q^2-ω ^2)+2q^2 ω) , c_1^ϕ̅_+(ω, q^2) = - 1/8 π ^2 (m_B-ω)^2(m_B^3 + m_B^2 (m_c -2 ω ) - m_B (ω (m_c -ω) + q^2 ) + 2 q^2 ω) , c_2^ϕ̅_+(ω, q^2) = - m_B m_c (m_B+m_c-ω)/8 π ^2 (m_B-ω)^3(m_B^3 -2 ω m_B^2 - m_B (m_c^2+q^2-ω ^2) + 2 q^2 ω) c_1^ϕ̅_-(ω, q^2) =1/8 π ^2 (m_B-ω)^2(m_B^3 + m_B^2 (m_c-2 ω) - m_B (ω (m_c-ω) + q^2 )+2 q^2 ω) , c_2^ϕ̅_-(ω, q^2) = m_B m_c (m_B+m_c-ω)/8 π ^2 (m_B-ω)^3(m_B^3 - 2 ω m_B^2 -m_B (m_c^2+q^2-ω ^2)+2 q^2 ω) , c_1^g_+(ω, q^2) = -m_B/2 π ^2 (m_B-ω) , c_2^g_+(ω, q^2) = -m_B/2 π ^2 (m_B-ω)^3(m_B^4 - 3 ω m_B^3 + m_B^2 (m_c^2-q^2+3ω ^2) + m_B (-ω m_c^2+2 m_c^3+3 q^2 ω -ω ^3) -2 q^2 ω ^2 ) , c_3^g_+(ω, q^2) = -m_B^2 m_c^2 (m_B + m_c - ω)/π ^2 (m_B-ω)^4(m_B^3 - 2 ω m_B^2 - m_B (m_c^2 + q^2 - ω^2) + 2 q^2 ω) , c_3^g̅_+(ω, q^2) =3 m_B^3 m_c^3 (m_B+m_c-ω)/π ^2 (m_B-ω)^4 , c_4^g̅_+(ω, q^2) =3 m_B^3 m_c^3 (m_B+m_c-ω)/π ^2 (m_B-ω)^5(m_B^3 -2 ω m_B^2 -m_B (m_c^2+q^2-ω ^2)+2 q^2 ω) , c_3^g̅_-(ω, q^2) = -3 m_B^3 m_c^3 (m_B+m_c-ω)/π ^2 (m_B-ω)^4 , c_4^g̅_-(ω, q^2) = -3 m_B^3 m_c^3 (m_B+m_c-ω)/π ^2 (m_B-ω)^5(m_B^3 -2 ω m_B^2 - m_B (m_c^2+q^2-ω ^2) + 2 q^2 ω) . *** Finally, the coefficients c_n^ψ(u, ω, q^2) in eq. (<ref>) read respectively c_1^ψ_V (u, ω, q^2) = 1/8 π ^2 (m_B-ω)^3(m_B (m_B+m_c-ω) (m_B (4 u-1) + 4 m_c (1-u) -4 u ω +ω)+4 q^2 (u-1) ω) , c_2^ψ_V (u, ω, q^2) =m_B/8 π ^2 (m_B-ω)^4(m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2)-q^2 (m_B-2 ω) ) ×((m_B+m_c-ω) (m_B (2 u - 1) + 2 m_c (1-u) - (2 u + 1) ω) + 2 q^2 (u-1)) , c_1^ψ_A (u, ω, q^2) = 1/8 π^2 (m_B - ω)^3(m_B (3 m_B m_c - 2 m_B (2 u ω + ω) + (2 u + 1) m_B^2 + 4 (u - 1) m_c^2 - 3 ω m_c + (2 u + 1) ω^2 ) - 4 q^2 (u-1) ω) , c_2^ψ_A (u, ω, q^2) = m_B/8 π ^2 (m_B-ω)^4(m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2)-q^2 (m_B-2 ω)) ×(m_B (3 m_c-8 u ω +2 ω)+(4 u-1) m_B^2 + 2 m_c^2 (u - 1) - 3 ω m_c -2 q^2 (u - 1) + 4 u ω^2 - ω^2 ) , c_1^ψ̅_X_A (u, ω, q^2) = - m_B/8 π^2 (m_B - ω)^3((2 u-1) m_B + 2 m_c-2 u ω +ω) , c_2^ψ̅_X_A (u, ω, q^2) = m_B/8 π ^2 (m_B-ω)^4(q^2 (m_B (ω -2 u ω) + (1-2 u) m_B^2 + 2 ω((2 u-1) ω - 2 m_c )) + m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2) ×((2 u-1) m_B-4 m_c -2 u ω +ω)) , c_3^ψ̅_X_A (u, ω, q^2) = m_B^2 (m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2)-q^2 (m_B-2 ω))/4 π ^2 (m_B-ω)^5 ×((2 ω m_B-m_B^2+m_c^2-ω ^2) (-2 u m_B+m_B+m_c+(2 u-1) ω) -q^2 ((2 u-1) m_B+m_c-2 u ω +ω)) , c_2^ψ̅_Y_A (u, ω, q^2) = 3 m_B^2 m_c/2 π ^2 (m_B-ω)^3(m_B+(2 u-1) m_c-ω) , c_3^ψ̅_Y_A (u, ω, q^2) = 3 m_B^2 m_c (m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2)-q^2 (m_B-2 ω))/2 π ^2 (m_B-ω)^4 ×(m_B+(2 u-1) m_c-ω) , c_1^ψ̅_X̃_A (u, ω, q^2) =m_B (m_B+2 m_c-ω)/8 π ^2 (m_B-ω)^3 , c_2^ψ̅_X̃_A (u, ω, q^2) = m_B/8 π^2 (m_B - ω)^4(q^2 (ω m_B+m_B^2+4 ω m_c-2 ω ^2) + m_B (m_B (-8 ω m_c+m_c^2-3 ω ^2)+m_B^2 (4 m_c+3 ω) - m_B^3 + 4 ω ^2 m_c -ω m_c^2-4 m_c^3+ω ^3)) , c_3^ψ̅_X̃_A (u, ω, q^2) = -m_B^2 (m_B+m_c-ω)/4 π^2 (m_B-ω)^5(2 q^2 (m_B^2 (m_c+3 ω)-3 ω m_B (m_c+ω) - m_B^3 + ω(m_c+ω)^2)+m_B (m_B-m_c-ω)^3 (m_B+m_c-ω) + q^4 (m_B-2 ω) ) , c_2^ψ̅_Ỹ_A (u, ω, q^2) = -3 m_B^2 m_c/2 π ^2 (m_B-ω)^3(m_B+m_c-ω) , c_3^ψ̅_Ỹ_A (u, ω, q^2) = -3 m_B^2 m_c (m_B+m_c-ω)/2 π ^2 (m_B-ω)^4(m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2) -q^2 (m_B-2 ω)) , c_2^ψ̅_W (u, ω, q^2) = 3 m_B^2 m_c/2 π ^2 (m_B-ω)^3( m_B+(2 u-1) m_c-ω) , c_3^ψ̅_W (u, ω, q^2) = 3 m_B^2 m_c/2 π ^2 (m_B-ω)^4( m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2)-q^2 (m_B-2 ω)) ×(m_B+(2 u-1) m_c-ω) , c_2^ψ̅̅̅_W (u, ω, q^2) = 3 (2 u-1) m_B^2 m_c^2/2 π ^2 (m_B-ω)^4 , c_3^ψ̅̅̅_W (u, ω, q^2) = -3 m_B^2 m_c/2 π ^2 (m_B-ω)^5(q^2 (ω m_B-m_B^2+2 (1-2 u) ω m_c) +m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2) (m_B+(2-4 u) m_c-ω)) , 
c_4^ψ̅̅̅_W (u, ω, q^2) = -3 m_B^3 m_c/2 π ^2 (m_B - ω)^6(m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2)-q^2 (m_B - 2 ω)) ×(q^2 (-m_B -2 u m_c + m_c + ω) + (-2 ω m_B + m_B^2 - m_c^2 + ω^2 ) ×(m_B - 2 u m_c + m_c - ω) ) , c_2^ψ̅̅̅_Z (u, ω, q^2) =3 m_B^2 m_c/π ^2 (m_B-ω)^3 , c_3^ψ̅̅̅_Z (u, ω, q^2) = 3 m_B^2 m_c/π ^2 (m_B-ω)^4(m_B ((3-6 u) m_B m_c-2 ω m_B+m_B^2+3 (2 u-1) ω m_c - 4 m_c^2+ω ^2) - q^2 (m_B-2 ω) ) , c_4^ψ̅̅̅_Z (u, ω, q^2) = -9 m_B^3 m_c^2/π ^2 (m_B-ω)^5(m_B (-2 ω m_B+m_B^2-m_c^2+ω ^2)-q^2 (m_B-2 ω)) ×((2 u-1) m_B+m_c-2 u ω +ω) . JHEP
http://arxiv.org/abs/2307.05107v1
20230711083411
Optimizing Feature Extraction for Symbolic Music
[ "Federico Simonetta", "Ana Llorens", "Martín Serrano", "Eduardo García-Portugués", "Álvaro Torrente" ]
cs.SD
[ "cs.SD", "cs.MM", "eess.AS" ]
Optimizing Feature Extraction for Symbolic Music
================================================
This paper presents a comprehensive investigation of existing feature extraction tools for symbolic music and contrasts their performance to determine the set of features that best characterizes the musical style of a given music score. In this regard, we propose a novel feature extraction tool, named musif, and evaluate its efficacy on various repertoires and file formats, including MIDI, MusicXML, and **kern. Musif approximates existing tools such as jSymbolic and music21 in terms of computational efficiency while attempting to enhance the usability for custom feature development. The proposed tool also enhances classification accuracy when combined with other sets of features. We demonstrate the contribution of each set of features and the computational resources they require. Our findings indicate that the optimal tool for feature extraction is a combination of the best features from each tool rather than those of a single one. To facilitate future research in music information retrieval, we release the source code of the tool and benchmarks. § INTRODUCTION Feature extraction is a pivotal task in contemporary machine learning. Music features can be categorized into two main types: symbolic and audio. While audio features have been subject to extensive research, computational techniques for symbolic music remain comparatively underexplored. In recent years, there has been an increasing interest in analyzing symbolic scores in music. This encompasses studies on composer <cit.> and style recognition <cit.>, affective computing <cit.>, music generation <cit.>, analysis of performance <cit.>, and interpretation <cit.>. The symbolic dimension of music concerns the conceptual representation of musical data <cit.>. This level has been used in the field of Music Information Retrieval (MIR), with particularly successful outcomes when employed to support multimodal approaches <cit.>, which integrate both audio and symbolic levels through audio-to-score alignment techniques <cit.>. The symbolic level is also crucial for musicologists, as music scores are the most common source for historical music studies. Musicologists rely on computational tools to extract and analyze musical scores on a large scale <cit.>. However, traditional manual annotations, such as harmony <cit.> and cadence <cit.>, are time-consuming and prone to errors. Therefore, computational tools are essential for efficient and accurate musicological analysis. Presently, two primary tools are available for extracting features from symbolic music: jSymbolic <cit.> and music21 <cit.>. Although both tools are open-source and widely employed, no comprehensive comparison between them has been conducted yet. In this paper, we propose a novel set of features that is specifically, although not exclusively, tailored for the analysis of 18th-century Italian opera. We have developed a tool for extracting these features, named musif, that is being used for the analysis of operatic music in the Didone project[<https://didone.eu>] <cit.>. Here, we conduct a comparative study between musif and other existing tools, thus providing valuable insights into the strengths and weaknesses of each of them. Additionally, we evaluate the efficiency of each tool and demonstrate that musif adds useful features to both music21 and jSymbolic.
We observe that, in most cases, a combination of features from multiple tools yields the most powerful feature set. To validate our findings, we test all three tools on various repertoires. We aim to compare the feature sets on file formats with varying levels of representation abilities, such as MIDI, MusicXML, and **kern. While MIDI is widespread in computational studies, it is relatively simplistic for written music; MusicXML and **kern, instead, are less commonly utilized in MIR but provide more accurate representations when dealing with music scores. The main contributions of this paper are, therefore, threefold. Firstly, we present a new set of features designed for the study of an under-represented repertoire in music computing literature, i.e., 18th-century Italian opera. Secondly, we introduce musif, a new efficient, extensible, and open-source Python tool for feature extraction from symbolic music. Finally, we provide a benchmark of music21, jSymbolic, and musif on a variety of repertoires and file formats. The whole code used for this study, as well as the code used for the proposed tool, is available at <https://github.com/DIDONEproject/music_symbolic_features/>. § FEATURE EXTRACTION TOOLS In this study, we compare three tools for feature extraction from symbolic music: jSymbolic <cit.>, music21 <cit.>, and musif. Other tools such as Humdrum[<https://github.com/humdrum-tools/humdrum-tools>] may be used for feature extraction, but they would require a larger effort for assembling different features from various toolkits and organizing them in a usable tabular format. We will describe each one in detail in the following subsections. §.§ jSymbolic The jSymbolic tool was initially introduced in 2006 <cit.> and subsequently updated in 2018 <cit.>. It is an open-source, Java-based software designed to extract features from both MIDI and MEI files. The latest iteration of jSymbolic is capable of extracting 246 distinct features, some of which are multidimensional and account for a total of 1022 values. However, the actual number of extracted features may vary depending on the user's configuration and the musical composition itself. jSymbolic features relate to pitch statistics, melodic intervals, chords and vertical intervals, rhythm, instrumentation, texture, and dynamics. In addition to these features, jSymbolic is capable of computing certain characteristics that are not readily available in MIDI files. To achieve this, jSymbolic utilizes the MEI file format to determine the number of slurs and grace notes in a given piece. While MEI and other high-informative file formats offer additional features such as pitch names, harmonic analysis, and written dynamic or agogic indications, jSymbolic does not take these into consideration. The jSymbolic software provides users with the flexibility to customize configurations and features, facilitating the integration of previously existing feature values into newer features. Furthermore, users can extract windowed features by specifying window size and overlap in seconds. jSymbolic does not provide pre-built methods for parallel processing of large corpora, thereby requiring the user to implement a suitable strategy. Lastly, jSymbolic provides output options in both CSV and Weka's ARFF format. The software is accessible as a self-contained program featuring a Graphical User Interface (GUI) and a Command Line Interface (CLI), as well as as a Java library. 
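Since jSymbolic leaves the parallel processing of large corpora to the user, one simple strategy is to distribute the per-file extraction over worker processes, as sketched below. The jSymbolic command-line arguments shown are placeholders and should be adapted to the invocation of the installed version; the sketch is not part of our benchmark code.

import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def extract_one(midi_path: Path) -> tuple:
    # Placeholder invocation: adapt to the actual jSymbolic CLI of your installation.
    values = midi_path.parent / (midi_path.stem + "_values.xml")
    definitions = midi_path.parent / (midi_path.stem + "_definitions.xml")
    cmd = ["java", "-jar", "jSymbolic2.jar", str(midi_path), str(values), str(definitions)]
    return midi_path.name, subprocess.run(cmd, capture_output=True).returncode

if __name__ == "__main__":
    files = sorted(Path("corpus").rglob("*.mid"))
    with ProcessPoolExecutor(max_workers=8) as pool:
        for name, code in pool.map(extract_one, files):
            if code != 0:
                print(f"extraction failed: {name}")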
§.§ music21 music21 is a Python toolkit designed for computational music analysis, which was first introduced in 2010 <cit.>. One of its remarkable features is the capability to parse a wide range of file formats, including MIDI, MusicXML, **kern, ABC, and various others. The music information is represented in an object-oriented hierarchical structure that is aimed at facilitating the development of novel tools. After its initial academic publication, music21 was further developed with a set of features presented in 2011 <cit.>. The latest version of music21 includes 69 features introduced by jSymbolic, as well as 20 characteristics computed using the information parsed from high-informative file formats. These characteristics are related to key, cadence, harmony, and lyrics. Regardless of the input file format, music21 consistently outputs 633 features. However, the number of extracted features may vary since some features are zeroed out when they are not computable. music21 is a Python module that lacks a CLI or a GUI. It does not have a configuration format; rather, it offers a broad range of methods for developing custom pipelines for different types of music information processing. These methods encompass the creation of new features and some automated high-level inference of music characteristics, such as key <cit.>, as well as tools for windowed analysis. One disadvantage of music21 is that large music scores may result in deeply nested Python objects with numerous non-picklable attributes attached. This makes the programming process challenging, particularly due to the difficulty of saving these objects to a file. In this study, we have developed a CLI for utilizing music21 feature extraction tools in a manner comparable to musif. This implementation facilitates parallel processing by distributing the extraction of features across numerous files simultaneously. §.§ musif Our software is named musif <cit.>. It is implemented in Python and built upon the music21 library, and offers an Application Programming Interface (API) with no default settings of significance and a CLI with default settings optimized for most common use cases. We leverage music21's internal representation, enabling us to extract features from any file format supported by music21. musif is highly customizable and allows users to add custom features as required. After creating the internal representation of the musical score using music21, we extract multiple features and store them in dataframes. This facilitates exporting results in various formats, making musif easily integrable into diverse pipelines. One limitation of music21 is its restricted ability to serialize complex and large music scores. This restriction also affects the possibility of parallel processing, as Python's single-thread approach necessitates parallelization via processes, which in turn requires context copying and data serialization. Furthermore, parsing large XML files is one of the slowest steps in the feature extraction process. To optimize this procedure, a more favorable strategy would be to store the parsed XML files' logical structure on disk as a cache. We have thus implemented a caching system capable of caching and serializing any music21 object. A restriction to note about the caching system is that the cached scores are read-only. However, this feature enables the writing of parsed scores onto disk and caching of the output from resource-intensive music21 functions into memory. 
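For concreteness, the music21 feature-extraction machinery that musif builds on (and caches) can also be driven directly, roughly as follows; the class and method names follow the music21 documentation and should be checked against the current release, as this is a schematic sketch rather than an excerpt of our code.

from music21 import converter, features

score = converter.parse("score.musicxml")   # the slow parsing step that musif caches

data = features.DataSet(classLabel="Composer")
data.addFeatureExtractors(features.extractorsById(["ql1", "ql2", "ql3"]))
data.addData(score)
data.process()
data.write("features.csv")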
musif can extract harmony-related features by utilizing standardized harmonic analyses annotated in the MuseScore file format <cit.>. Besides, it encompasses a wide range of features, including melodic intervals, harmony, dynamics, tempo, density and texture, lyrics, instrumentation, scoring, and key. Notably, dynamics and tempo are determined by the composer's text notation rather than by MIDI parameters. Furthermore, our implementation includes all features provided by music21 with the exception of 14 features that utilized the caching system in writing mode. The number of extracted features depends on the complexity of the score and is influenced by both the number of parts and musif's compatibility with the encoding. NaN values are used to represent non-computable features in a score. For example, when processing datasets with varying instrumentations, some features may not be available for all scores. These values can be replaced with a default value (e.g., 0) or removed from the corpus by deleting either the score or the related feature. In the CLI, we have implemented a heuristic to determine whether a score should be removed from the extracted corpus if it contains too many NaNs. Specifically, we define r as the ratio between the number of columns without NaN and the total number of rows in the output table. If r<0.1, we compute n_i, which is the number of NaNs in the ith row. We remove rows with n_i greater than 1/0.99 q_0.99, where q_0.99 is the 99% quantile of {n_1,n_2,…}, indicating that 99% of rows are not deleted. The factor 1/0.99 can be better understood as dividing the Q_0.99 by 99, thus obtaining an estimate of Q_0.01, and multiplying it by 100, thus obtaining the expected value of Q_1.00 based on the first 99% of the data. Put differently, it computes the maximum n_i that we expect if the remaining 1% of rows has a number of NaN “similar” to the previous 99%. Larger values are thus considered outliers. This method was empirically tested on the corpuses used in this work (see Section <ref>), revealing that only a few scores were generally removed while most lines of the output table were retained. In case a score is not deleted, the CLI removes from the tale the features that are NaN in that score. musif also incorporates a post-processing module that facilitates the removal, merging, or substitution of values in specific columns or groups of columns within the extracted data. This functionality proves especially advantageous when dealing with large tables generated by musif from a substantial set of scores, as it minimizes the computational effort required for processing such tables. Like the other tools, we have implemented the capability to extract features at a window level. However, unlike jSymbolic, in our implementation, the window length is specified in musically relevant units such as score measures rather than seconds. This provides more pertinent information for processing music scores. In contrast to other tools, our solution provides an out-of-the-box capability for processing large corpora through parallel processing, resulting in a reduction of the required time. The design principles and the features included in musif were presented in a previous publication <cit.>. The code and documentation of musif is available online[<https://github.com/DIDONEproject/musif>, <https://musif.didone.eu>]. § BENCHMARKING METHODOLOGY To assess the performance of musif in comparison to other tools, we devised a benchmarking methodology. 
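Before describing the benchmark, we note that the NaN row-removal rule above amounts to a few lines of pandas; the snippet below is a self-contained re-implementation of the rule as stated, not an excerpt of the musif source.

import pandas as pd

def drop_nan_outlier_rows(table: pd.DataFrame, q: float = 0.99) -> pd.DataFrame:
    # r: number of columns without any NaN over the total number of rows
    r = int(table.notna().all(axis=0).sum()) / len(table)
    if r >= 0.1:
        return table
    n = table.isna().sum(axis=1)        # n_i: number of NaNs in the i-th row (score)
    cutoff = (1.0 / q) * n.quantile(q)  # (1/0.99) q_0.99
    return table[n <= cutoff]           # rows with n_i above the cutoff are removed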
Initially, we identified several datasets that enable testing of diverse file formats. Subsequently, we developed a standardized protocol based on an AutoML pipeline <cit.>. We evaluated the computational resources utilized by each tool during extraction and their respective efficacy in various classification tasks. §.§ Datasets We selected five datasets to evaluate the performance of the tools in analyzing both Standard MIDI Files (SMFs) and highly informative music score formats. For MIDI analysis, we aimed to test both music scores and performances. As for highly informative file formats for music scores, we chose MusicXML and **kern due to their popularity, availability of large datasets, various conversion tools, and compatibility with common music score editing software such as Finale, Sibelius, and MuseScore. While MEI was considered as an option, the limited availability of datasets in this format led us to leave it for future studies. In this study, we considered the following datasets: * ASAP <cit.>: This dataset contains music performances derived from the Maestro dataset <cit.> and is synchronized with a corresponding score obtained from the MuseScore's crowd-sourced online library. The dataset comprises 222 music scores in MusicXML and MIDI formats, as well as 1068 music performances in MIDI format. The authors have rectified any significant notation errors found in the music scores. We used this dataset for composer recognition based on music scores and music performances. * EWLD <cit.>: It contains lead sheets obtained from Wikifonia, a crowd-sourced archive. To reduce errors in music score transcription by inexperienced users, the authors applied algorithmic selection criteria to the dataset. Specifically, they retained only scores with simple notation, without modulations and with a single melodic part. Moreover, all scores contained key signatures and chords throughout. The dataset was augmented by incorporating genre and composer details, as well as the year of first performance, composer birth and death dates, precise title, and additional metadata. This was achieved by cross-referencing the dataset with information sourced from <secondhandsong.com> and <discogs.com>. We used this dataset for genre recognition. * Josquin-La Rue <cit.>: This dataset was created within the context of the Josquin Research Project and includes 59 Josquin duos and 49 duos by La Rue. The musical scores underwent a meticulous musicological transcription process. Moreover, the music scores were assigned to two labels based on the security of the attribution, thus resulting in four labels (Josquin secure, La Rue secure, Josquin not secure, La Rue not secure). The musical scores are provided in various file formats including MIDI, MusicXML, **kern, Sibelius, and PDF. We used this dataset for composer classification in a real-world attribution problem. * Quartets <cit.>: We retrieved a selection of files from the <kern.humdrum.org> website, consisting of all available string quartets in **kern format by Mozart, Haydn, and Beethoven. While the original sources of these musical scores are not always declared, the encoding quality is generally considered to be at a musicological level. In total, we obtained 363 files. We used this dataset for composer classification. * Didone <cit.>: With the aim of filling an under-studied repertoire, we curated, analyzed, and transcribed over 1600 arias from 18th-century opera, written by dozens of composers. 
The music scores were transcribed into MusicXML format using Finale Music software and revised by three musicologists independently. Harmonic analyses were added by expert musicologists using MuseScore software in accordance with a prior standard <cit.> and were reviewed automatically using the ms3 tool<cit.>. We also included various metadata in the database such as year and place of premiere, composer, and high-level formal analysis. This database is an ongoing project and will be made freely available in 2024. We utilized this dataset for classifying the period of composition of each piece, each period being defined in decades (i.e., 1720s, 1730s, 1740s, etc.). §.§ Experimental setup After selecting the datasets, a standardized protocol was developed for benchmarking the three aforementioned tools. The protocol is based on an AutoML pipeline <cit.> and comprises the following steps: * Conversion to MIDI: The datasets were selected and subsequently converted into MIDI format, resulting in two or three file formats for each dataset: MIDI and either MusicXML or **kern. This step aims to evaluate the impact of notational file formats, such as MusicXML or **kern, on classification tasks. Indeed, although MIDI has limited capacity for representing notational aspects of music, it remains uncertain the extent to which these aspects can determine the accuracy of machine-learning algorithms for music symbolic analysis. MusicXML files were converted using MuseScore 3, and **kern files were processed with the Humdrum toolkit[See footnote <ref>.]. * Feature extraction: Features were extracted from MIDI, MusicXML, and **kern files using the methods detailed in Section <ref> with default settings and without the use of windows, resulting in one array of features for each file. The purpose of this step was to measure the computational cost of the tools. Therefore, all available files in the datasets were used to obtain a larger number of samples and a more accurate estimation of the computational cost, even if they were discarded in later steps. For instance, MIDI scores were already provided in the ASAP dataset; however, we additionally converted them from the MusicXML files. As a result, we extracted features from more files than necessary. We created a CLI tool in Python for music21 while we utilized the official CLI tools for jSymbolic and musif. Each file format was processed individually, resulting in CSV files for each format. We calculated the average time and RAM usage of each tool. Furthermore, CPU time was collected as a measure of the required time without parallel processing. Lastly, we documented the number of files for which each tool produced errors. * AutoML: A state-of-the-art machine learning approach was employed using the Python module  <cit.>. The method utilizes Bayesian optimization with surrogate models based on random forests and generates ensembles of models by exploring a vast array of possible architectures. 10-fold cross-validation was used, and the balanced accuracy averaged across the test folds was observed. The best-performing model's result was used for comparison. To initiate the AutoML process, a list of valid files for each dataset was initially defined, discarding those processed in the previous step but unsuitable for validating the classification task. Subsequently, files were selected for which all tools succeeded in extraction, creating comparable datasets for validation. 
Finally, classes with a number of samples less than twice the number of cross-validation splits were eliminated from each dataset. Consequently, the number of files and categories used in our study differs from the numbers officially provided by each dataset. The classification task performed depended on the dataset, as shown in Table <ref>. We conducted two primary experiments: one utilizing all of the extracted features and another using only the first ten principal components. To achieve this, we standardized the features and applied PCA to obtain the ten first principal components. The rationale for the latter experiment is that a larger feature space typically requires a longer AutoML optimization process and affects the performance of the trained classifiers. As the tools extract varying numbers of features, this experiment enables a principled comparison of the usefulness of the non-redundant information generated by the different tools by homogenizing the number of variables in the AutoML process. In other words, it helps decouple the AutoML optimization capabilities from the number of features. Due to the overlap between the features extracted with musif and those with music21 with jSymbolic, we also analyzed the concatenation of music21, jSymbolic, and our features. We also observed the performance of musif and music21 when only the native features were used, i.e. when musif was utilized without music21 features and when music21 was run without jSymbolic features. In the following, we denote these feature sets as “native”. We run each feature extraction and AutoML experiment on a Linux machine with 32 GB of RAM and an i7-8700 CPU, ending the AutoML procedure after 30 minutes. We also experimented with longer AutoML processes and more powerful machines for the first 5 columns of tables <ref> and <ref>, but we noticed no significant change in accuracy. § RESULTS Table <ref> summarizes the comparative computational efficiency of the three tools. It is observed that jSymbolic outperforms the other tools when no parallel processing is employed. This can be attributed to the superior performance of Java language, which facilitates faster I/O operations and parsing of byte-level structures such as MIDI files. musif's caching system significantly reduces the time required for feature extraction during multiple runs, such as those performed during the development and debugging of newly added features. For MIDI files, the extraction process can be accelerated by a factor of five. When comparing the time needed for extraction, jSymbolic is still faster than musif. However, our caching system is advantageous when a cache is available. Regarding MusicXML and **kern files, musif and music21 use the same parser engine, making their time values more comparable. In this case, music21 is slightly faster than musif but also attempts to extract a smaller number of features. Nevertheless, musif's the caching system allows for a 50% reduction in extraction times. The music21 tool proves to be the optimal choice when taking into account RAM utilization. Table <ref> presents the dataset sizes used in our experiments, which are obtained through the protocol detailed in Section <ref>. The sample sizes vary from 109 to 3197, while the number of classes ranges from 3 to 11, depending on the dataset. The music21 feature extraction process produces a fixed set of 602 native features, supplemented by an additional 31 features re-implemented from the jSymbolic feature set. 
In contrast, jSymbolic consistently extracts a set of 225 features with minor variations. musif extracts a variable number of features depending on its ability to parse different music structures, ranging from 91 to 974 extracted features. The remaining features extracted by musif are computed using the music21 feature extraction methods. It is worth noting that music21 always converts non-computable features to zero, whereas musif allows users to assign different values or perform other operations. Tables <ref> and <ref> demonstrate the effectiveness of feature sets in representing significant aspects of music analysis across various repertoires. The results in Table <ref> must be interpreted with caution due to the longer AutoML process required by accurate models when using a higher number of features. Overall, music21 and jSymbolic are effective tools for extracting features from MIDI files, while musif shows promising results for MusicXML files, particularly when utilizing the first ten principal components during validation. This difference in performance can be attributed to the presence of highly correlated features in musif, a consequence of its granularity. We also evaluated combinations of feature sets and found that optimal performance is achieved by employing multiple tools. For MIDI files, jSymbolic is fundamental in achieving model accuracy, but incorporating musif and music21 generally enhances performance. For MusicXML and **kern files, leveraging both musif and music21 yields optimal results, especially when considering the first ten principal components. When comparing the efficacy of models trained on MusicXML, **kern, and MIDI files, no discernible pattern emerges indicating the superiority of highly informative file formats over SMFs for representing music scores. In fact, the only instances where the MusicXML files exhibit superior performance are in the Josquin-La Rue dataset and genre recognition on the EWLD dataset when all features are utilized. However, for all the remaining tasks, MIDI files demonstrate superior performance. This is likely due to the fact that jSymbolic can only extract features from MIDI files and is simultaneously the most important source of features for music score analysis. Consequently, in this study, the MusicXML and **kern datasets lack some relevant features that can be extracted only when converted to MIDI. Even when comparing only the proposed tool and music21's performances, MusicXML and **kern files do not show a clear advantage over MIDI files, particularly when considering the combination of both tools. It should be noted that jSymbolic can extract features from MEI as well, thus potentially allowing for better performances. The effect of missing values on tool performance is a significant concern and may be a contributing factor to the comparatively lower results for MusicXML and **kern files. While music21 substitutes all missing values with 0, musif utilizes a hybrid strategy that entails either removing a row or column from the table (refer to Section <ref>). The most effective method for handling missing values remains an open issue. We assessed the impact of harmonic features on the Didone dataset using musif. Unfortunately, due to the time-consuming nature of harmonic annotations, we were unable to evaluate these features on the other datasets used in this study. 
We annotated our dataset of more than 1600 opera arias using the standard established in previous works (see Section <ref>) and extracted melody- and accompaniment-related features with respect to the local key. The extraction of harmonic features resulted in 22 additional features beyond the 126 listed in Table <ref> for MIDI files. For MusicXML files, we extracted 265 additional features, raising the total number of extracted features to 617. We observed an overall improvement in classification accuracy when incorporating harmonic features, as demonstrated in Table <ref>. The only instance where performance was degraded by the inclusion of harmonic features was for MIDI files when all the available features were considered (without PCA). We interpret this degradation as an indication that longer processing times are necessary for AutoML when additional, possibly highly correlated features are introduced. § CONCLUSION This paper presents a comprehensive analysis of tools for extracting features from symbolic music. A strict protocol was defined to compare the tools in terms of efficiency and efficacy across various repertoires and file formats. The results indicate that using multiple tools is the most effective approach, with the optimal tool choice depending on the file format and repertoire. The study emphasizes the importance of using file formats that are accessible by multiple tools. However, it remains open whether highly informative file formats such as MusicXML, **kern, or MEI are relevant for the automatic classification of symbolic scores. The available set of features indicates that, while these formats remain fundamental for certain types of musicological research, they do not seem to entail a significant advantage for machine learning tasks. The problem of NaN values in extracted features from music scores remains unresolved. Further research is required to explore optimal approaches for replacing, removing, or inferring missing values in music applications. Additionally, the new musif tool was proposed, which can process various file formats using the music21 parsing engine. The tool also includes a caching mechanism to speed up feature development. Moreover, motivated by the experiments presented in this work, we included the whole music21 and jSymbolic tools in the newer versions of musif, easing the extraction of the combined feature sets from large corpora. § ACKNOWLEDGEMENTS This publication is a result of the Didone Project, which has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program, Grant agreement No. 788986. It has also been conducted with funding from Spain’s Ministry of Science and Innovation (IJC2020-043969-I/AEI/10.13039/501100011033). Part of the computational experiments were run at the FinisTerrae III cluster of the Galician Supercomputing Center (CESGA). The authors gratefully acknowledge the access to these resources.
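For reference, the validation pipeline of Section <ref> — feature standardisation, optional projection onto the first ten principal components, and a 30-minute auto-sklearn search scored by balanced accuracy over 10-fold cross-validation — can be sketched as follows. This is an illustrative reconstruction rather than the released benchmark code, which is available in the linked repository.

import autosklearn.classification
import autosklearn.metrics
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def run_automl(X, y, use_pca=True, seconds=1800):
    X = StandardScaler().fit_transform(X)
    if use_pca:
        X = PCA(n_components=10).fit_transform(X)   # first ten principal components
    automl = autosklearn.classification.AutoSklearnClassifier(
        time_left_for_this_task=seconds,            # 30-minute budget
        resampling_strategy="cv",
        resampling_strategy_arguments={"folds": 10},
        metric=autosklearn.metrics.balanced_accuracy,
    )
    automl.fit(X, y)
    return automl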
http://arxiv.org/abs/2307.04260v1
20230709202754
Cluster tomography in percolation
[ "Helen S. Ansell", "Samuel J. Frank", "István A. Kovács" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "cond-mat.stat-mech" ]
http://arxiv.org/abs/2307.04303v1
20230710014413
Learning to Generate Equitable Text in Dialogue from Biased Training Data
[ "Anthony Sicilia", "Malihe Alikhani" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Learning to Generate Equitable Text in Dialogue from Biased Training Data
=========================================================================
The ingrained principles of fairness in a dialogue system's decision-making process and generated responses are crucial for user engagement, satisfaction, and task achievement. Absence of equitable and inclusive principles can hinder the formation of common ground, which in turn negatively impacts the overall performance of the system. For example, misusing pronouns in a user interaction may cause ambiguity about the intended subject. Yet, there is no comprehensive study of equitable text generation in dialogue. Aptly, in this work, we use theories of computational learning to study this problem. We provide formal definitions of equity in text generation, and further, prove formal connections between learning human-likeness and learning equity: algorithms for improving equity ultimately reduce to algorithms for improving human-likeness (on augmented data). With this insight, we also formulate reasonable conditions under which text generation algorithms can learn to generate equitable text without any modifications to the biased training data on which they learn. To exemplify our theory in practice, we look at a group of algorithms for the GuessWhat?! visual dialogue game and, using this example, test our theory empirically. Our theory accurately predicts relative-performance of multiple algorithms in generating equitable text as measured by both human and automated evaluation. § INTRODUCTION Machine learning models for text generation in dialogue have trouble learning the “long tail” of a data distribution; i.e., the data concepts not frequently observed during training. For example, dataset biases like gender imbalance can induce a long tail in training data whereby important data relationships involving gender are underrepresented, like women in sports <cit.>. When training, generative models often fail to learn these concepts in the long tail, and ultimately, learn inequitable, stereotyping behaviors instead (see Figure <ref>). These non-inclusive behaviors not only decrease user-satisfaction by isolating users <cit.>, but also impede common ground, hindering the task-success of the dialogue system. Despite the multi-faceted impact of inequitable text generation in dialogue, we do not have a comprehensive and theoretically grounded framework for understanding how machines learn to generate inequitable text and when this outcome can be avoided. To provide a strong technical foundation for equitable generation in dialogue, we build on theories of computational learning <cit.>. Specifically, our theoretical contributions are as follows: * We define precise constraints that encapsulate diverse notions of equity in dialogue (Def. <ref>). * We rigorously compare our proposals to traditional notions of equity in classification ( <ref>). * We show computational learning theory models equitable learning well: algorithms from learning theory are easily adapted to learn equitable dialogue by augmenting data (Thm. <ref>). * We prove algorithms based on learning theory can even learn to generate equitable text from some types of biased training data (Thm. <ref>). Loosely, Thm. <ref> is based on the idea that, when provided sufficient background, human text is not biased because it is typically context-aware (Def. <ref>).
For example, when the subject is a female scientist, a human will likely not use male pronouns in subject-referring conversation because humans tend to correctly employ dialogue context to inform their language use. Instead, in many real-world datasets, bias is an aggregate property, arising from inequality of the proportions of protected attributes such as race or gender; e.g., more conversations about male than female doctors. The theoretical understanding we contribute is imperative because it informs algorithm design. In particular, using our theory, we can predict: * the most equitable algorithms for unseen data; * counter-intuitive properties of algorithms that lead to less equitable results. For example, consider algorithms which naïvely augment data to remove bias <cit.>. Through theoretical study, we identify cases where this practice can actually hurt an algorithm's chances at learning to be equitable. In fact, our experiments in  <ref> confirm this. The remainder of the paper is organized as follows:  <ref> provides background to position our contributions including discussion of related work, a brief tutorial on the employed learning theoretic framework, and a few running examples used throughout the text;  <ref> provides our theoretical contributions including formulation of mathematical notions of equity in text generation and theoretical analysis of learning algorithms;  <ref> conducts experiments which validate our theory in practice; and finally,  <ref> concludes the work. Code, data, and a python package will be made publicly available to promote further research.[https://github.com/anthonysicilia/equitable-dialogue-ACL2023 https://github.com/anthonysicilia/equitable-dialogue-ACL2023] § BACKGROUND AND RELATED WORK §.§ Learning Theory for Dialogue Recent proposals for the use of learning theory in dialogue are due to <cit.> who propose LEATHER.[LEArning THeory for Text-GenERation] Specifically, LEATHER is a formal framework for studying the diverse objectives present when learning to generate text. Ultimately, their proposal is grounded in a general evaluation metric – the test divergence. Intuitively, test divergence mimics practical evaluation, in which we conduct tests to evaluate the generated dialogue: 𝐓𝐃_𝔾(θ) = 𝐄[| h(D, U) - h(D̂, U) |] where (C, D) ∼𝔾, D̂∼ℙ_θ(C), U ∼𝕌. Of course, there are a number of undefined terms here: specifically, the test h, the context C, the goal dialogue D, the learned dialogue D̂, and the unobserved effects U. Below, we explain each, using examples from Figure <ref> to assist our exposition. Goal Distribution The goal distribution 𝔾 is a joint probability distribution over dialogue contexts c ∈𝒞 and dialogues d ∈𝒟. For <cit.>, the goal is to generate human-like text. So, as in the visual dialogue example in Figure <ref>, the context might be an image/goal-object and the goal dialogue might be sampled from a (human) corpus of QA pairs with this context. Learned Dialogue Distribution The learned dialogue distribution is the probability kernel ℙ_θ(C) that provides a distribution over dialogues, conditional to the parameters θ learned by the machine (e.g., neural parameters) as well as the random dialogue context C. The precise manner in which dialogue occurs will vary from system to system, but typically involves a machine generating/prompting responses to/from human users as in Figure <ref>. This interaction implicitly defines the random process through which a set of parameters θ and a random context C produce a predicted dialogue D̂.
Importantly, the learning machine may not control every aspect of the process – e.g., the human responses. Aptly, we encapsulate this unknown randomness by the distribution ℙ_θ(C). In some cases, we will consider the joint distribution of both (goal) contexts and learned dialogues; i.e., of the random tuple (C, D̂). We write 𝔾̂_θ for this joint distribution. Test Function with Unknown Effects The final component is the test function (or simply test) h. The test takes as its primary input a dialogue and returns a value in the interval [0,1]. Conceptually, a test can represent any evaluation process in which we are interested. For example, some tests commonly employed in practice include n-gram overlap metrics such as BLEU <cit.>, sentiment scores from a pre-trained classifier, or even a score attained through human evaluation. The unknown effect U ∼𝕌 represents any additional information needed to completely determine the outcome of the test. When the test is BLEU, U simply takes the form of a reference dialogue to which the input dialogue is compared. For human evaluation, U encapsulates all of the unknown variables that contribute to the randomness of a real-world experiment. Often, U may not be needed. Interpretation With terms defined, it is easy to see the test divergence is a direct comparison of the output of the test from the goal dialogue D to the predicted dialogue D̂, learned by our dialogue system. Larger test divergence indicates the learned dialogue fails to replicate the goal dialogue along the dimensions targeted by the test. For example, if the goal is human-likeness in the visual dialogue example from Figure <ref>, a test might target question strategies <cit.>. Small test divergence in these cases indicates the learned dialogue uses similar strategies as the (human) goal. §.§ Related Works on Equity In natural language, popular, early studies of equity begin with avoiding stereotyping in learned model representations <cit.>. This approach has continued to inspire many de-biasing techniques for learned representations <cit.> and evaluation techniques for the equity of representations <cit.>. De-biasing and evaluation techniques for model representations have also been adapted for text-generation tasks <cit.>. Still, these model-intrinsic approaches to resolving inequity have proven subpar compared to model-extrinsic approaches, which focus directly on the downstream task <cit.>. For this reason, our approach tackles the problem of equitable dialogue generation from an extrinsic point-of-view. Previously, in text-generation, extrinsic points-of-view have typically used change in scoring functions (e.g., for sentiment, gender-polarity, etc.) to measure equity <cit.>. Our work is in line with these, but provides formal theoretical study, and further, focuses more specifically on dialogue. Formal theoretical study is vital to understanding equity, because imprecision in problem assumptions and objectives has already proven to be a pitfall in existing works on equity <cit.>. For example, in classification, detailed theoretical study reveals a complex relationship of trade-offs between accuracy and (some) notions of equity <cit.>, contributing to algorithmic advances <cit.>. Our work continues this trajectory, offering valuable practical insights, which are sometimes unintuitive, to achieve equity in machine dialogue. Finally, it is worthwhile to note that <cit.> also contribute a formal, theoretical definition of fairness in dialogue. 
Our work contributes a more general definition of equity – i.e., which supports arbitrary types of dialogue context and more general types of dataset bias. As noted, we also make connections with learning theory to provide key insights on algorithm and dataset design. Indeed, ours is the first work to study bias in text generation using these insightful techniques from computational learning theory. § FORMALIZING EQUITY IN DIALOGUE §.§ Formal Definitions for Equity In this part, we introduce some formal, mathematical notions of equity. We start with a general notion of equity in dialogue and show how this can be specialized to compare with ideas of equity in the classification literature. For proofs, see Appendix <ref>. Protected Attributes To begin, we need to first define the notion of a protected attribute. Conceptually, this is the sensitive variable (e.g., race, gender, religion, etc.) that we intend to “protect” by the equity constraint. Otherwise, presumably, system inequities would disproportionately, negatively impact the sub-population captured by the attribute. Throughout this work, we use a variable a ∈𝒜 = {0,1} to denote the protected attribute and we measure equity of the text with respect to this variable. Precisely, a=1 implies the dialogue context exhibits the attribute (e.g., female gender, Black race, Muslim religion), while a=0 implies the context does not exhibit the protected attribute. For example, in the educational dialogue from Figure <ref>, the context is a discussion topic and the protected attribute is female gender. Since the topic is a female scientist, it exhibits the protected attribute and we would have a=1. If the topic was “Science” more generally, it would not exhibit the protected attribute and it would be appropriate to set a=0. In general, we expect the protected attribute to vary randomly with the dialogue context C. To model this in a general way, we assume the attribute is sampled from a probability distribution which is dependent on the random context: A ∼𝔸(C). For example, in the visual dialogue from Figure <ref>, the protected attribute A is female gender, which is non-deterministically dependent on the visual features of the image C. In other cases, like the educational example, the protected attribute may be completely determined by context. 𝔸 can model this as well – e.g., as a point mass. Equity as Score Parity Commonly, equity in machine learning systems is formally defined through a notion of parity <cit.>. In dialogue, we can express parity as the following requirement: The system uses language in the same way, regardless of protected attribute. This intuitive notion of equity is vague in its use of “way” to be general, allowing for specification to different applications. For example, <cit.> both consider the toxicity and sentiment of language as the pertinent “way” in which language is used, when measuring equity. A classifier is used to estimate the toxicity or sentiment of the used language, and equity occurs if this classifier's outputs are invariant of the protected attribute. For example, if the protected attribute is Muslim religion, the dialogue should be no more “toxic” when its context is specific to Muslims, than when its context is not specific to Muslims. Below, we formalize this intuition for equity with a mathematical constraint. (Score Parity) A contextualized dialogue distribution[Frequently, we use contextualized dialogue distribution to refer to any joint distribution over contexts and dialogues.] 
𝔾 with (C,D) ∼𝔾 and A ∼𝔸(C) satisfies score parity if 𝐄[s(D, 0) | A = 0] = 𝐄[s(D, 1) | A = 1] where s is a scoring function s : 𝒟×𝒜→ [0,1]. To arrive at our motivating example <cit.>, one simply chooses the scoring function s to be a toxicity classifier or a sentiment classifier. The expected output of this classifier should be the same, regardless of the protected attribute's setting. In general, if equality does not hold in the above definition of parity, we follow <cit.> using Δ to denote the gap across attributes: Δ(𝔾) = |𝐄[s(D,0)| A=0] - 𝐄[s(D,1)| A=1] |. This lets us talk about degrees of inequity, and therefore, measure progress towards our ideals. Multi-Category Score Parity Notice, we use the presence/absence of singular demographic groups (e.g., female v. not female) instead of binary comparisons (e.g., female v. male) in defining the protected attribute. This choice allows our definition of equity (above) and later theory to support study of general multi-category attributes with more than two attributes like race (e.g., Black, White, Asian) or religion (e.g., Muslim, Jewish, Catholic). Using race as an example, we can measure the parity gap when Black is the protected attribute, White is the protected attribute, Asian is the protected attribute, etc. The dataset is then equitable for all races (according to score parity) if all measured parity gaps are 0. In this way, our definition and subsequent results can generalize to the multi-category case. We use this strategy, for example, in Section <ref>. Comparison to Demographic Parity In classification, demographic parity is a commonly studied notion of equity <cit.>, which stipulates that a classifier's outputs should be independent of the protected attribute. For a classifier c, mapping random features X to a {0,1}-valued label, this can be written: 𝐄[c(X) | A = 0] = 𝐄[c(X) | A = 1]. For score parity, when s(·, 0) = s(·, 1), the scoring function s does not depend on the attribute and we see that score parity is a direct reflection of demographic parity. Whereas classification problems use machine learning to select the classifier c in a fair way, dialogue uses machine learning to select the feature distribution X (i.e., D in our definition). Comparison to Accuracy Parity Depending on the application, it is known that demographic parity can also be an inappropriate constraint; e.g., if the classifier c is meant to predict the protected attribute itself <cit.>. This precise situation is inherent to dialogue, since some aspects of language are compulsorily predictive of the protected attribute (e.g., gendered pronouns or religious terminology). Fundamentally, there is a trade off between the accuracy of the language used and the desired invariance. In these cases, <cit.> suggest accuracy parity as an alternative, which requires equal error rates, regardless of protected attribute. For Y the true label to X and c as in Eq. (<ref>), this can be written: 𝐏𝐫(c(X)≠ Y | A = 0) = 𝐏𝐫(c(X)≠ Y | A = 1). By our definition, score parity can be used to reflect this distinct notion from classification as well. Conceptually, we select our scoring function to measure the correctness of the dialogue. Then, just like accuracy parity, score parity enforces equal error rates, regardless of protected attribute. While details may vary based on application, we consider selecting the scoring function in the examples from Figure <ref>. 
We first define an identifier function v : 𝒟→{0,1} which indicates whether a dialogue d ∈𝒟 verbalizes the protected attribute. For example, we can imagine v scans for female gendered words {she, her, girl, ...}. Then, our system makes an “error” if it fails to verbalize the protected attribute or inappropriately verbalizes the attribute. So, we select the scoring function to reflect this: s(D, A) = | A - v(D) |. With the choice of scoring function above, score parity reflects the intuition of accuracy parity by requiring that the correctness of the language use (in referring to a protected attribute) is independent of the protected attribute. As alluded, this constraint can be especially useful in case spurious correlations (i.e., stereotypes) between protected attributes and context cause different error rates with/without a protected attribute. This is the case in our toy examples (Figure <ref>) as well as some real-world generation tasks <cit.>. Takeaways The formalization of equity we introduce – score parity – is both general and useful. It models existing ideas for empirical evaluation of equity in text-generation <cit.> and can also be used to model disparate notions of equity from existing classification theories <cit.>. Ultimately, the choice of the scoring function s determines the “way” in which the language should be invariant to the protected attribute, and subsequently, dictates the motivating goals of the equity constraint. §.§ Evaluating Equity with Learning Theory Next, we show how learning to generate equitable text can be modeled with learning theory. Test Divergence (Reprise) To evaluate equity with , the objective in Eq. (<ref>) remains largely unchanged. Primarily, we explicitly incorporate the protected attribute:[Equivalently, one can group A with the unknown effects and keep Eq. (<ref>). The rewrite only makes assumptions explicit.] 𝐓𝐃_𝔾(θ) = 𝐄[| h(D, A, U) - h(D̂, A, U) |] where (C, D) ∼𝔾, D̂∼ℙ_θ(C), A ∼𝔸(C), U ∼𝕌. Importantly, we must consider the deviations from <cit.> not present in Eq. (<ref>): (1) the choice of goal distribution 𝔾 and (2) the choice of test h. Originally, focus on evaluation of human-like dialogue, and therefore, propose the goal to be defined by any collected corpus of contextualized human dialogues. Instead, we are interested in the equity of the contextualized dialogue and cannot blindly use human dialogue as an example; i.e., we cannot take for granted that the contextualized human dialogue is equitable. Thus, to appropriately evaluate equity, we generally assume the following constraints on the goal distribution and test. Equitable Goals and Tests (Balanced) A contextualized dialogue distribution 𝔾 is balanced if it assigns equal (marginal) likelihood to the protected attribute: 𝐏𝐫(A = 1) = 𝐏𝐫(A = 0); (C,·) ∼𝔾, A ∼𝔸(C). (Equitable Goal) We say a contextualized dialogue distribution 𝔾 with (C,D) ∼𝔾 is an equitable goal distribution if it is balanced and satisfies score parity (for some fixed score s). So, intuitively, we propose the goal in equitable dialogue is a contextualized dialogue distribution which is itself equitable, according to our formal definition of this property – i.e., score parity. Furthermore, it should be balanced to prioritize the protected attribute equally during evaluation. As we'll see later, choosing the test h to be the scoring function s from our previous definition allows us to use 𝐓𝐃 (with an equitable goal) to control the parity gap of our learned dialogue. Biased Data While the formal definition above (Def. 
<ref>) is about equity, it should also be noted that we implicitly arrive at a formal definition for bias: the absence of equity. In particular, a contextualized dialogue distribution (dataset) is biased if it is not equitable. Note, this also distinguishes biased data from other common concepts like noisy data because we use an expectation to quantify parity; i.e., which is immune to non-systemic noise. Small Test Divergence Implies Equity Consider an equitable goal 𝔾 and let h ≡ s (the scoring function). Then, Δ(𝔾̂_θ) ≤ϵ whenever 𝐓𝐃_𝔾(θ) ≤ϵ / 2. Simply, the above result indicates minimization of 𝐓𝐃 with an equitable goal and appropriate test leads to an equitable learned dialogue distribution. Takeaways An important consequence of Thm. <ref> is the ability to confidently use algorithms designed in the framework (i.e., to reduce test divergence) for equitable dialogue learning. While these algorithms may have originally been designed to learn human-like dialogue, they can easily be modified to learn equitable dialogue. In particular, we need only change the goal from any human dialogue distribution to any equitable dialogue distribution – as in Def. <ref>. Portability of algorithms in the sense described means, ultimately, a unified theory for dialogue generation. For any algorithm we propose, we may conduct a singular theoretical analysis of test divergence that can serve multiple purposes – both human-like and equitable dialogue generation. In other words: -based algorithms for human-likeness can be used to learn equitable text by simply augmenting training data. Some standard examples of how to create the new equitable goal 𝔾 include augmenting data in the dataset to achieve equitable constraints <cit.>. The takeaway from our theorem above agrees with existing empirical study: we can typically expect these strategies to be effective. Still, as we see next, there are other effective alternatives (under the right assumptions). §.§ Learning to be Equitable and Human-like Next, we study the circumstances under which the goals of human-like dialogue learning and equitable dialogue learning align. That is, we study circumstances under which an algorithm designed to minimize 𝐓𝐃 can learn from (biased) human-like goal data and simultaneously learn to be equitable. Context and Its Role (Assumptions) (Context-Awareness) Consider an equitable goal distribution 𝔾. A contextualized dialogue distribution ℍ≠𝔾 is context-aware if [We use the shorthand 𝐏𝐫(C| D) = 𝐏𝐫(C̃|D̃) to mean: 𝐏𝐫(C = c| D = d) = 𝐏𝐫(C̃ = c|D̃ = d) ∀ (c,d) ∈𝒞×𝒟.] 𝐏𝐫(D | C) = 𝐏𝐫(D̃|C̃); (C̃,D̃) ∼ℍ, Ã∼𝔸(C̃). (Context-Preservation) The distribution ℍ preserves context if 𝐏𝐫(C | A) = 𝐏𝐫(C̃|Ã); (C̃,D̃) ∼ℍ, Ã∼𝔸(C̃). The definitions are based on the idea of label-shift used to study data-shift at test time <cit.>. In this paper, we think of ℍ as the possibly inequitable distribution of human contextualized dialogues (determined by some corpus). So, these definitions can be viewed as assumptions of how inequity presents itself in human data. Context-awareness assumes that humans are not biased provided the background context C. Conceptually, this is reasonable, since humans use context to form inferences about attributes of other human subjects (even protected attributes). If background is sufficient, human inferences will often be correct inferences and the dialogue should be equitable with respect to accuracy parity, at least.[Perfectly correct dialogue satisfies accuracy parity because it satisfies s ≡ 0 in Eq. (<ref>), regardless of A.] 
Instead, bias in the considered corpus must arise from aggregate disproportions of attributes (see  <ref>). Context-preservation assumes that the presentation of the context for attributes does not change. In other words, the features of the protected attribute which present themselves through the context should be invariant across 𝔾 and ℍ. For example, if one attempts to infer race from an image, this assumption simply states the visual features indicative of race should be consistent. The assumption would be violated, for example, if 𝔾 protects Asian males and ℍ protects Asian females. Test Divergence Learning Bound In this part, for simplicity, we assume the parameters θ are learned from a finite space Θ. Other proof techniques may allow arbitrary Θ; e.g., <cit.>. Consider an equitable goal 𝔾 with associated test h. Suppose a sample of i.i.d. human data is collected 𝕊 = (C̃_i,D̃_i)_i=1^m; (C̃_i, D̃_i) ∼ℍ. Suppose ℍ is context aware and preserves context. Then, for all δ > 0, with probability at least 1-δ, for all θ, 2β×𝐓𝐃_𝔾(θ) is bounded above by 1/m∑_i=1^m |h(D̃_i, Ã_i)_human - h(D̂'_i, Ã_i)_predicted| + √(log|Θ| + ln 2 / δ2m)_data efficiency where β = min_a 𝐏𝐫(à = a).[Note, we also pose a technical requirement: pairwise independence must hold (conditional to the context) between the human dialogue, the predicted dialogue, and the protected attribute. This is not an overly strong assumption; see Appendix <ref> for a detailed discussion with examples.] For interpretation, we break down the upperbound on 2β×𝐓𝐃_𝔾(θ) into two terms: (a) the difference in test output from the human dialogue to the predicted dialogue and (b) a data efficiency term dependent on the number of i.i.d samples m. Equity from Biased Data Notice, the predicted dialogue in (a) is dependent on the human dialogue's context C̃_i – not the goal dialogue's context C – so (a) is actually identical in definition to 𝐓𝐃_𝕊, an empirical observation of 𝐓𝐃_ℍ. That is, (a) is test divergence computed on a human corpus as was done by <cit.>. Since (a) uses a human dialogue corpus to define its goal, Eq. (<ref>) implies that learning human-like dialogue (via ) can also optimize the equity of the dialogue by reducing an upperbound on the equitable goal 𝐓𝐃_𝔾. This is true even if the goal human data is biased. In other words: -based algorithms learn human-likeness and equity, even on biased data. We only require the human data to be context-aware and preserve context (Defs. <ref> and <ref>). Data Efficiency The above interpretation of (a) is only valid if the data efficiency term (b) is also small. For interpretation, we consider the size of the parameter space Θ fixed and focus on the number of i.i.d training samples m. As m increases, (b) ultimately goes to 0 and the effect of (a) dominates the bound. In some cases though, if m is too small (b) can also have an impact. For example, this may be the case when using data-augmentation strategies to create a more equitable distribution. In particular, augmentation reduces the number of i.i.d. data points by creating dependencies in the data, which can reduce the data-efficiency of learning algorithms <cit.>. That is, augmentation can increase the size of (b) in learning bounds on test divergence,[For discussion, see the pf. of Thm. <ref> and remarks.] or in other words: Augmenting training data to improve equity can reduce data-efficiency, and ultimately, model performance. Impact does depend on the augmentation strategy, so we study common proposals for equity, next. 
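As a concrete illustration of the quantities defined in this section, the following minimal Python sketch estimates the test divergence (with the test h taken to be the scoring function s) and the parity gap of a learned dialogue distribution by Monte Carlo sampling. The sampler and function names (sample_goal, sample_policy, sample_attr, score) are illustrative assumptions and are not taken from the paper's released package.

```python
def estimate_td_and_parity_gap(sample_goal, sample_policy, sample_attr, score, n=10000):
    """Monte Carlo estimates of the test divergence (with test h = scoring function s)
    and of the parity gap of the learned dialogue distribution.

    sample_goal()    -> (context, goal_dialogue)   # a draw from the goal distribution
    sample_policy(c) -> predicted_dialogue         # a draw from the learned kernel P_theta(c)
    sample_attr(c)   -> 0 or 1                     # protected attribute A ~ A(c)
    score(d, a)      -> value in [0, 1]            # scoring function s
    """
    td_terms = []
    pred_scores = {0: [], 1: []}
    for _ in range(n):
        c, d_goal = sample_goal()
        d_pred = sample_policy(c)
        a = sample_attr(c)
        td_terms.append(abs(score(d_goal, a) - score(d_pred, a)))
        pred_scores[a].append(score(d_pred, a))
    td = sum(td_terms) / n
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    parity_gap = abs(mean(pred_scores[1]) - mean(pred_scores[0]))
    # With an equitable, balanced goal, the theory guarantees parity_gap <= 2 * td
    # (up to sampling error), which is what the code makes easy to check empirically.
    return td, parity_gap
```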
§ EXPERIMENTS In Section <ref>, we conclude by outlining algorithmic insights revealed by our theory. Next, we test these theories on the GuessWhat?! game corpus. §.§ Dataset, Algorithms, and Evaluation Unless otherwise noted, we use identical experimental settings, hyperparameters, etc. as <cit.>. Dataset Our dataset is the corpus for the GuessWhat?! game proposed by <cit.>. Gameplay is described in Figure <ref> and an example is shown as the visual dialogue in Figure <ref>. We also give a detailed description of the game rules in Appendix <ref>. We use the original train/val. splits and provide statistics on this corpus in Appendix <ref>. For training, unless otherwise noted, we use the full train set and report 1 seed. We focus on modelling the question-player and use an automated answer-player trained on human data. Protected Attribute For these experiments, we use gender (male and female) as the protected attribute. When the protected attribute is female gender (𝐅), we set a=1 as long as all human dialogues use at least one female-gendered word.[{she, woman, her, hers, gal, girl, women, gals, girls}] When the protected attribute is male gender (𝐌), we set a=1 as long as all human dialogues use at least one male-gendered word.[{he, man, him, his, guy, boy, men, guys, boys}] Conceptually, this labeling scheme uses human annotator consensus to determine when it is appropriate or inappropriate to ask gender-specific questions: if a=1, all human annotators perceive the protected gender to be present in the image and relevant to gameplay. Importantly, the labeling scheme also implies that the human dialogue satisfies our assumptions in  <ref>: context awareness (Def. <ref>) and context preservation (Def. <ref>); i.e., as shown in Appendix <ref>. Different conceptualizations of how the protected attribute should be defined are possible, but we focus on this scheme because it allows us to simulate the assumptions of our theory in  <ref>, and therefore, best test our theory in practice. As a final note, while we focus on male/female gender in these experiments, using more than two categories for protected attributes is also possible. Simply, one checks the parity gap for each new protected attribute to be added. This would allow our theoretical and empirical study to be extended to general multi-category attributes; e.g., race or religion. Algorithm is a cooperative learning algorithm proposed by <cit.> to model the question-player. The algorithm is based primarily on a self-play learning phase <cit.> which learns from machine-machine dialogue. This is used in addition to (after) a more traditional supervised learning phase (i.e., on human-human dialogue). See Appendix <ref> for details. Algorithm An extension of proposed by <cit.> with the purpose of better optimizing test divergence during the self-play learning process. Through some theoretical analyses, ultimately, the authors propose to regularize the self-play phase by re-incorporating human-human data from the supervised phase. Algorithm A modification of the algorithm. While re-incorporating human data, an augmentation (downsampling) strategy is used to balance occurrence of protected attributes; i.e., like other strategies for equity <cit.>. See Appendix <ref> for details. 
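To make the labeling and scoring scheme concrete, here is a minimal sketch of how the gendered-word identifier v and the accuracy-parity scoring function s(D, A) = |A - v(D)| could be implemented with the word lists above; the whitespace tokenization and punctuation stripping are simplifying assumptions, not details specified by the paper.

```python
FEMALE_WORDS = {"she", "woman", "her", "hers", "gal", "girl", "women", "gals", "girls"}
MALE_WORDS = {"he", "man", "him", "his", "guy", "boy", "men", "guys", "boys"}

def verbalizes(dialogue, protected_words):
    """Identifier v(D): 1 if any turn of the dialogue uses a word tied to the protected attribute."""
    tokens = dialogue.lower().replace("?", " ").replace(".", " ").replace(",", " ").split()
    return int(any(t in protected_words for t in tokens))

def parity_score(dialogue, attribute, protected_words):
    """Scoring function s(D, A) = |A - v(D)|: 1 marks a missed or spurious gender reference."""
    return abs(attribute - verbalizes(dialogue, protected_words))
```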
Human-Likeness Evaluation To evaluate human likeness, we use metrics proposed by <cit.>: average accuracy 𝐚𝐜𝐜 in identifying the true goal-object across three random seeds, average lexical diversity (𝐥𝐝𝐢𝐯; type/token ratio over all dialogues), average question diversity (𝐪 𝐝𝐢𝐯; % unique questions over all dialogues), and average percent of dialogues with repeated questions (𝐫𝐞𝐩 𝐪). We report these on the full test data. Equity Evaluation To evaluate equity, we focus on accuracy parity; i.e., score parity with scoring function described in Eq. (<ref>).[We focus on accuracy parity because the dataset we consider is not likely to exhibit any significant parity issues in toxicity, sentiment, etc. Instead, the systemic biases in the data are most likely to impact accuracy parity.] To replicate evaluation against the goal distribution in Def. <ref>, we apply an augmentation strategy to the test set (similar to the algorithm; see Appendix <ref>). Because our ground truth data is inferred from human annotators focused on game success, we also incorporate additional human annotations. 𝐡𝐮𝐦.𝐞𝐯𝐚𝐥. is % of model dialogues using gendered words correctly based on annotation (50 per method per annotator). Namely, two annotators[College educated, native English speakers.] were asked to determine correctness of gendered word use, evaluating both incorrect usage as well as false negatives; i.e., where use would be appropriate/helpful.[To prime responses, annotators were prompted with questions like “If any gendered words were used, were they used correctly?” as well as “If a gendered word was not used, would it have been helpful to use one to complete the task?”.] §.§ Results produces human-like, equitable text. In Tab. <ref>, improves upon in terms of both human-likeness and equity, across all metrics. These observations validate our theoretical analyses. In particular, (as the name implies) is designed based on the framework to minimize test divergence. From previous work, we know this means it should improve human-likeness <cit.>. Now, from our current theoretical study (Thm. <ref>), we also hypothesize can improve equity as long as certain assumptions are met (Def. <ref>, <ref>). Since the dataset we study satisfies the specified assumptions, our theoretical expectation of is the multi-faceted improvement we observe. That is, our theory predicts the empirical improvements in human-likeness and equity achieved by . The ability of our theory to predict the impact of algorithm design choices is an important practical implication. We are also able to draw similar conclusions for , which we discuss next. does not improve equity as well as , but overall, its behavior aligns with our theoretical predictions. Thm. <ref> also makes the observation that data-augmentation strategies like can sometimes perform worse than alternatives which focus only on human-likeness (i.e., due to data-inefficiency). Since does augment data significantly, we might expect to perform worse than , and ultimately, it does in Tab. <ref> (all metrics but Δ 𝐌). With that said, another of our theoretical results (Thm. <ref>) suggests data-augmented versions of algorithms like can, in fact, improve equity, especially in more general cases where data does not satisfy the circumstances of our experimental data. In experiments, this insight is reflected in comparing and the baseline. outperforms in Tab. <ref> on all metrics but 𝐓𝐃 𝐅. Test divergence models equity well. 
Finally, we recall test divergence is the key link between existing learning theoretic work and our analysis of equitable dialogue. In particular, we show, theoretically speaking, that 2𝐓𝐃 always bounds the parity gap Δ, which measures equity. As a result, learning theory algorithms can implicitly learn to be fair in many cases. Indeed, empirical results in Tab. <ref> agree with this theoretical bound in every case, and further, suggest 𝐓𝐃 may be useful for ranking equity of algorithms, since 𝐓𝐃 is predictive of all improvements from to . Again, our theoretical predictions match our empirical observations, highlighting the practical utility of our theory. § CONCLUSIONS In this paper, we provide a first in-depth study of equity in dialogue, formalizing mathematical notions of equity in dialogue and using computational learning theory to study how equity can be achieved through algorithm design. Our empirical results show how our formal theoretical study of equity in dialogue can be used, with great benefit, to select and design algorithms in a task-oriented dialogue setting. In particular, we can: design algorithms that achieve both equity and human-likeness, predict unexpected consequences of data-augmentation, and provide proxy statistics that are useful in ranking the equity of algorithms. To promote further research, our code, data, and a python package will be made publicly available.[https://github.com/anthonysicilia/equitable-dialogue-ACL2023 https://github.com/anthonysicilia/equitable-dialogue-ACL2023] § ACKNOWLEDGEMENTS The authors thank Amazon for their support during this project. § LIMITATIONS While our theoretical work is broadly applicable to any protected attribute and any dialogue task, our empirical study has primarily tested gender bias on the GuessWhat?! task. Continued experimental study on a wider range of protected attributes and tasks can better support our mathematical findings. Also, users of our theory should verify the assumptions of our theory when using it to draw insights on new datasets. Specifically, as the type of data bias changes, it is possible the assumptions of Thm. <ref> may no longer be met. Users of our theory should take care in ensuring context-awareness and context-preservation, for example, are reasonable assumptions on new data, prior to applying the insights of  <ref>. Lastly, while all of our gender annotations come from human annotators, only a smaller subset come from annotators primed to judge correctness/equity of gender reference. So, more in-depth human evaluation can better support our theoretical results as well. § ETHICS STATEMENT The goal of this paper is to present a theoretically grounded framework to mitigate bias in dialogue systems. Our theoretical and empirical techniques can lead to important insights/solutions for algorithm design that reduce bias, along with any unintended harm associated with this bias. With this said, some of the proposed algorithms rely on pretrained models such as word or image embeddings, and any harm or bias associated with these models can still be present after efforts to mitigate. Thus, models trained with these techniques should still undergo rigorous human evaluation for presence of biases before being deployed. Our human subject board approved our protocol. Human subjects participated voluntarily and were compensated according to the regulations approved by our human subject review board. § PROOFS AND ADDITIONAL TECHNICAL DISCUSSION §.§ Proof of Thm.
<ref> Consider an equitable goal 𝔾 and let h ≡ s (the scoring function). Then, Δ(𝔾̂_θ) ≤ϵ whenever 𝐓𝐃_𝔾(θ) ≤ϵ / 2. Suppose 𝐓𝐃_𝔾(θ) ≤ϵ, then we have ϵ ≥𝐄[ | s(D, A) - s(D̂, A)|] = ∑_a ∈𝒜𝐏𝐫(A=a) ·𝐄[ | s(D, A) - s(D̂, A) || A=a] (Law of Total Expectation) = 1/2∑_a ∈𝒜𝐄[ | s(D, A) - s(D̂, A) || A=a] (Balance of 𝔾) ≥1/2∑_a ∈𝒜|𝐄[ s(D, A) - s(D̂, A) | A=a] | (Jensen's Inequality) Now, since 𝔾 is equitable we have there is some value x such that for all a ∈𝒜, we have 𝐄[s(D, A) | A=a] = x. Substituting and expanding the sum over 𝒜, we have ∑_a ∈𝒜|𝐄[ s(D, A) - s(D̂, A) | A=a] | = | x - 𝐄[s(D̂, 0)] | + | x - 𝐄[s(D̂, 1)] |. Next, we put together the previous two equations and utilize the definition of the absolute value to break the proof into cases. For ease of presentation, we let μ = min{𝐄[s(D̂, 0)], 𝐄[s(D̂, 1)] } and M = max{𝐄[s(D̂, 0)], 𝐄[s(D̂, 1)]}. This gives 2ϵ≥𝐄[s(D̂, 0)] - x + 𝐄[s(D̂, 1)] - x if μ≥ x, x - 𝐄[s(D̂, 0)] + x - 𝐄[s(D̂, 0)] if M ≤ x, 𝐄[s(D̂, 0)] - x + x - 𝐄[s(D̂, 1)] if 𝐄[s(D̂, 0)] ≥ x ≥𝐄[s(D̂, 1)], x - 𝐄[s(D̂, 0)] + 𝐄[s(D̂, 1)] - x if 𝐄[s(D̂, 1)] ≥ x ≥𝐄[s(D̂, 0)]. In the last two cases, occurrences of x cancel out and we have precisely 2 ϵ≥Δ(𝔾̂), precisely. Then, in the first case, we have 𝐄[s(D̂, 0)] - x + 𝐄[s(D̂, 1)] - x ≥𝐄[s(D̂, 0)] - μ + 𝐄[s(D̂, 1)] - μ = M - μ. In the second case, we also have x - 𝐄[s(D̂, 0)] + x - 𝐄[s(D̂, 0)] ≥ M - 𝐄[s(D̂, 0)] + M - 𝐄[s(D̂, 1)] = M - μ. Thus, in all cases, we have 2ϵ≥Δ(𝔾̂), the desired result. §.§ Proof of Thm. <ref> §.§.§ Proof Consider an equitable goal 𝔾 with associated test h. Suppose a sample of i.i.d. human data is collected 𝕊 = (C̃_i,D̃_i)_i=1^m; (C̃_i, D̃_i) ∼ℍ. Suppose ℍ is context aware and preserves context. Then, for all δ > 0, with probability at least 1-δ, for all θ, 2β×𝐓𝐃_𝔾(θ) is bounded above by 1/m∑_i=1^m |h(D̃_i, Ã_i)_human - h(D̂'_i, Ã_i)_predicted| + √(log|Θ| + ln 2 / δ2m)_data efficiency where β = min_a 𝐏𝐫(à = a), D̂'_i ∼ℙ_θ(C̃). As noted in the main text we also pose the requirement of pairwise independence: first, between D, D̂, and A in the definition of 𝐓𝐃_𝔾 (conditional to C); second, between D̃_i, D̂'_i, and Ã_i (again, conditional to the context C̃_i). First, we enumerate some of the key assumptions for easy reference: * (A1): ℍ is context aware * (A2): ℍ is context preserving * (A3): D, D̂, A are independent conditional to C; and, D̃_i, D̂'_i, Ã_i are independent conditional C̃_i * (A4):[Here, we are using the same shorthand from the main text; e.g., in Def. <ref>.] 𝐏𝐫(D̂ | C) = 𝐏𝐫(D̂^' | C̃) since both probabilities represent identical sampling from ℙ_θ * (A5): 𝐏𝐫(A | C) = 𝐏𝐫(à | C̃) since both probabilities represent identical sampling from 𝔸 Now, we consider decomposing the joint probability density 𝐏𝐫(D=d, D̂=d̂, A=a), which importantly, is the joint density used to compute the expectation in 𝐓𝐃_𝔾(θ).[We ignore U since it is unused in this paper. The proof would be more complicated, but similar had we included U.] To begin, we have 𝐏𝐫(D=d, D̂=d̂, A=a) = ∑_c𝐏𝐫(C=c) 𝐏𝐫(D=d, D̂=d̂, A=a | C = c) (Law of Total Exp.) 
= ∑_c𝐏𝐫(C=c) 𝐏𝐫(D=d | C = c)𝐏𝐫(D̂=d̂| C = c)𝐏𝐫(A=a | C = c) (A3) = ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D=d | C = c)𝐏𝐫(D̂=d̂| C = c)𝐏𝐫(A=a | C = c) (×1 trick) = ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d |C̃ = c)𝐏𝐫(D̂=d̂| C = c)𝐏𝐫(A=a | C = c) (A1) = ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d |C̃ = c)𝐏𝐫(D̂^'=d̂|C̃ = c)𝐏𝐫(A=a | C = c) (A4) = ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d |C̃ = c)𝐏𝐫(D̂^'=d̂|C̃ = c)𝐏𝐫(Ã=a |C̃ = c) (A5) = ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a |C̃=c) (A3) Further, we can relate the probability distributions for the contexts C and C̃ through their implied attribute distributions via (A2) 𝐏𝐫(C=c) = ∑_a 𝐏𝐫(C = c | A = a) 𝐏𝐫(A = a) (Law of Total Exp.) = ∑_a 𝐏𝐫(C̃ = c |à = a) 𝐏𝐫(A = a) (A2) = ∑_a 𝐏𝐫(C̃ = c |à = a) 𝐏𝐫(à = a) ·𝐏𝐫(A = a)𝐏𝐫(à = a) (×1 trick) ≤∑_a 𝐏𝐫(C̃ = c |à = a) 𝐏𝐫(à = a) ·12β (balance of 𝔾 and def. of β) = 12β𝐏𝐫(C̃=c) Applying this to our previous outcome, we have ∑_c𝐏𝐫(C=c)𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a |C̃=c) ≤∑_c12β𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a |C̃=c) = 12β𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a) (Law of Total Exp.). Notice, the new joint density 𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a) can be used to compute the expectation in 𝐓𝐃_ℍ, while the previous joint density was used to compute the expectation in 𝐓𝐃_𝔾. Both expectations have everywhere non-negative variables. So, ultimately, the relation between the joint densities gives: 𝐓𝐃_𝔾(θ) ≤12β𝐓𝐃_ℍ(θ) To complete the proof, we need to bound the true test divergence on the human data 𝐓𝐃_ℍ(θ) with our observation 𝐓𝐃_𝕊(θ). To do so, without using a test set, we need to apply a PAC learning bound for parameters selected from a finite hypothesis space (i.e., so that the result holds for any θ learned from Θ). We choose the structural risk minimization bound presented in <cit.> – i.e., Thm. 7.7 – and apply it to our context,[To apply the theorem, we define the prefix free description language for Θ by simply enumerating each parameter in Θ (arbitrary order) and then mapping each parameter to the binary expansion of its assigned numeral. The loss needs to be replaced with the test divergence as well, but with this replacement, the required uniform convergence property for each individual parameter is still given by Hoeffding’s Inequality, so the proof as a whole is unchanged beyond this simple substitution.] which gives the final result. §.§.§ Remarks on Data Efficiency Note, the last step of the proof can be applied directly to 𝐓𝐃_𝔾(θ) as well, or any other instance of the test divergence for that matter. In the main text, when we refer to the data-efficiency of augmentation strategies, it is important to note that these augmentation strategies can change the distribution over which we compute test divergence. Although this distribution and the resulting test divergence may change, the data-efficiency term will be effected equally.[Some strategies for measuring data-efficiency depend on the data – our comment excludes these.] For example, consider downsampling – a simple augmentation strategy used in the experiments. In this case, if one downsamples to achieve balance in the frequency of the protected attribute, the data efficiency term would change from √(log|Θ| + ln 2 / δ2m) to √(log|Θ| + ln 2 / δ2α m), where α is fraction of data remaining after downsampling. In an ideal case, where there is only one protected attribute to consider during re-balancing, we have α = 2β and the data efficiency is reduced by a factor of 1 / √(2β), compared to no augmentation. 
The reader may notice based algorithms also experience a reduction in data-efficiency by the slightly larger factor of 1 / 2β applied to the whole bound; i.e., see Eq. (<ref>). With this said, the reason we allude to worse data-efficiency overall for augmentation strategies is that these strategies typically also re-use data to define the augmentation; e.g., in the mentioned case, where one downsamples for balance, an additional data-efficiency term must be added to the bound to measure the impact of estimating β from training data prior to conducting the downsampling.[If this added term is γ times the original data-efficiency, the inflation in Eq. (<ref>) actually becomes smaller than the inflation caused by data augmentation, whenever β > 1 / 2 γ^2.] Additional reduction can also be induced from imperfect estimation of β, and furthermore, when there is more than one protected attribute to consider. In the latter case, we may need to reduce the effective dataset size α m further to simulate balance (as in the later experiments; see Appendix <ref>). Thus, depending on the problem, these compounding effects can easily lead to reduced efficiency overall; i.e., compared to basic application of based algorithms without augmentation on the whole dataset. Due to the complexity of this comparison, which is dependent on augmentation strategies, estimation error, etc., we leave formal comparison to future work and simply conjecture on the potential for worse data-efficiency of data augmentation strategies in the main text. Albeit, this hypothesis is confirmed in experiments throughout Section <ref>, and it should be noted our main argument here is that the data-efficiency of augmentation strategies needs to be considered, where it has previously not been in most literature. §.§.§ Assumption of Pairwise Independence As mentioned in the main text, the assumption of pairwise independence is not an overly strong assumption. Conditional to the context C, pairwise independence stipulates realizations of the random values D, D̂, and A do not provide additional information about each other once we know C=c. For example, in GuessWhat?!, knowing the gender does not impact our expectation of the QA pairs, once the image is already known. Alternatively, knowing predicted QAs does not change our expectation about human QAs, after the image is known. The latter is not so intuitive, but independence of predictions on (test) outcomes and the outcomes themselves is common among many simple learning models (e.g., fixed effects linear regression) since the learned parameters are only dependent on the i.i.d. training outcomes. §.§ Labeling Scheme As noted, the labeling scheme for the protected attribute studied in the main text allows us to satisfy some of the key assumptions (on the human data) stipulated by Thm. <ref>: context awareness (Def. <ref>) and context preservation (Def. <ref>). To see this, we show that there exists an equitable goal according to score parity with scoring function defined in Eq. (<ref>), and importantly, that this equitable goal is related to the human data as specified by Defs. <ref> and <ref>. In turn, the existence of such an equitable goal implies that the human data and scoring function we study in the experiments does indeed satisfy Def. <ref> and Def. <ref>. Construction of Goal To begin, consider some random variables (D, C, A) with the below constraints, and let (D̃, C̃, Ã) correspond to random variables for the human data as before. 
These will be used to construct the equitable goal we have just previously discussed: 𝐏𝐫(D = d | C = c) = 𝐏𝐫(D̃ = d |C̃ = c), 𝐏𝐫(C = c | A = a) = 𝐏𝐫(C̃ = c |à = a), 𝐏𝐫(A = 0) = 𝐏𝐫(A = 1). Now, also assume D is independent of A given C (that is, A3 in Thm. <ref>), so we can decompose the joint distribution of (D, C, A) according to our constraints: 𝐏𝐫(D=d, C=c, A=a) = 𝐏𝐫(D=d, C=c | A=a) 𝐏𝐫(A=a) = 𝐏𝐫(D=d | C=d, A=a) 𝐏𝐫(C=c | A=a) 𝐏𝐫(A=a) = 𝐏𝐫(D=d | C=c) 𝐏𝐫(C=c | A=a) 𝐏𝐫(A=a) (cond. indep. constraint A3) = 𝐏𝐫(D̃=d |C̃=c) 𝐏𝐫(C̃=c |Ã=a) 𝐏𝐫(A=a) (Eq. <ref> constraints) Next, we verify there are distributions with this joint density with total probability summing to 1. To do this, we re-use the above expansion to arrive at: ∑_d,c,a𝐏𝐫(D=d, C=c, A=a) = ∑_d,c,a𝐏𝐫(D̃=d |C̃=c) 𝐏𝐫(C̃=c |Ã=a) 𝐏𝐫(A=a) = 1/2∑_d,c,a𝐏𝐫(D̃=d |C̃=c) 𝐏𝐫(C̃=c |Ã=a) (assumed constraint on A) := 1/2 [ x(1) + x(0) ] (use x(a) as a shorthand for the sum over d,c) Simultaneously, since (D̃, C̃, Ã) already correspond to a distribution, we can use similar logic (i.e., LTE and conditional independence) to expand the sum over this distribution's joint density. In doing so, we must have 1 = 𝐏𝐫(à = 0) · x(0) + 𝐏𝐫(à = 1) · x(1) := a × x(1) + b × x(0) (defining shorthand). So, the density in Eq. (<ref>) has total probability summing to 1 if there is a solution with a,b ∈ [0,1] and a + b = 1 to the following system: 1 = 1/2 [ x(1) + x(0) ] 1 = a × x(1) + b × x(0). If a ≠ b ≠ 1/2, there are solutions a,b ∈ [0,1] with a+b=1 as long as x(1) = x(0), which is indeed true, since due to (A3) x(a) can be re-written as a conditional joint probability over D̃ and C̃. So, x(1) = x(0) = 1. Note, the other axioms of probabilities follow directly because the constraints only restrict the probabilities for (D,C,A) to existing (known) probability functions. Thus, we know a distribution satisfying the needed constraints in Eq. (<ref>) exists. Specifically, a distribution related to the human data as specified by Defs. <ref> and <ref> exists, and we have shown the desired result. Equity of Goal Finally, it remains to see how the distribution corresponding to (D,C,A) is equitable. Score parity follows easily by definition of à = v(D̃). In particular, the test divergence on the human data is 0, so Eq. (<ref>) implies the test divergence on the distribution of (D,C,A) is 0, and so Thm. <ref> implies the parity gap for the distribution of (D,C,A) is 0. Balance of the distribution of (D,C,A) also follows easily from the final constraint in Eq. (<ref>), and so we are done. §.§ Downsampling The downsampling process for the algorithm restricts to images which are determined to have either of the protected attributes — i.e., a=1 when M is the protected attribute or a=1 when F is the protected attribute — such that there are an equal number of occurrences of a=1 for both protected attributes. That is, in the end result, the new training dataset has an equal number of occurrences where annotator consensus identified a male or a female, and all other images are thrown out. This is achieved through a simple randomized filtering approach. As noted, images without a=1 for either protected attribute are also thrown out. This allows us to ensure we are training a (single) model that will be equitable on both protected attributes simultaneously,[If we include images without labels, we cannot be sure of equal occurrence of both attributes.] which is the primary goal in evaluation. 
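A minimal sketch of the randomized filtering just described is given below; the dictionary fields 'male' and 'female' (annotator-consensus flags) and the exclusion of images flagged for both attributes are assumptions made for illustration only.

```python
import random

def downsample_balanced(games, seed=0):
    """Randomized filtering sketch: keep equally many games whose images were labeled a=1
    for the male attribute and a=1 for the female attribute; all other games are dropped.
    Each game dict is assumed to carry boolean consensus flags 'male' and 'female'; games
    flagged for both attributes are dropped here for simplicity (an assumption, not a
    detail specified by the paper).
    """
    rng = random.Random(seed)
    male_only = [g for g in games if g["male"] and not g["female"]]
    female_only = [g for g in games if g["female"] and not g["male"]]
    k = min(len(male_only), len(female_only))
    return rng.sample(male_only, k) + rng.sample(female_only, k)
```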
Note, this strategy does not hurt the object identification accuracy either (as evidenced by empirical results). This may be for two reasons: first, other objects (besides persons) appear frequently enough in the downsampled dataset as to not effect performance; second, downsampling is only used in the cooperative learning phase, and object recognition ability is primarily learned in the pre-training phase. As alluded in our theoretical discussion, another consequence of this augmentation strategy is that the number of i.i.d. data points is greatly reduced in the cooperative learning phase (e.g., compared to the -based algorithm); i.e., we estimate less than 1/6th of the original dataset is used. Therefore, this indeed presents a good example to test our theoretical hypotheses on the impacts of data augmentation and data-inefficiency. Downsampling to create the equitable distribution is done in a similar manner, except – since we don't need to worry about inefficiency in model training any longer – a separate dataset is created for each protected attribute. So, there is one dataset with balanced occurrences of a=1 and a=0 when the protected attribute is M, and another dataset with balanced occurrences when the attribute is F. Importantly, because labeling scheme enforces our assumptions about context hold in the human data (see Appendix <ref>), this should create an equitable goal. §.§ GuessWhat?! Game Rules and Statistics Here, we introduce the GuessWhat?! visual dialogue game <cit.>. We use this game as a running example to ground abstract theoretical concepts in practical application. Importantly, our theoretical study is more generally applicable (i.e., beyond just this example). Statistics on object distribution and dialogue length are provided in Figure <ref>. After applying the labeling scheme and downsampling (as just described), our dataset consists of about 3200 (half with a=1) when F is the protected attribute and 6400 (half with a=1) when M is the protected attribute. Note, this also indicates that the ratio of M to F in the original dataset is about 2 to 1. Gameplay An image and goal-object within the image are both randomly chosen. A question-player with access to the image asks yes/no questions to an answer-player who has access to both the image and goal-object. The question-player's goal is to identify the goal-object. The answer-player's goal is to reveal the goal-object to the question-player by answering the yes/no questions appropriately. The question- and answer-player converse until the question-player is ready to make a guess or at most m questions have been asked.[By default, m=8 following <cit.>.] The question-player then guesses which object was the secret goal. §.§ Cooperative Learning Cooperative Learning generates questions Q̂_i and object guess Ô based on answer player answers A_i as below: Ô = 𝙶𝚞𝚎𝚜_α(𝙴𝚗𝚌_β(I, D̂)) Q̂_i+1 = 𝚀𝙶𝚎𝚗_θ(𝙴𝚗𝚌_β(I, Q̂_1, A_1, …Q̂_i, A_i). The neural-model 𝚀𝙶𝚎𝚗_θ is called the question-generator and the neural-model 𝙶𝚞𝚎𝚜_α is called the object-guesser. The final neural-model 𝙴𝚗𝚌_β is called the encoder and captures pertinent features for the former models to share. All model parameters (α, β, θ) are first pre-trained on human-human dialogue and then the model-components are further updated through cooperative self-play <cit.>, in which the model-components and an automated answer-player play new games (machine-machine dialogue) to continue the learning process. The shared encoder is used to improve human-likeness of questions <cit.>. 
Note, the change from Cooperative Learning (above) to Cooperative Learning with simply incorporates additional human data during training the above model, instead of using only machine-machine dialogue. See <cit.> for more details on both approaches to cooperative learning.
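For orientation, the self-play loop described in this appendix can be sketched as follows; the component interfaces (encoder, qgen, guesser, answerer) are illustrative assumptions and do not reproduce the actual implementation of <cit.>.

```python
def selfplay_game(encoder, qgen, guesser, answerer, image, goal_object, max_turns=8):
    """One self-play episode of the question-player pipeline: each question is
    Q_{i+1} = QGen(Enc(I, Q_1, A_1, ..., Q_i, A_i)) and the final guess is O = Gues(Enc(I, D)).
    All component interfaces here are illustrative assumptions, not the released code.
    """
    dialogue = []                                       # list of (question, answer) pairs
    for _ in range(max_turns):
        state = encoder(image, dialogue)                # shared encoder over image and QA history
        question = qgen(state)                          # question-generator proposes a yes/no question
        answer = answerer(image, goal_object, question) # automated answer-player replies
        dialogue.append((question, answer))
    guess = guesser(encoder(image, dialogue))           # object-guesser picks the goal-object
    return dialogue, guess                              # success if guess == goal_object
```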
http://arxiv.org/abs/2307.03947v1
20230708101048
Hyperelliptic Gorenstein curves and logarithmic differentials
[ "Luca Battistella", "Sebastian Bozlee" ]
math.AG
[ "math.AG", "math.GT", "14H20 (Primary) 14H10 (Secondary)" ]
http://arxiv.org/abs/2307.04612v1
20230710145514
Emergence of Cooperation in Two-agent Repeated Games with Reinforcement Learning
[ "Zhen-Wei Ding", "Guo-Zhong Zheng", "Chao-Ran Cai", "Wei-Ran Cai", "Li Chen", "Ji-Qiang Zhang", "Xu-Ming Wang" ]
physics.soc-ph
[ "physics.soc-ph" ]
Zhen-Wei Ding^1, Guo-Zhong Zheng^2, Chao-Ran Cai^3, Wei-Ran Cai^4, Li Chen^2, Ji-Qiang Zhang^1 (corresponding author, [email protected]), Xu-Ming Wang^1 (corresponding author, [email protected]). [1]School of Physics, Ningxia University, Yinchuan 750021, P. R. China [2]School of Physics and Information Technology, Shaanxi Normal University, Xi'an 710062, P. R. China [3]School of Physics, Northwest University, Xi'an 710127, P. R. China [4]School of Computer Science, Soochow University, Suzhou 215006, P. R. China Cooperation is the foundation of ecosystems and human society, and reinforcement learning provides crucial insight into the mechanism of its emergence. However, most previous work has focused on self-organization at the population level, while the fundamental dynamics at the individual level remain unclear. Here, we investigate the evolution of cooperation in a two-agent system, where each agent pursues an optimal policy according to the classical Q-learning algorithm while playing the strict prisoner's dilemma. We reveal that strong memory and long-sighted expectations yield the emergence of Coordinated Optimal Policies (COPs), where both agents act like “Win-Stay, Lose-Shift” (WSLS) to maintain a high level of cooperation. Otherwise, players become tolerant of their co-player's defection, cooperation eventually loses stability, and the policy “all Defection” (All-D) prevails. This suggests that tolerance could be a good precursor to a crisis in cooperation. Furthermore, our analysis shows that the Coordinated Optimal Modes (COMs) for different COPs gradually lose stability as memory weakens and expectations of the future decrease: agents fail to predict their co-player's actions, and defection dominates. As a result, we derive the constraint on future expectations and memory strength required to maintain cooperation. In contrast to previous work, the impact of exploration on cooperation is found not to be uniform but to depend on the composition of COMs. By clarifying these fundamental issues in the two-player system, we hope that our work will be helpful for understanding the emergence and stability of cooperation in more complex, realistic scenarios. * Strong memory and long-sighted expectations yield “win-stay, lose-shift” and high cooperation * Tolerance of exploitation could be a good precursor to a crisis in cooperation * The impact of exploration on cooperation nonmonotonically depends on the composition of the coordinated optimal modes Keywords: Nonlinear dynamics, Cooperation, Repeated game, Reinforcement learning § INTRODUCTION Cooperation is ubiquitous and significant, from ant fortress associations and altruistic behavior of pathogenic bacteria in biological systems <cit.> to community activities and civic participation in human society <cit.>. However, the emergence of cooperation is not straightforward: although the common interest would require the majority to cooperate, exploiting others by defection could maximize an individual's interest. Such a dilemma, arising from the conflict between collective and individual welfare, is captured in a number of classical game models <cit.>. Here, how cooperative behaviors evolve still remains an open question. Among the most popular models for studying cooperation mechanisms, the Prisoner's Dilemma Game (PDG) stands out for its simplicity. It is well known that defection, as the Nash equilibrium, is the optimal choice for individuals in a single round of this game <cit.>.
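A minimal sketch of this one-shot dilemma is given below; the payoff values satisfying T > R > P > S are illustrative assumptions, chosen only to verify that mutual defection is the unique Nash equilibrium of the single-round game.

```python
# Illustrative payoffs with T > R > P > S (the ordering that defines a strict prisoner's
# dilemma); the numerical values themselves are assumptions.
T, R, P, S = 5, 3, 1, 0
payoff = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def best_response(opponent_action):
    """Best response of the row player to a fixed co-player action in the one-shot game."""
    return max(["C", "D"], key=lambda a: payoff[(a, opponent_action)][0])

# Defection is the best response to both C and D, so (D, D) is the unique Nash equilibrium,
# even though mutual cooperation (R, R) would give both players more than (P, P).
assert best_response("C") == "D" and best_response("D") == "D"
```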
The repeated PDG, however, potentially provides an escape route to cooperation, as revealed in both theoretical predictions and experiments <cit.>. Previous studies show that the equilibria of repeated PDGs depend crucially on whether a game is finitely or infinitely repeated <cit.>. There are some exceptions, however: incomplete information, e.g. uncertain preferences of the players <cit.>, an uncertain number of rounds <cit.>, the termination rule <cit.>, or rewards with noise <cit.>, etc., can lead to altruistic cooperation even in finitely repeated PDGs. This leads to another interesting theme in repeated PDGs, namely the relevance of strategies to the cooperation level. Researchers have uncovered a number of strategies for actions, in which individuals' future deeds adhere to particular rules based on scant historical data, and they have investigated the dependence of the cooperation level on these rules <cit.> as well as the choices made by the individuals within these strategies for actions <cit.>. Based on the above works and the introduction of the framework of evolutionary game theory, considerable progress has later been made around the mechanism behind the emergence of cooperation among unrelated individuals <cit.>. It is notable that in most of these previous studies, the strategies either do not evolve or are fixed once adopted, showing weak adaptivity towards the circumstances. Humans and many other creatures, however, have sophisticated cognitive capacities, such as reinforcement learning <cit.>, behaviour prediction <cit.>, intention recognition <cit.>, and intelligence from social interactions <cit.>. A new paradigm accounting for this adaptivity is needed to understand cooperative behaviours in reality. The past decades have witnessed the flourishing of machine learning, which is rooted in human cognition and neuroscience <cit.> and has many successful applications across a wide range of fields <cit.>. Reinforcement learning, as one of the most influential branches of machine learning, is found to be particularly suitable for understanding the evolution of cooperation <cit.>. Reinforcement learning was originally designed to make optimal decisions that maximize the rewards for given states through exploratory experimentation <cit.>, and this setup matches the evolution of cooperation closely. Indeed, some studies have already adopted the idea of reinforcement learning to investigate repeated PDGs <cit.>. With this new paradigm, new insights are obtained by studying the impact of different factors on cooperation <cit.>; e.g., it is found that cooperation can benefit from improved exploration methods <cit.>, self-adaptive memories <cit.>, evolved payoffs <cit.> and even intrinsic fluctuations <cit.>. Some other works discuss the optimization of algorithms to facilitate cooperation and increase rewards <cit.>, or find strategies to play against the classical strategies <cit.>. In parallel, some theories have also been developed, such as symmetric equilibrium <cit.>, symmetry breaking <cit.> or fundamental dynamics <cit.>. Building on these studies, researchers have inspected cooperation as self-organization in populations or multi-agent systems, where strategies are continuously adjusted by learning instead of being fixed, as in imitation learning in the classical evolutionary games <cit.>.
In spite of the progress in employing reinforcement learning to explain how humans deal with various tasks <cit.>, there are still a number of interesting questions about the cooperation mechanism: Can we use reinforcement learning, rooted in psychology and neuroscience, to understand cooperation in the repeated PDGs observed by economists? Can we bridge the strategies (equilibria) of classical economics and the policies (behaviour modes) of machine learning? Addressing these questions is of paramount significance because it helps us to understand the connections and differences between social and artificial intelligence systems. This paper is organized as follows. In Sec. <ref>, we present a general model that combines reinforcement learning algorithms with repeated games for two agents. The simulation results for the strict prisoner's dilemma game are shown in Sec. <ref>. To investigate the mechanism behind the phenomena, we provide some analysis in Sec. <ref>, which consists of four parts. Finally, the conclusions and discussion are given in Sec. <ref>. § REINFORCEMENT LEARNING FOR REPEATED GAMES We start by introducing a general Reinforcement Learning Repeated Game (RLRG) for two agents, say “Iris” and “Jerry” (abbreviated as “i” and “j” in notation), who specifically adopt the Q-learning algorithm <cit.>. In this algorithm, Iris/Jerry may take an action against its co-player from an action set 𝒜 = {a_1, ⋯, a_n_a} when it is in one of n_s states from the state set 𝒮 = {s_1,⋯,s_n_s}. The goal is to find a policy that maximizes the expected cumulative reward. At the τth round, the state of each agent consists of its own and its co-player's actions in the previous round, i.e. s(τ) = a(τ-1)ã(τ-1), where a and ã denote the agent's and its co-player's actions, respectively. Therefore, the state set is the Cartesian product of the action set with itself, 𝒜×𝒜→𝒮. In the Q-learning algorithm <cit.>, Iris/Jerry seeks optimal policies through the so-called Q-table by learning. Here, the Q-table is a matrix on the Cartesian product of states (rows) and actions (columns), 𝒮×𝒜→ℝ: Q(τ) = ( [ Q_s_1,a_1(τ) ⋯ Q_s_1,a_n_a(τ); ⋮ ⋱ ⋮; Q_s_n_s,a_1(τ) ⋯ Q_s_n_s,a_n_a(τ); ]). With a Q-table in hand, Iris/Jerry takes an action following its Q-table, a(τ) →max_a'{Q_s,a'(τ)}, a'∈𝒜, with probability 1-ϵ, or a random action within 𝒜 otherwise. Here, max_a'{Q_s,a'(τ)} is the action corresponding to the maximum Q-value in the row of state s. The parameter 0 < ϵ≪ 1 introduces some random exploration besides the exploitation of the Q-table. When Iris and Jerry make their decisions, they receive their own rewards according to their actions and a payoff matrix Π = ( [ Π_a_1a_1 ⋯ Π_a_1a_n_a; ⋮ ⋱ ⋮; Π_a_n_aa_1 ⋯ Π_a_n_aa_n_a; ]), where Π_aã = Π(τ) denotes the agent's reward when it plays action a against action ã of its co-player. At the end of the τth round, Iris/Jerry updates the element Q_s,a of its Q-table as follows Q_s,a(τ + 1) = g( Q(τ),r(τ)) = (1-α)Q_s,a(τ) +α[γ Q_s',a'^max(τ)+r(τ)], where α∈ (0, 1] is the learning rate reflecting the strength of the memory effect; a large value of α means that the agent is forgetful, since its previous value of Q_s,a(τ) is quickly modified. r(τ) = Π(τ)=Π_a(τ)ã(τ) is the agent's reward received in the current round τ. γ∈ [0,1) is the discount factor determining the importance of future rewards, since Q_s',a'^max is the maximum element in the row of the next state s' that could be expected. In such a way, the Q-table is updated, the new state becomes s(τ+1) = a(τ)ã(τ), and a single round is then completed.
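The update rule above is compact enough to state in code. The following is a minimal Python sketch of one decision and one update for a single agent, assuming a small discrete action/state space; the variable names (alpha, gamma, eps) mirror the symbols in the text, but the numerical values are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_states = 2, 4          # e.g. A = {C, D}, S = A x A
alpha, gamma, eps = 0.1, 0.9, 0.02  # learning rate, discount factor, exploration rate (placeholders)

def choose_action(Q, s):
    """Epsilon-greedy choice: exploit the row of state s with probability 1 - eps."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_update(Q, s, a, r, s_next):
    """Q_{s,a} <- (1 - alpha) * Q_{s,a} + alpha * (gamma * max_a' Q_{s',a'} + r)."""
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (gamma * Q[s_next].max() + r)
```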
To summarize, the pseudo code is provided in Algorithm <ref>. § SIMULATION RESULTS FOR PRISONER'S DILEMMA GAME In this work, Iris and Jerry play the Strict Prisoner's Dilemma Game (SPDG) within our RLRG framework, and we focus on the evolution of the cooperation preference f_c. Specifically, the action and state sets are respectively 𝒜 = {a_1,a_2}={C, D}, 𝒮 = {s_1,s_2,s_3,s_4}={CC, CD, DC, DD}, and correspondingly the time-evolving Q-table Q(τ) = ( [ Q_s_1,a_1(τ) Q_s_1,a_2(τ); Q_s_2,a_1(τ) Q_s_2,a_2(τ); Q_s_3,a_1(τ) Q_s_3,a_2(τ); Q_s_4,a_1(τ) Q_s_4,a_2(τ); ]) = ( [ Q_cc,c(τ) Q_cc,d(τ); Q_cd,c(τ) Q_cd,d(τ); Q_dc,c(τ) Q_dc,d(τ); Q_dd,c(τ) Q_dd,d(τ); ]). The payoff matrix of Sec. <ref> is rewritten as Π = ( [ Π_cc Π_cd; Π_dc Π_dd; ]) = ( [ R S; T P; ]) = ( [ 1 -b; 1+b 0; ]), where Π contains a tunable game parameter b∈ (0, 1) controlling the strength of the dilemma. A larger value of b means a higher temptation to defect, making it harder for cooperators to survive. Here, we define the average cooperation preference for Iris and Jerry within the tth window of the simulation as follows f_c(t):=∑_τ=t-T^t∑_k∈{i,j}δ(a^k(τ) - C)/2T, where δ(⋯) is the Dirac delta function and a^i,j are Iris's or Jerry's actions. As can be seen, a sliding window of length T is used for averaging. The time series of the average preference helps us monitor the evolution trend of the different actions. As t and T tend to infinity, f_c(t) is the average cooperation preference over all time and can be denoted as f̅_c. In practice, we use sufficiently large t and T instead of infinity. Apart from the average cooperation preference, we are also interested in the degree of fairness between the agents. For example, in the case of a C-D pair, the defector takes advantage of the cooperator, yielding an unfair reward division. To measure the degree of unfairness, we define it as the average reward difference between Iris and Jerry within consecutive rounds Δ R := ∑_τ=t-T^t|∑_τ^'=τ-1^τΠ^i(τ^') - Π^j(τ^')|/T, in which Π^i,j are the rewards for Iris and Jerry. Obviously, when Δ R→ 0, the two agents statistically preserve the action symmetry with each other, with no apparent exploitation detected. Otherwise, a symmetry breaking in their actions/rewards is present. Fixing the dilemma strength b = 0.2, we first provide the average cooperation preference ⟨f̅_c⟩ in the parameter domain of (α,γ) in Fig. <ref>(a). The results show that the domain can be roughly divided into three regions. In Region 1, where α is low and γ is high, the two agents maintain a high level of cooperation, showing that a strong memory effect and long-sightedness allow cooperation to thrive. By contrast, the opposite setup, where agents are both forgetful and short-sighted, results in a low cooperation preference; full defection is seen in Region 2. Starting from Region 1, ⟨f̅_c⟩ decreases as the agents gradually become short-sighted (i.e. as γ decreases), but cooperation does not disappear as long as the value of α is low enough, which defines Region 3. This means that cooperators still survive as long as the agents are not forgetful. In addition, Fig. <ref>(b) also provides the average reward difference ⟨Δ R⟩ in the same domain as in Fig. <ref>(a). We can see that the reward difference is almost zero within all three regions, except at the boundaries between Regions 1 – 2 and Regions 2 – 3.
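As a rough illustration of the measurement just defined, the sketch below (reusing choose_action and q_update from the previous snippet) runs two Q-learners on the SPDG payoff matrix and evaluates the sliding-window cooperation preference f_c(t) and reward difference ΔR. The run length and window size are arbitrary choices, not the values used for the figures.

```python
C, D = 0, 1
b = 0.2
payoff = np.array([[1.0, -b],
                   [1.0 + b, 0.0]])            # rows: own action, columns: co-player's action

def state_index(a_own, a_other):
    return 2 * a_own + a_other                 # CC=0, CD=1, DC=2, DD=3

Qi = np.zeros((n_states, n_actions))
Qj = np.zeros((n_states, n_actions))
si = sj = state_index(C, C)                    # arbitrary initial state
coop, rew_i, rew_j = [], [], []

for tau in range(200_000):
    ai, aj = choose_action(Qi, si), choose_action(Qj, sj)
    ri, rj = payoff[ai, aj], payoff[aj, ai]
    si_next, sj_next = state_index(ai, aj), state_index(aj, ai)
    q_update(Qi, si, ai, ri, si_next)
    q_update(Qj, sj, aj, rj, sj_next)
    si, sj = si_next, sj_next
    coop.append(int(ai == C) + int(aj == C))
    rew_i.append(ri)
    rew_j.append(rj)

T = 1000                                       # sliding-window length (placeholder)
f_c = np.convolve(coop, np.ones(T) / (2 * T), mode="valid")             # cooperation preference
pair_diff = np.convolve(np.array(rew_i) - np.array(rew_j), np.ones(2), mode="valid")
delta_R = np.convolve(np.abs(pair_diff), np.ones(T) / T, mode="valid")  # unfairness Delta R
```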
This means that high unfairness (corresponding to the frequent appearance of C-D or D-C cases) is observable only in the domain close to the boundary between Regions 1 and 2; otherwise fairness is well maintained. Fig. <ref> first shows some typical time series of the cooperation preference for different combinations of (α, γ) in Fig. <ref>(a). As can be seen, for a low learning rate α, the cooperation preference is relatively stable and f_c increases with the discount factor γ [Fig. <ref>(a)]. With the increase of α, a significant decrease in f_c is detected, and the preference becomes quite volatile [Fig. <ref>(b)]. As α continues to increase, cooperation becomes almost completely unsustainable in all three cases [Fig. <ref>(c)]. By comparison, a high value of γ is more likely to yield a high level of cooperation for a fixed learning rate. These results and Fig. <ref>(a) indicate that the combination of a relatively low learning rate and a high discount factor is the ideal scenario to sustain cooperation. § MECHANISM ANALYSIS §.§ Coordinated optimal policies and modes In classical Q-learning, the agent optimizes the value function of state-action pairs for the optimal policy π^* to maximize the total expected cumulative reward. The value function (i.e. the Q-table) is characterized by a set of Bellman equations as Q^π^*_sa = ∑_s^'∈𝒮p(s'|s,a)[r(s,a,s') +γ∑_a^'∈𝒜π^*(s^',a^')Q^π^*_s^',a^']. Here, p(s'|s, a) is the transition probability for the agent from state s to s' when it takes action a at s, and r(s,a,s') is the reward received by the agent. In a static environment, e.g. a labyrinth-walking game, the optimal policy π^* found by the agent is fixed because the labyrinth environment is fixed and hence p(s'|s,a) is time-independent. But this is not the case in RLRGs, because the environment is now composed of the agents' states, which are time-varying. It is thus obvious that p(s'|s,a) for Iris/Jerry is time-dependent and co-evolves with the Q-tables of both agents together with their policies <cit.>. This means that Iris and Jerry must each seek their optimal policy in coordination with the other. Here we define a set of Coordinated Optimal Policies (COP) denoted as {π^i*,π^j*}, where π^i* is optimal for Iris only if Jerry's policy is π^j*, and vice versa. The COPs are able to remain unchanged for some time, and the system then falls into some corresponding Coordinated Optimal Modes (COMs), which consist of circular state transitions. Here, we check the COMs instead of directly examining the COPs. By careful examination, we find that the system falls into a few modes once π^i,j remain unchanged for some time, which can be classified into 12 circular modes as ϵ→ 0 [see Table <ref> in <ref>]. Fig. <ref> gives the seven short ones from m_1 to m_7. Note that, due to the presence of exploration, long modes are generally more unstable compared to the shorter ones [see <ref>]. These modes can be classified according to their state symmetry. A mode is called symmetric if the state experience is statistically the same when Iris and Jerry are interchanged; otherwise it is called asymmetric. For example, the mode {CC,CC} or {DD, DD} in Fig. <ref>(a) is obviously symmetric, and {CD ↔ DC, DC ↔ CD} is also symmetric. But {CC ↔ CD, CC ↔ DC} is asymmetric, since one agent always acts as a cooperator while the other alternates between cooperation and defection periodically. In an asymmetric mode, there is a reward difference Δ R between Iris and Jerry, and this difference vanishes in symmetric modes.
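To make the notion of a COM concrete: once both policies are frozen (and ϵ→0), the joint dynamics is deterministic, so iterating it from any starting pair of states must eventually close a cycle. The sketch below (reusing state_index from the simulation snippet above) extracts that cycle from a pair of greedy policies; it is an illustration of the definition, not the classification procedure used for the table in the appendix.

```python
def greedy_policy(Q):
    return np.argmax(Q, axis=1)                    # one action per state

def find_com(pi_i, pi_j, s_i0=0, s_j0=0, max_len=50):
    """Iterate the deterministic joint dynamics under fixed policies and
    return the closed cycle of joint states (s_i, s_j), i.e. the COM reached."""
    seen, history = {}, []
    s_i, s_j = s_i0, s_j0
    for step in range(max_len):
        if (s_i, s_j) in seen:                     # the cycle has closed
            return history[seen[(s_i, s_j)]:]
        seen[(s_i, s_j)] = step
        history.append((s_i, s_j))
        a_i, a_j = pi_i[s_i], pi_j[s_j]
        s_i, s_j = state_index(a_i, a_j), state_index(a_j, a_i)
    return history                                 # unreachable for max_len > 16 joint states

# Example: WSLS (cooperate after CC and DD, defect after CD and DC);
# transient CD/DC -> DD/DD, then the CC/CC self-loop: prints [(0, 0)]
wsls = np.array([0, 1, 1, 0])
print(find_com(wsls, wsls, s_i0=1, s_j0=2))
```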
Note that, due to the presence of transitions between two states, we need to compute the accumulated rewards over two consecutive rounds, as in the definition given in Eq. (<ref>). Here, COMs can be regarded as analogous to the equilibria of the finite classical repeated PDGs <cit.>. From our observations, the number of COMs for a given COP lies between 1 and 4. §.§ Transition of States The above analysis indicates that the probability of a state, p(s), and the probabilities of state transitions, p(s^'|s), reflect the agents' COPs and corresponding COMs. Since the states and state transitions of the two agents can be mapped onto each other, as the updating protocol shows, we only need to examine the cooperation preference of one agent through its p(s) and p(s^'|s). Here, we first show the distribution of states in the parameter space of (α,γ) in Fig. <ref>(a-c). The states CC and DD dominate in Regions 1 and 2, respectively, while the two coexist in Region 3. However, CD and DC appear only at the boundary between Regions 1 and 2. To characterize the correlation between consecutive states, we compute the mutual information I(s;s^') defined as I(s;s^'):=∑_s∈𝒮∑_s^'∈𝒮p(s,s^')log[p(s,s^')/(p(s)p(s^'))], which is shown in Fig. <ref>(d). The results show that the mutual information between consecutive states is weak in Regions 1 and 2, but strong at their boundary. Strong mutual information implies a strong correlation between consecutive states and thus predictability in time. It is well known that dynamics near criticality have long-term correlations, and a tiny perturbation is able to trigger a series of large fluctuations. Thus, the observations suggest that there is a bifurcation at the boundary in (α, γ), where the COP gradually changes as the learning parameters are varied. Fig. <ref> exhibits p(s) and p(s'|s) of one agent for four typical combinations of (α, γ) after the system becomes statistically stable. Specifically, we choose (α, γ) from Region 1, the boundary between Regions 1 and 2, Region 2, and Region 3 in Fig. <ref>, respectively. The observations are as follows: (1) In Fig. <ref>(a), where the parameters are located in Region 1, CC is shown to be the primary state and the other states are rarely seen, according to the probabilities p(s). This is because the transitions CD→ DD→ CC and DC→ DD→ CC are dominant according to p(s^'|s). With these, one learns that π^i* and π^j* in the unique COP are the same for both agents, namely “win-stay, lose-shift” (WSLS). This implies that exploitation by an agent's defection incurs retaliation from its co-player; that revenge ultimately lowers the exploiter's payoff, and the agent is forced to cooperate. (2) For the case at the boundary between Regions 1 and 2, Fig. <ref>(b) shows that the state is mainly a mixture of CC and DD, while the fractions of the remaining states are negligible. The reason for the near absence of CD and DC is the high probabilities of CD→ CC and DC→ CC. This means that the cooperator becomes tolerant in the face of the co-player's defection, but this tolerance also causes more exploitation to appear, i.e., the probabilities of CC→ CD and CC→ DC also increase compared to Fig. <ref>(a). The observations indicate that, on the one hand, agents use the co-player's tolerance to get more rewards by defection, but on the other hand, they do not want to break cooperation completely. However, once both defect, they stay in defection with a large probability. Still, there is some chance to rebuild cooperation via DD→ CC.
The results suggest that tolerance is a precursor of cooperation becoming fragile. (3) When located in Region 2, Fig. <ref>(c) shows that the state DD dominates. The transitions to state DD are also non-negligible, i.e. CC→ DD, CD→ DD, and DC→ DD. Consequently, agents have almost no chance to rebuild cooperation once they have defected. That is to say, both π^i* and π^j* are “all-defection” (All-D), i.e., the COP is {All-D, All-D} in Region 2. (4) Fig. <ref>(d) shows the scenario in Region 3, where the state is also mostly a mixture of CC and DD, as in the case of Fig. <ref>(b). However, compared to case (2), the CC state now becomes unstable, although the possibility to rebuild cooperation is non-zero. Overall, in this region, the agents' policies preserve most features of All-D, but also retain some of the character of WSLS, due to the presence of DD→ CC. Cases (1)-(3) suggest that a strong memory may be a prerequisite for rebuilding cooperation once cooperation is broken. §.§ Temporal Correlation To further capture the correlation in time between consecutive states, we compute the joint probability p(s,s^') for the state transition from s to s^'. As a benchmark <cit.>, we also compute the products of the state probabilities p(s)p(s^'). When p(s,s^')>p(s)p(s^'), there is a positive correlation between the two consecutive states compared to purely random occurrence, and vice versa. Figure <ref> displays p(s,s^') and p(s)p(s^') for the four typical cases with the same parameter combinations as in Fig. <ref>. Figure <ref>(a) and (c) show that p(CC, CC) and p(DD, DD) are the only dominant joint probabilities in Regions 1 and 2, respectively. Since the gaps between p(s,s^') and p(s)p(s^') are almost invisible in these two cases, we compute their Cohen's kappa coefficients, given in Fig. <ref> of <ref>, which verify positive correlations between consecutive CCs (DDs) in Region 1 (2). This observation means that the COMs {CC, CC} and {DD, DD} are quite stable in Regions 1 and 2, respectively. The reason why cooperation is preferred in Region 1 is that cooperation must build on the predictability of the agents' policies towards each other, and the predictability increases with γ but decreases with increasing α [Eqs. (<ref>) and (<ref>)]. Therefore, a high level of cooperation is expected in Region 1, where γ is large and α is small. By contrast, weakened cooperation is observed, due to the inferior predictability of the opponent, as α increases and γ decreases. At the boundary between Regions 1 and 2, Fig. <ref>(b) shows that there are positive correlations between CC and all states except DD, while DD is only positively correlated with itself. This indicates that {CC, CC} starts to become unstable as the competing mode {CC↔ CD, CC↔ DC} emerges, and {DD, DD} starts to stabilize. The results imply that, at the boundary, there is a competition between tolerance and revenge for the agents in dealing with exploitation from their opponent. In Region 3, Fig. <ref>(d) displays that p(DD,DD) is much higher than all other joint probabilities, while the two states CC and DD coexist. While the correlations in p(DD,DD) and p(CC,CC) are both negative, p(CC,DD) and p(DD,CC) are positive (see Fig. <ref> of <ref> for their Cohen's kappa coefficients), which implies enhanced propensities of the transitions CC→ DD and DD→ CC compared to the benchmark level. §.§ Boundary of High Cooperation Level Our simulation shows that {WSLS,WSLS} and {CC↔ CD, CC↔ DC} coexist at the boundary.
Based on this clue, we conjecture that there are two competing balances at the boundary: (1) the selection between {WSLS,WSLS} and {CC↔ CD, CC↔ DC}; (2) the switch between {CC↔ CD, CC↔ DC} and {WSLS,WSLS} under perturbations due to exploration. The first balance is the competition between the revenge of {WSLS,WSLS} and the tolerance of {CC↔ CD, CC↔ DC} in dealing with exploitation. Thus, as the values for revenge and tolerance, Q_cd,d^w and Q_cd,c^t are our pivotal Q-values. The analysis in <ref> shows that the Q-values on a key path of a COM/COP will converge to fixed values [as shown in Eqs. (<ref>) and (<ref>)]. Accordingly, the converged Q-values for {WSLS, WSLS} are {[ Q_cc,c^w* = Q_dd,c^w* = Π_cc/(1-γ),; Q_cd,d^w* = Q_dc,d^w* = Π_dd + γΠ_cc/(1-γ), ]. and those for the mode {CC↔ CD, CC↔ DC} are {[ Q_cc,c^t* = (Π_cd+γΠ_cc)/(1-γ^2),; Q_cd,c^t* = (Π_cc+γΠ_cd)/(1-γ^2), ]. {[ Q_cc,d^e* = (Π_dc+γΠ_cc)/(1-γ^2),; Q_dc,c^e* = (Π_cc+γΠ_dc)/(1-γ^2). ]. Due to the asymmetry of the COM, we use the superscripts e and t to distinguish the exploiter's and the tolerant agent's Q-values, respectively. At the boundary, the constraint for the tolerant agent is Q_cd,d^w*=Q_cd,c^t*, i.e., γ = -(1-b)/2+(1/2)√(5-2b+b^2). For the second balance, after entering {CC↔ CD, CC↔ DC}, the exploiter confronts the choice of staying in the current COM or switching to the competing COM {WSLS,WSLS} when faced with the tolerant agent's random exploration. As a COM or COP, {CC↔ CD, CC↔ DC} and {WSLS, WSLS} obviously have some stability at the boundary. Thus, for the exploiter, the Q-values on the key paths of {CC↔ CD, CC↔ DC} and {WSLS,WSLS} have correspondingly converged to their fixed values before the random exploration, i.e., Q_cc,d→ Q_cc,d^e*, Q_dd,c→ Q_dd,c^w* and Q_cc,c→ Q_cc,c^w*. If the tolerant agent defects at the state CC under exploration, the balance of whether or not the exploiter changes path is decided by Q_cc,c and Q_cc,d, i.e., Q_cc,d = (1-α)Q_cc,d^e* + α(γ Q_dd,c^w* + Π_dd) = Q_cc,c^w*, as Fig. <ref> shows. So, another constraint for the boundary is Q_cc,d=Q_cc,c^w*, i.e., α=b/(1+b-γ^2). After substituting b = 0.2, the temptation of Fig. <ref>, into Eqs. (<ref>) and (<ref>), the predicted boundaries match the results of our simulations well, as shown in Fig. <ref>. Our analysis shows that, with the increase of α and decrease of γ, {CC, CC} becomes unstable because tolerance replaces the immediate revenge of WSLS in the face of exploitation by the opponent, leading to the degradation of cooperation. In addition, the analysis also confirms that a transition occurs at the boundary in Fig. <ref>(d), where {WSLS,WSLS} loses stability and the COP changes gradually with the change of the learning parameters. §.§ Impact of Random Exploration We further analyze the impact of random exploration on cooperation. For comparison, we turn off the exploration (by setting ϵ=0) after the evolution has stabilized to reveal the difference, which is shown as a function of the game parameter b in Fig. <ref>. When the exploration ceases, the evolution no longer passes from one mode to another, but falls into a single mode. As can be seen, only some of them, e.g. m_1, m_2, m_3 and m_6, are present as COMs, and the probabilities of the aforementioned short modes satisfy ∑_μ=1^7p(m_μ) ≈ 1. Fig. <ref>(a-b) and (d) show that p(m_1) decreases as b increases, while p(m_2) does the opposite. p(m_3), however, shows a non-monotonic dependence on b, first increasing and then decreasing.
Furthermore, an asymmetric COM, m_6, appears in (a-b) but not in (c-d), and this mode also shows a non-monotonic change. According to Fig. <ref>(b), we conjecture that m_6, resulting from tolerance, is a metastable mode between m_1 and m_2. The reason is that panel (b) shows m_6 seemingly redundant around b = 0.19 for (α,γ) = (0.5, 0.9), which is at the boundary of Region 1 (see Fig. <ref> and Eq. (<ref>)). To investigate the impacts of exploration, we compute the difference between the cooperation level with and without exploration δf̅_c := ⟨f̅_c⟩ - ⟨ f_c_0⟩ = ⟨f̅_c⟩-∑_μ=1^7 p(m_μ)f_c(m_μ), where ⟨f̅_c⟩ is the observed cooperation prevalence before the exploration is turned off, and f_c(m_μ) is the cooperation preference of the corresponding mode m_μ; e.g., the preferences for m_3 and m_6 are f_c(m_3) = 0.75 and f_c(m_6) = 0.5. The second term, as a benchmark, is thus the expected cooperation prevalence in the absence of exploration. Fig. <ref> also plots δf̅_c as a function of the temptation b for the same parameter combinations. Firstly, there is little impact of exploration in Fig. <ref>(b) and (c), since δf̅_c≈ 0. However, this is not the case in Fig. <ref>(a) and (d). In both cases, the presence of exploration improves the cooperation preference in the small-b range. But, as b increases, the exploration suppresses the cooperation preference in Fig. <ref>(a), while it shows little impact in Fig. <ref>(d). This is because exploration does not play a significant role in the pure modes m_1 and m_2, as in (b, c). But when multiple states coexist, such as in (a, d) for intermediate b values, the transition among the evolved states yields a non-trivial impact on the cooperation prevalence. This is quite different from the previous work <cit.>, where exploration always facilitates cooperation under the scheme of reinforcement learning. § CONCLUSION AND DISCUSSION In this work, we introduce a general reinforcement learning framework for repeated dyadic games, where each agent optimizes its policies through the Q-learning algorithm. Specifically, we focus on the impacts of the learning rate and discount factor on the evolution of cooperation in the strict prisoner's dilemma game. We reveal that agents can achieve a high level of cooperation when they have a strong memory and confident foresight for the future. However, cooperation is completely broken when the agents become forgetful or short-sighted. To proceed, we examine the agents' policies by checking their probabilities of states and state transitions. In the high-cooperation region, both Q-tables exhibit the WSLS property as their Coordinated Optimal Policy (COP). In contrast, both agents are doomed to defect when their COP is composed of All-D for both agents in the defection region. The most striking case occurs on the boundary of these two regions, where one agent tolerates its opponent's defection and maintains cooperation, while the other takes advantage to maximize its own rewards. Such tolerance may be regarded as a precursor to the instability of cooperation. A mixture of both WSLS-like and All-D-like policies finds its niche when agents are endowed with a strong memory but short sight, which allows a low level of cooperation. Moreover, analogous to the equilibria of the finite repeated PDGs <cit.>, we find that the agents' behavior can be decomposed into one of several circular Coordinated Optimal Modes (COMs).
The time correlation between consecutive states is also given, and the pronounced mutual information between consecutive states at the boundary indicates some sort of criticality related to the bifurcation of COPs. Based on the evolution of COMs and COPs, our theoretical analysis gives the boundary of high cooperation and verifies this indication by showing a decent match with the numerical results. Finally, we also examine the effects of the exploration rate on cooperation. In contrast to the previous work <cit.>, its impact depends on the composition of COMs, and could be positive, negative, or absent altogether. In brief, by establishing an exploratory framework for the analysis of the dynamics of RLRGs, we show some fundamentally interesting results. However, our findings leave many questions unanswered. For example, an interesting perspective is to relate the COP to dynamical attractors, but a proper formulation still needs to be shaped. Addressing this question could help to obtain all COPs in complex scenarios. A further open question of special significance is to identify effective early-warning signals of failure in cooperation, where the theory of criticality may lend a hand to prevent irreversible and disruptive defective behaviours. § ACKNOWLEDGMENTS We are supported by the Natural Science Foundation of China under Grant No. 12165014 and the Key Research and Development Program of Ningxia Province in China under Grant No. 2021BEB04032. CL is supported by the Natural Science Foundation of China under Grant No. 12075144. § MORE DETAILS ON COMS §.§ Learning Parameters and Convergence As stated in Subsec. <ref>, the state transitions in our RLRG model will fall into one of the circular modes as ϵ→ 0 if Iris's and Jerry's policies remain unchanged for some time. In Table <ref>, we list all modes of circular state transitions under the SPDG setting, where “cycle-m” means the mode contains m states, e.g., cycle-1 is a single-state self-loop. Besides, all modes in cycle-3 are asymmetric, while the mode in cycle-4 is symmetric. For cycle-1 and -2 modes, the symmetric and asymmetric modes are separated by semicolons. For a cycle-1 mode, the contraction factor governing the convergence of Iris's and Jerry's Q-tables is 1-α+αγ, which increases with γ but decreases with increasing α. For a cycle-m mode with m≥ 2, the dynamics of the Q-table for any agent k∈{i,j} in a cycle is as follows ([ Q^k_s_1,a_1(τ+ m); Q^k_s_2,a_2(τ+ m); Q^k_s_3,a_3(τ+ m); ⋮; Q^k_s_n,a_n(τ+ m) ]) = ([ 1 0 0 ⋯ 0; 0 1 0 ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; αγ 0 0 ⋯ 1-α ]) ⋯([ 1 0 0 ⋯ 0; 0 1-α αγ ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 1 ])· ([ 1-α αγ 0 ⋯ 0; 0 1 0 ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 1 ])·([ Q^k_s_1,a_1(τ); Q^k_s_2,a_2(τ); Q^k_s_3,a_3(τ); ⋮; Q^k_s_m,a_m(τ) ]) +([ αΠ_s_2; αΠ_s_3; αΠ_s_4; ⋮; αΠ_s_1+α^2γΠ_s_2 ]) =([ 1-α αγ 0 ⋯ 0; 0 1-α αγ ⋯ 0; 0 0 1-α ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; (1-α)αγ α^2γ^2 0 ⋯ 1-α ])·([ Q^k_s_1,a_1(τ); Q^k_s_2,a_2(τ); Q^k_s_3,a_3(τ); ⋮; Q^k_s_m,a^k_m(τ) ]) + ([ αΠ_s^k_2; αΠ_s^k_3; αΠ_s^k_4; ⋮; αΠ^k_s_1+α^2γΠ_s_2 ]) and the corresponding state transitions are shown in Fig. <ref>(a). Here, 𝒮^k_μ = {s_1,⋯, s_m} is the set of k's states in the mode; e.g., the sets for k and its co-player are {CC, CD} and {CC, DC} in {CC↔ CD, CC↔ DC}, respectively. In the mode, the agents' Q-tables must meet the constraints max_a'{Q^k_s_n,a^'}=Q^k_s_n,a_n, ∀ k∈{i,j}, s_n∈𝒮^k_μ. Here, the number of constraints for a mode increases with the length of the mode. So, the long modes are generally more fragile than the short ones, as mentioned in Subsec. <ref>.
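A quick numerical cross-check of the composed one-cycle update matrix above: the sketch below composes the per-state updates (under the assumption of the sequential update order written in the matrix product) and verifies that its eigenvalues obey the relation quoted in the next paragraph.

```python
import numpy as np

def one_cycle_matrix(m, alpha, gamma):
    """Compose the sequential per-state updates of a cycle-m mode into one matrix."""
    M = np.eye(m)
    for k in range(m):                     # update Q_{s_k} using the next in-cycle state
        L = np.eye(m)
        L[k, k] = 1.0 - alpha
        L[k, (k + 1) % m] = alpha * gamma
        M = L @ M
    return M

m, alpha, gamma = 4, 0.3, 0.9
lam = np.linalg.eigvals(one_cycle_matrix(m, alpha, gamma))
# the eigenvalues satisfy (lambda + alpha - 1)^m = (alpha * gamma)^m * lambda
print(np.max(np.abs((lam + alpha - 1) ** m - (alpha * gamma) ** m * lam)))   # ~ 1e-16
```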
The eigenvalues of the matrix on the right-hand side of Eq. (<ref>) satisfy (λ+α-1)^m = (αγ)^mλ, and they are the horizontal coordinates of the intersections of the curves y(λ) = (λ+ α - 1)^m and y(λ) = (αγ)^mλ. Then, the maximum real eigenvalue λ_max is greater than 1-α. To investigate the effects of the learning parameters on the convergence rate, we differentiate both sides of Eq. (<ref>) with respect to α and γ, and we have m(λ_max+ α - 1)^m-1+m(λ_max+ α - 1)^m-1∂λ_max/∂α = [m(λ_max+ α - 1)^m/α +(λ_max+ α - 1)^m/λ_max∂λ_max/∂α] and m(λ_max+ α - 1)^m-1∂λ_max/∂γ = [m(λ_max+ α - 1)^m/γ + (λ_max+ α - 1)^m/λ_max∂λ_max/∂γ]. According to the above equations, we get ∂λ_max/∂α = mλ_max(1-λ_max)/α[(α-1)+λ_max(1-m)]<0 and ∂λ_max/∂γ = λ_maxm(λ_max+α-1)/γ[(1-α)+λ_max(m-1)]>0 under m≥ 2 and (1-α)<λ_max<1. So, in any mode, the convergence rates of the Q-tables for both agents increase with α but decrease with γ. §.§ Stability of COMs/COPs Any Q-value, Q^k_s_n,a_n, in the mode of Fig. <ref> will converge to the fixed value Q^k*_s_n,a_n = (Π_s_n+1+γΠ_s_n+2+⋯+γ^m-lΠ_s_m + γ^m-l+1Π_s_1 + ⋯+γ^m-1Π_s_n-1)/(1-γ^m) if the mode is a stable COM. It is obvious that the next state is determined by the current states if both agents' policies remain unchanged and ϵ→ 0. In this case, the agents will enter the COM through determined state transition paths, called key paths. Then, the Q-values of the corresponding policies of both agents also converge along these paths. Without loss of generality, we only focus on the Q-values in one of the key paths, 𝒫^k_ν= {s_1̅, s_2̅,⋯, s_l̅}, as shown in Fig. <ref>(a). The Q-values in 𝒫^k_ν will also converge to fixed values if the Q-values always satisfy the following constraints max_a'{Q^k_s_n,a^'}=Q^k_s_n,a_n, ∀ k∈{i,j}, s_n∈𝒫^k_ν, i.e., both agents' policies on the path are unchanged. According to Fig. <ref>(a), we get the stable Q-values on the path as follows Q^k*_s_i,a_i = Π_s_i+1 + γΠ_s_i+2+⋯ + γ^l-iΠ_s_l+γ^l-i+1Q^k*_s_n,a_n. Here, s_n is the first state of the COM reached along the path. It is obvious that for a given exploration rate ϵ the convergence rate also increases with α but decreases with γ. Note that the Q-values in the modes evolve much faster than those on the key paths to the mode, while the Q-values on the key paths evolve much faster than those on the other paths. Each state on a key path or in a COM has only one state as its next transition. This means that the correlation between consecutive states is positive if they are also consecutive states in a COM or a key path. But p(s) for s in 𝒮^k_μ is much greater than for s in 𝒫^k_ν, because the only way to leave a stable COM is to explore (see Fig. <ref>(a) and (c)). In Fig. <ref>(a), the COM will be broken as soon as any constraint in Eq. (<ref>) is unsatisfied. And the state transitions cannot return to the mode through 𝒫_μ^k after leaving the mode by exploration, as long as the constraints in Eq. (<ref>) cannot be satisfied. In both cases, at least one agent has changed its policy at some state and the COP is not unique. Thereafter, the COP may switch by exploration. This means that a COM might become unstable before the Q-tables of the COM converge to fixed values. Our analysis shows that a high α and low γ can reduce the robustness of both agents' policies of the COPs under competition and thus shorten the characteristic time of the corresponding COMs. This is the reason why f_c is more volatile for high α and low γ, as Fig. <ref> shows. Furthermore, cooperation also becomes fragile when the co-player's policy becomes unpredictable, because all-defection is then the best policy for either agent.
It is important to note that when the exploration rate is low, it is the exploration fluctuation, not the exploration itself, that weakens the robustness of the competing COPs. In contrast, the exploration may enhance the robustness of the COPs, because exploration helps to maintain the constraints in Eqs. (<ref>) and (<ref>). §.§ Supplementary Simulations for Tipping Points According to Fig. <ref> in Sec. <ref>, we conjecture that the tipping point for whether exploration can promote cooperation always corresponds to p(m_2) = 0.5. To verify this, we give more results on p(m_μ) and δf̅_c in Fig. <ref>. The results, especially (a), support our conjecture. § COHEN’S KAPPA COEFFICIENTS FOR TEMPORAL CORRELATIONS In the analysis of COMs in Sec. <ref>, we use the gap between p(s,s^') and p(s)p(s^') to show the correlation between two consecutive states. But the gap becomes invisible as p(s,s^') approaches 0 or 1. So here we employ Cohen's kappa coefficients for better visualisation. The Cohen's kappa coefficient of two consecutive rounds is defined as κ:=[p(s,s^')-p(s)p(s^')]/[1-p(s)p(s^')]. Fig. <ref> shows the results corresponding to Fig. <ref>. As Fig. <ref>(a) and Fig. <ref>(a) show, the correlation between consecutive CCs is positive, since CC is the dominant state for low α and large γ. Similarly, consecutive DDs also have a positive correlation when DD is the dominant state under high α. For low α and γ, even though DD is still the dominant state, the correlation between DDs is negative, as shown in Fig. <ref>(d) and Fig. <ref>(d).
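The kappa coefficient, together with the joint and product probabilities it is built from, can be estimated directly from one agent's recorded state sequence. The sketch below is a placeholder implementation of the definition above, not the analysis code used for the figures.

```python
import numpy as np

def consecutive_state_stats(states, n_states=4):
    """Empirical p(s,s'), p(s)p(s') and Cohen's kappa for consecutive rounds."""
    states = np.asarray(states)
    joint = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        joint[s, s_next] += 1.0
    joint /= joint.sum()                       # p(s, s')
    p_s = joint.sum(axis=1)                    # p(s)
    p_next = joint.sum(axis=0)                 # p(s')
    expected = np.outer(p_s, p_next)           # p(s) p(s')
    # kappa = [p(s,s') - p(s)p(s')] / [1 - p(s)p(s')]; degenerate if one state has probability 1
    kappa = (joint - expected) / (1.0 - expected)
    return joint, expected, kappa
```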
http://arxiv.org/abs/2307.05347v1
20230711153551
High density loading and collisional loss of laser cooled molecules in an optical trap
[ "Varun Jorapur", "Thomas K. Langin", "Qian Wang", "Geoffrey Zheng", "David DeMille" ]
physics.atom-ph
[ "physics.atom-ph", "quant-ph" ]
Department of Physics, Yale University, New Haven, Connecticut 06511, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA Department of Physics, University of Chicago, Chicago, Illinois 60637, USA We report optical trapping of laser-cooled molecules at sufficient density to observe molecule-molecule collisions for the first time in a bulk gas. SrF molecules from a red-detuned magneto-optical trap (MOT) are compressed and cooled in a blue-detuned MOT. Roughly 30% of these molecules are loaded into an optical dipole trap with peak number density n_0 ≈ 3× 10^10 cm^-3 and temperature T≈40 μK. We observe two-body loss with rate coefficient β = 2.7^+1.2_-0.8× 10^-10 cm^3 s^-1. Achieving this density and temperature opens a path to evaporative cooling towards quantum degeneracy of laser-cooled molecules. High density loading and collisional loss of laser cooled molecules in an optical trap David DeMille August 12, 2023 ====================================================================================== Ultracold polar molecules, with their long-range dipolar interactions and rich internal structure, have emerged as a powerful platform for quantum information science, quantum simulation, and precision probes of fundamental physics <cit.>. Techniques to directly laser cool molecules have developed rapidly in the past decade, with molecular magneto-optical traps (MOTs) demonstrated for several diatomic <cit.> and polyatomic <cit.> species. Subsequent sub-Doppler gray molasses cooling to temperatures ≲ 50 μK <cit.> has enabled loading of molecules into conservative optical dipole traps (ODTs) <cit.>. Bulk gases of laser cooled molecules in ODTs have been demonstrated with number densities n_0 ∼10^9 cm^-3 and phase space densities Φ∼ 10^-7 <cit.>. However, higher number and phase space densities are needed to implement collisional (evaporative and/or sympathetic) cooling of the trapped molecules, which is likely needed to achieve quantum degeneracy in such systems. Collisional cooling requires a sufficiently high rate of thermalizing (elastic) collisions <cit.>. However, experiments with trapped ultracold molecules typically see rapid loss due to inelastic molecule-molecule collisions. Loss mechanisms include chemical reactions, as well as “sticky collisions" where long-lived collision complexes are formed, which are kicked out of the trap by absorbing a trap light photon or by colliding with a third body <cit.>. Recent experiments with assembled bi-alkali molecules, at much lower temperatures (≲ 900 nK), have demonstrated evaporative cooling by suppressing the inelastic collision rate using microwave fields <cit.> or static electric fields <cit.>, while enhancing the elastic collision rate. This opens a path towards collisional cooling of molecules, if the density is sufficient to observe collisions. For directly laser cooled molecules, inelastic collisions have been reported between pairs of CaF molecules in tweezers <cit.>, where subsequent microwave shielding was demonstrated <cit.>, and between molecules and atoms in a magnetic trap <cit.>. Thus far, however, bulk gases of directly laser cooled molecules have been too dilute for either elastic or inelastic molecule-molecule collisions to be observed. There are two primary reasons for this. 
First, standard red-detuned molecular MOTs (red-MOTs) have low molecule number (N ≲ 10^5), due to inefficient slowing of the source molecular beam and low capture velocity of the MOT. Second, transfer efficiency from these red-MOTs into ODTs is low (typically ≲ 5%) <cit.>. This is due to sub-Doppler heating from the Type-II transitions (N_g=1→ N_e=0, where N_g{N_e} is the rotational angular momentum of the ground {excited} state) required to be driven for rotational closure of molecular optical cycling transitions <cit.>, limiting typical red-MOT radii to σ≳ 1 mm and temperatures to T≳ 1 mK <cit.> after a compression stage. The temperature can be reduced further to ≲ 50 μK by blue-detuned molasses <cit.>, but this does not provide any spatial compression of the molecular cloud. This has led to interest in `blue-detuned' MOTs (blue-MOTs), which can exhibit sub-Doppler cooling while simultaneously maintaining strong confining forces, with Type-II transitions. This was first demonstrated in Rb atoms <cit.>, and recently shown to work for the specific case of YO (yttrium-monoxide) molecules <cit.>. Recently published numerical simulations <cit.> suggested a more generic method to produce blue-MOTs for a large class of laser-coolable molecules, which should enable efficient transfer of molecules from a MOT to an ODT. In this paper, we experimentally realize this novel, generic scheme to produce a blue-MOT of SrF molecules. With it we achieve ∼10^2 gain in n_0 and ∼10^4 gain in Φ compared to our compressed red-MOT. We load an ODT from this blue-MOT with ∼30% transfer efficiency, ∼ 10x higher than from a compressed red-MOT <cit.>. With this high density in the ODT, we are able to observe inelastic molecule-molecule collisions that result in two-body loss; to our knowledge this is the first such observation in a bulk gas of directly laser-cooled molecules. Our apparatus is very similar to that used in our prior work <cit.>, but with several changes to improve the number of molecules captured in our MOT. We start with a cryogenic buffer gas beam source (CBGB) <cit.>, where SrF molecules are produced by laser ablation of a solid Sr target, with SF_6 gas reacting with the ablated Sr to make SrF. The molecules collide with cold (4 K) He gas and exit the cell at forward velocity ∼ 130 m/s, then are slowed using the white light slowing technique <cit.> on the X→ B transition for 14.5 ms. Slowed molecules are captured in a direct current (DC) red-MOT. Here 3 hyperfine levels are addressed by solely red-detuned light, while simultaneous red- and blue-detuned light is applied on the |J=3/2,F=1⟩ state (Fig. <ref>(a)) to create a dual-frequency trapping force <cit.>. Initially, the per-beam peak laser intensity is I∼ 100 mW/cm^2 and the axial B-field gradient is b=16 G/cm. After capturing the molecules, we linearly increase the gradient to b=29 G/cm and lower the intensity to I∼ 10 mW/cm^2 over 30 ms. In this `compressed' MOT, the cloud radius (in this work, we define radius as the Gaussian r.m.s width unless noted otherwise) is σ≈1 mm, the temperature is T≈1 mK, and the number of trapped molecules is N_red≈ 2.5× 10^4. The molecule number is determined by switching off the gradient and taking a fluorescence image (2 ms exposure) with I∼170 mW/cm^2, where the scattering rate is measured using the procedure from <cit.> and the detection efficiency is calibrated from off-line measurements <cit.>. 
The fluorescence image is integrated along the radial direction, then fit to a 1D Gaussian plus constant offset; the fluorescence counts are extracted from the Gaussian integral. The temperature is measured using the time-of-flight (TOF) expansion method. We note that in prior work, we began with a radiofrequency (RF) red-MOT <cit.>, which traps ∼30% more molecules with similar size and temperature as the DC red-MOT here. However, switching the B-field from RF to DC for the subsequent blue-MOT configuration is experimentally challenging, and we use the DC red-MOT here instead. Next, we instantaneously jump to the blue-MOT configuration. The laser frequencies are changed to those in Fig. <ref>(b), and the intensity is increased to I∼ 170 mW/cm^2, corresponding to I/I_sat∼ 60, where I_sat is the saturation intensity. As in the red-MOT, a dual-frequency scheme is applied to the |J=3/2,F=1⟩ state. However, blue-detuned light is applied to the other F≠ 0 states resulting in simultaneous application of both sub-Doppler cooling and spatial confinement <cit.>. We find that ∼80% of the molecules from the compressed red-MOT are captured by the blue-MOT. Within 30 ms, the MOT radius along the radial (axial) direction is reduced to as low as σ_rad≈ 149 μm (σ_ax≈ 147 μm) and the temperatures to as low as T_ax≈ T_rad≈ 200 μK (see Fig. <ref>), corresponding to a maximum n_0 ≈ 4 × 10^8 cm^-3. The temperature can be lowered further to T_ax,rad≈60 μK, by reducing I to 34 mW/cm^2, though this results in increasing the radii to σ_rad≈230 μm (σ_ax≈153 μm). The blue-MOT reaches a maximum Φ≈ 1.6× 10^-9, a gain of ∼10^4 compared to the compressed red-MOT. We note that our trapping scheme is substantially different from that used for YO molecules in Ref. <cit.>, where only blue-detuned light was used. We were, by contrast, unable to observe trapping without employing a dual-frequency mechanism. We believe the difference lies in the fact that YO, unlike SrF, has a magnetically insensitive ground state hyperfine manifold with F≠ 0. This feature has been observed to increase the robustness of sub-Doppler cooling in magnetic fields <cit.>. The lack of this feature in SrF (and most other laser cooled molecules) may necessitate the dual-frequency mechanism, which can provide stronger confining forces <cit.> at the cost of some heating. Indeed, we observe a stronger restoring force (∼10x faster compression) and smaller minimum trap volume (by a factor of 2) at the cost of higher minimum blue-MOT temperature (60 μK vs 38 μK) compared to the pure-blue YO MOT <cit.>. Next, we load the ODT by switching the lasers to the Λ-enhanced gray molasses <cit.> configuration in Fig. <ref>(c), and turning off the quadrupole field gradient. The ODT details are described elsewhere <cit.>; in brief, the ODT is formed from a 53 W, 1064 nm laser focused to a 1/e^2 radius of 38 μm, with a trap depth U_T≈ 1.3 mK for SrF. We find that loading is optimized for two-photon detuning δ = 2π× 0.11 MHz, one-photon detuning Δ = 2π× 22 MHz, and I∼57 mW/cm^2. Owing to the small size of the blue-MOT, the ODT is rapidly loaded, with up to 30% transfer efficiency achieved within 20 ms. This is an order of magnitude higher efficiency than achieved when loading from type-II red-MOTs <cit.>. Under optimal conditions, we load N≈ 4000 molecules in the ODT, at T≈ 40 μK, and n_0≈ 3.4×10^10 cm^-3. We note in passing that here, different from our previous observations, the optimal polarization of the ODT beam is linear and the temperature is higher <cit.>. 
We have so far been unable to trace the source of this change. With these starting conditions, we look for evidence of inelastic molecule-molecule collisions. To study collisional loss, we perform measurements of the number of molecules remaining in the trap (N) as a function of the hold time (t_h). For all of these measurements, we load the ODT for 20 ms, then let untrapped molecules fall out of the trap by turning off the Λ-cooling light for 32 ms. This defines t_h=0. We then measure the remaining number at time t_h, either by imaging in-situ with the Λ-cooling light (for all points t_h<1 s) <cit.>, or by recapturing in the compressed red-MOT and imaging in-situ (for all points t_h≥1 s). The scattering rate for each method is determined by comparing the fluorescence counts to those from a free space image (2 ms exposure) at I∼170 mW/cm^2. We assign uncorrelated uncertainties to each N(t_h) data point by adding in quadrature contributions from fit uncertainties, the shot-to-shot fluctuations in the initial number, and uncertainties in the ratio of the extracted number between the two imaging methods <cit.>. First, we measure the loss rate in the maximally loaded ODT, with average initial number N_0≈4000. We observe a fast initial loss, followed by a long slow decay, as is characteristic of two-body loss processes (see Fig. <ref>). Density dependent losses are modeled using the two-body loss rate equation, with evolution of the number density n given by: ṅ = -1/τ n -β n^2 , where τ is the one-body lifetime and β is the two-body loss rate coefficient. To convert eq. <ref> to a number evolution, we assume a Gaussian spatial distribution and define an effective volume (V_eff=(2√(π))^3σ_xσ_yσ_z) occupied by the molecules <cit.>; here z is along the axial direction of the ODT, and x (y) is along the transverse direction in (perpendicular to) the imaging plane. This allows us to integrate over the volume to obtain: Ṅ = -1/τ N - β/V_eff N^2. If the spatial distribution is constant in time, eq. <ref> has an analytical solution: N(t) = N_0 e^-t/τ/[1+(βτ N_0/V_eff)(1-e^-t/τ)]. Our imaging system cannot resolve the transverse radius (σ_x) of the molecular cloud in the ODT. We also cannot observe properties in the y-direction. We do directly measure the cloud radius along its axial direction (σ_z), as well as the temperatures T_x and T_z. We then infer σ_x using the calculated trap depth, measured ODT beam profile, and value of T_x <cit.>, and assume σ_y = σ_x by symmetry. We observe that the measured value of σ_z increases from its initial value linearly with hold time. We suspect this results from the ODT beam profile changing due to thermal lensing from optics along the beam path <cit.>. We observe an increase in T_z consistent with the observed increase in σ_z. However, we observe no change in T_x over time, so we assume that σ_x (and hence σ_y) does not change. To model this behavior, we treat V_eff as a function of time in eq. <ref>, with σ_z increasing at the measured rate. We then numerically integrate eq. <ref> to find values of β and τ that minimize the reduced chi squared (χ_red^2) of this model. With N_0=4000, we find β_4K=2.7(5) × 10^-10 cm^3 s^-1 and τ_4K=1.3(1) s (with χ_red^2=0.99, see Fig. <ref>), where we incorporate the uncertainty in V_eff by adding it in quadrature to the uncertainty determined from the fit. The final extracted value of β is strongly dependent on the initial number, so we also consider systematic uncertainties in determining N_0.
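The numerical procedure just described can be sketched schematically as follows. This is not the actual analysis code: the initial axial radius and its growth rate are illustrative placeholders (only σ_x ≈ 3.3 μm and the average V_eff are quoted in the supplement), and the data arrays t_data, N_data, N_err stand in for the measured points.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Illustrative geometry (cm); sigma_z0 and its growth rate are assumed, not measured values.
sigma_x = sigma_y = 3.3e-4
sigma_z0, sigma_z_rate = 7.0e-2, 1.0e-2
N0 = 4000.0

def V_eff(t):
    """Effective volume (2 sqrt(pi))^3 * sigma_x * sigma_y * sigma_z(t), linear axial growth."""
    return (2.0 * np.sqrt(np.pi)) ** 3 * sigma_x * sigma_y * (sigma_z0 + sigma_z_rate * t)

def dN_dt(t, N, beta, tau):
    return -N / tau - beta * N**2 / V_eff(t)     # two-body loss with time-dependent volume

def model(t_eval, beta, tau):
    sol = solve_ivp(dN_dt, (0.0, float(t_eval.max())), [N0],
                    t_eval=t_eval, args=(beta, tau), rtol=1e-8)
    return sol.y[0]

def residuals(params, t_data, N_data, N_err):
    beta, tau = params
    return (model(t_data, beta, tau) - N_data) / N_err

# fit = least_squares(residuals, x0=[3e-10, 1.3], args=(t_data, N_data, N_err))
# beta_fit, tau_fit = fit.x
```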
The scattering rate is affected by uncertainty in the vibrational branching ratio |A^2Π_1/2,v=0⟩→|X^2Σ,v=3⟩ for SrF <cit.>, and in the calibration of the imaging optics. We estimate a combined uncertainty of 25% in N_0 <cit.>. We emphasize that this is different from shot-to-shot fluctuations, and instead is a correlated uncertainty for all points, which in turn leads to an uncertainty in the overall normalization of β. To determine the effect of this scale uncertainty, we use the same analysis method with initial numbers N_0={3000,5000} (corresponding to the lower and upper bounds given the uncertainty), and numerically integrate eq. <ref> to find the optimal β for each N_0. The final uncertainty for β is then assigned as the quadrature sum of contributions from this systematic uncertainty and from the fit error for N_0=4000. Finally, we find β = 2.7^+1.2_-0.8× 10^-10 cm^3 s^-1 and τ=1.3(1) s. As a cross-check, we also fit the data to the analytical solution (eq. <ref>) by following the prescription from Ref. <cit.>. That is: we first extract τ by fitting a pure exponential decay to only late-time (t_h ≥ 1 s) data points, and find τ_ l=1.2(2) s. Then, we extract β by fixing this value of τ and fitting only to early-time data points (t_h < 250 ms) where the axial radius change is less than 15%, such that V_eff can be treated as a constant; we use the average V_eff for t_h<250 ms. Throughout, we perform the same error analysis as before, and find β_ e = 2.7^+1.4_-1.0× 10^-10 cm^3 s^-1 (with χ_red^2=1.20), consistent with results from the more complete model. To further verify the presence of a density-dependent loss, we load the ODT with lower initial number, N_0 ≈ 650, while keeping the temperature and trap depth the same, thereby reducing the starting density by a factor of 6. This is done by reducing the length of the slowing pulse after laser ablation from 14.5 ms to 9.3 ms. We then perform the same sequence of measurements, and see that the short-time loss rate is reduced (see Fig. <ref>), as expected since the initial collision-induced loss time scale τ_c∝ 1/β n_0. There are numerous possible loss channels in our experiment. The molecules are all in the rotational N=1 states, and rotational quenching to N=0 can lead to large inelastic losses <cit.>. They also occupy all of the many individual sublevels in the N=1 manifold of hyperfine and spin-rotation states; this opens up p-wave and f-wave collision channels that would be absent if all the (bosonic) molecules were in the same quantum state. In addition, colliding pairs of SrF molecules can undergo the barrierless chemical reaction SrF + SrF → SrF_2 + Sr <cit.>. Finally, sticky collisions between the molecules can also lead to losses <cit.>. We compare our measured value of β to some experimental and theoretical benchmarks. The molecules in the trap are at temperatures above the p- and d-wave barriers (≈ 5 μK and ≈ 30 μK respectively) determined by the van der Waals C_6 coefficient for N=1 states in SrF <cit.>. We calculate the thermally averaged unitarity limit for β by assuming the molecules in the trap obey a Maxwell-Boltzmann distribution of velocities and that the probability for loss is 0 (1) for kinetic energies below (above) the barrier. We find the unitarity limit β_max≈ 11 × 10^-10 cm^3 s^-1 <cit.>, ≈ 4x above the experimental value. Values of β of the same order as that measured by us have been reported in Refs. <cit.>. 
In most of these experiments, the molecules are in a single quantum state, or are at much colder temperatures, making a direct comparison infeasible. The closest case to our conditions was in the observation of collisions between a pair of CaF molecules in a mixture of N=1 states in an optical tweezer trap <cit.>. There, the molecules were at T≈ 80 μK, above the p-wave, but below the d-wave barrier for CaF (≈ 20 μK and ≈ 105 μK respectively). The reported loss rate coefficient is β_CaF = 40×10^-10 cm^3 s^-1, ≈ 3x larger than the corresponding unitarity limit β_max,CaF≈ 13× 10^-10 cm^3 s^-1. We also explore light-assisted collisions due to the Λ-cooling light. For this, we turn on the Λ-cooling light at t_h=0. We observe that the one-body lifetime is unaffected by the light, however, the two-body loss rate coefficient is increased due to light-assisted collisions. Under these conditions, we find β_tot=4.9^+1.7_-1.2× 10^-10 cm^3 s^-1. This is two orders of magnitude lower than previously reported using a pair of CaF molecules in an optical tweezer <cit.>. The combined loss rate coefficient sets an upper bound on the peak density achievable by loading an ODT using Λ-cooling. Given the typical loading time (20 ms) from the blue-MOT, this bound is n_0^max∼ 10^11 cm^-3. While the peak densities we achieve are lower than n_0^max, it may be possible to reach it if larger numbers of molecules <cit.>, lower temperatures <cit.>, and/or deeper traps can be achieved. We also note that the observed value of β_tot is low enough such that light-induced losses during the in-situ imaging with Λ-cooling light do not substantially affect the extracted values of N(t_h). In conclusion, we have demonstrated high efficiency loading of a molecular gas into an ODT from a blue-MOT and observed inelastic collisions in a bulk gas of directly laser cooled molecules for the first time. Our results suggest the possibility of using a shielding mechanism to enhance the elastic collision rate while suppressing two-body losses, as already used for evaporative cooling in experiments using assembled bi-alkali molecules <cit.>. Current efforts are underway to prepare the molecules in a single quantum state and then to implement microwave shielding in our system. This will open a clear path to collisional cooling via evaporation or sympathetic cooling with co-trapped atoms. We gratefully acknowledge support from AFOSR MURI and the University of Chicago. § SUPPLEMENTAL MATERIAL FOR HIGH DENSITY LOADING AND COLLISIONAL LOSS OF LASER COOLED MOLECULES IN AN OPTICAL TRAP § SIMULATIONS OF BLUE-DETUNED MOTS We have developed software to simulate magneto-optical trapping of molecules in which acceleration a(z,v_z) is determined as a function of position z and velocity v_z; although we consider displacement and velocity along only one axis (z), the simulation is three-dimensional, and includes the effect of all 6 laser passes and the 3D anti-Helmholtz quadrupole magnetic field. This has been used previously to simulate `two-color' blue-MOTs where both XΣ→ AΠ and XΣ→ BΣ electronic transitions are driven <cit.>. We used this same software in order to determine the feasibility of a `one-color' MOT, using just the XΣ→ AΠ transition, but with a dual-frequency trapping mechanism, before we attempted it in the experiment. In Fig. <ref>, we show the results of the simulation for the parameters used in the experiment. Fig. <ref>(a) shows a(v_z)=∫_-zmin^zmin a(z,v_z)dz and Fig. 
<ref>(b) shows a(z)=∫_-vmin^vmina(z,v_z)dv_z, where z_min=0.4 mm and v_min=0.25 m/s are chosen because they represent 2σ and 2v_T, where σ∼ 200μm is the MOT radius observed in the experiment and v_T=√(k_BT/m) is the thermal velocity for T=200μK. We observe both a sub-Doppler velocity damping force as well as a spatial restoring force, consistent with the observation of trapping in the experiment. Furthermore, the simulation indicates a spring constant of κ∼ 7× 10^-20 N/m (here we approximate the spring constant by taking κ∼ma(z=400μm)/400μm), in line with what we observe in the experiment (using the equipartition theorem, κ=k_BT/σ^2). The maximum spatial restoring force is roughly an order of magnitude lower than in the previously simulated two-color MOTs <cit.>, and so it is possible that two-color trapping can lead to even more gains in MOT density and also enhance the robustness of the blue-MOT to day-to-day laser-pointing fluctuations, which we observed that the blue-MOT was sensitive to. § LIFETIME DATA TAKING PROCEDURE AND ERROR ANALYSIS To get an accurate measurement of the two-body loss rate, it is imperative to get early time data when the molecule number is high and the loss is mostly density dependent, especially in our system where the vacuum limited lifetime is short (τ≈ 1.3 s). As described in the main text, after loading the ODT for 20 ms, we let untrapped molecules fall out of the trap. The earliest time where we can distinguish the trapped molecules from the untrapped falling molecules is t_fall=32 ms, where the untrapped molecules are not yet out of the field of view of the camera, but have fallen enough that the trapped molecules can be distinguished by an appropriate choice of the region of interest (ROI). We define t_h=0, the starting point of the lifetime measurement to be after this fall period. We also divide our data in two chunks. For t_h< 1 s, where the number is high and the signal to noise ratio is good, we take a Λ-image of the molecular fluorescence by exposing the camera for 10 ms while Λ-cooling light (at I∼57 mW/cm^2) is applied to molecules in the ODT. For the rest, where the number is low, we turn off the ODT and recapture the remaining molecules into the compressed red-MOT and take an image of the molecular fluorescence by exposing the camera for 50 ms (at I∼10 mW/cm^2). This method can only be used when t_fall≥ 152 ms, in order to not recapture molecules untrapped by the ODT. For each method, we determine the scattering rate by comparing the fluorescence counts of an image taken immediately after t_fall=32 ( 152) ms (with maximum molecule number) to the counts from a 2 ms exposure image at laser intensity I∼170 mW/cm^2, where the scattering rate is measured. The Λ-image data is taken as follows. We first generate a list of hold times, and randomize this list. For each hold time t_i in this list, we take a set of 30 images, 15 each for t_h=t_i and for t_h=0. The order of these 30 images is randomized as well. We then extract the fluorescence counts as described in the main text. By comparing the values of N_0 inferred from the t_h = 0 data in each set of data at different values of t_i, we are able to measure the shot-to-shot drifts in starting number over the duration of the entire data set. We find that the various N_0 values have a standard deviation ∼ 8%, and we add this in quadrature to the other uncertainties in the number at each data point. For the MOT recapture data, we follow the same procedure, with the additional drop time added. 
In addition, we take a set of images at a few intermediate t_h using both methods to compare the extracted number and we find that the ratio of the number extracted from MOT recapture to the number from Λ-images is N_MOT recap/N_Λ-image = 1.01 ± 0.10. We hence include an additional 10% uncertainty in molecule number for all points with t_h≥ 1 s, where the MOT recapture method is used. In addition to these uncertainties, there is also ambiguity in the determination of the overall scattering rate, because of differing reported values of the branching ratio |A^2Π_1/2,J=1/2^+,v=0⟩→|X^2Σ,N=1,v=3⟩ for SrF molecules <cit.>. This results in an overall scale factor uncertainty in the molecule number when converting from fluorescence counts to number. To account for this, we use the average of the branching ratios from Refs. <cit.>, and half their difference as its uncertainty. This leads to ∼ 17% uncertainty in the determination of the starting number. We further take into account uncertainties in the calibration of the light collection optics for the imaging setup (∼ 10%) and we find a combined uncertainty ∼25% in the overall starting number. We emphasize that this is different from shot-to-shot fluctuations, as this uncertainty affects each data point in the same direction. § V_EFF CALCULATION To calculate V_eff, we assume that the trap is harmonic and the spatial density is given by: n(r) = n_0 exp(-x^2/2σ_x^2) exp(-y^2/2σ_y^2) exp(-z^2/2σ_z^2). The effective volume is given by V_eff = ∫ n(r)d^3r / n_0=(2√(π))^3σ_xσ_yσ_z. We do not have enough resolution to measure the transverse radius σ_x, and our observations give no direct information about σ_y. However, we have measured the ODT laser beam profile, and find a good fit to a Gaussian with 1/e^2 intensity radius ω_0=38 (3) μm. We have also calculated the ODT trap depth to be U_T=1.3 (1) mK <cit.>, and we measure the transverse temperature of molecules in the ODT using time-of-flight (TOF) expansion technique to be T_x=40(3) μK. From this we deduce the transverse radius σ_x= √(ω_0^2T/4U_T)=3.3(4) μm, and assume σ_y = σ_x We are able to directly measure the axial radius of the molecular cloud in the ODT, σ_z, and (as stated in the main text) we observe that σ_z increases with time. We model this as a linear increase (see Fig. <ref>). We then use the measured value of σ_z to deduce the axial temperature of the molecules in the ODT, T_z^σ, by following the procedure outlined in <cit.>, where T_z^σ=2U_Tσ_z^2/z_R^2, where z_R is the Rayleigh length of the trap. We find that this inferred axial temperature increase is consistent with the directly measured increase in T_z (Fig. <ref>(a)). This justifies our assumption of a linearly increasing value of σ_z away from its starting value. We do not see any increase in the measured radial temperature; thus we model σ_x,y to be constant. We then numerically integrate eq. <ref>, while allowing the effective volume to increase as V_eff(t)=(2√(π))^3σ_xσ_yσ_z(t), to find the result described in the main text. To fit to the analytical solution (eq. <ref>), we use the average axial radius for t_h<250 ms, V_eff = 3.4 (9) × 10^-7 cm^3. § CALCULATION OF VAN DER WAALS C_6 COEFFICIENT FOR |N = 1⟩ STATES To determine the centrifugal barriers for two-body SrF scattering, we need to compute the van der Waals (vdW) C_6 coefficient that arises from second-order dipolar coupling. Throughout, we assume the molecules are in the ground vibronic manifold |X^2Σ^+, v = 0⟩, and for simplicity we ignore electronic and nuclear spins. 
For rotational ground state |N = 0⟩ molecules, the rotational wavefunction is spatially isotropic and only a single rotational sublevel is occupied. This leads to the well-known result <cit.> C_6^g = -[ 1/(4πϵ_0)^2] · d_0^4/(6B_0), where d_0 = 3.47 Debye is the ground state permanent dipole moment of SrF and B_0 = 2πħ× 7.5 GHz is the ground state rotational constant of SrF. However, our molecules are in an incoherent mixture of |N = 1⟩ states, which (in the absence of external fields, and ignoring effects due to spin-rotation and hyperfine couplings) comprise a nine-fold degenerate subspace in the space of |N = 1⟩ two-body states. Following the approach of <cit.>, we apply second-order degenerate perturbation theory on the intermolecular potential operator V̂_AB in order to obtain the |N = 1⟩ C_6 coefficients. In a body-fixed (BF) frame where the orientation of the vector between the two molecules is fixed, the resultant C_6 coefficients, as a function of d_0 and B_0, are listed in Table 6 of <cit.>. We used these values in eq. <ref> to compute the centrifugal barrier heights. Due to the anisotropic nature of the vdW interaction, the C_6 coefficients including the relative angular motion of the molecules in the space-fixed (SF) frame must be accounted for. To find C_6 values in the SF frame, we numerically compute matrix elements of the second-order degenerate perturbation operator Ŵ_AB^SF associated with V̂_AB, as given by eq. 82 of <cit.>. By incorporating the ℓ^th partial wave |ℓ, m_ℓ⟩ into that matrix element, and subsequently diagonalizing the combined potential Ŵ_AB^SF + ℓ̂^2/(2μ R^2) (where R is the intermolecular separation) in the subspace of |N = 1⟩ two-body states, we obtain the intermolecular potential curves shown in Fig. <ref>. The resultant barrier heights that we obtain from these curves are in nearly exact agreement with those obtained from using the BF calculation results. Hence, for computational ease we use the analytically determined BF centrifugal barriers in the rest of this work. § UNITARITY LIMIT CALCULATION Here, we compute the unitarity limit for two-body scattering of SrF molecules in an incoherent mixture of |N = 1⟩ states. The unitarity limit corresponds to the maximum possible loss allowed by quantum scattering theory. Since our molecules are not in a single quantum state, all partial waves (indexed by ℓ) contribute to the unitarity limit scattering cross section σ. In this limit, the scattering cross section of the ℓ^th partial wave is: σ_ℓ = 4π (2ℓ + 1)/k^2. As usual in two-body scattering, we work in the center-of-mass frame of the two-body system. Therefore, the wavevector k⃗ relates to the collisional energy E_r and relative velocity v⃗_r of the two particles via E_r = ħ^2 k^2/2μ and v⃗_r = ħk⃗/μ, respectively, where μ is the reduced mass. For two-body SrF scattering, μ = M/2 where M ≈ 107 amu is the mass of SrF. We assume that SrF molecules in our trap obey a Maxwell-Boltzmann (MB) thermal distribution in their absolute velocities and energies. Since the convolution of two Gaussians is another Gaussian, it follows that the probability density function f_r(v_r, T) of relative speeds for two-body SrF scattering is also a normalized MB distribution: f_r(v_r, T) dv_r = √(2/π)(μ/k_B T)^3/2v_r^2exp(-μ v_r^2/2k_B T) dv_r, where ∫_0^∞ f_r(v_r, T) dv_r = 1 and T is the initial temperature of molecules in our trap. 
We define a thermally averaged two-body loss rate coefficient β_th(ℓ, T) for the ℓ^th partial wave by considering which collisional speeds will allow for unitary loss. We make the simplifying assumption that unitary loss occurs with unit (zero) probability when E_r is greater (less) than the centrifugal barrier of the intermolecular potential, i.e. we assume that colliding SrF molecules cannot tunnel through the centrifugal barrier. By denoting the collisional speed associated to the barrier height as v_b, we have: β_th(ℓ, T) = ∫_v_b^∞ f_r(v_r, T)σ_ℓ(v_r) v_r dv_r. By introducing the dimensionless parameter x ≡ E_r/k_B T and substituting in the expressions for f_r(v_r, T) and σ_ℓ(v_r), the expression for β_th(ℓ, T) can be simplified as: β_th(ℓ, T) = 2πħ^2(2ℓ + 1)/μ k_B T√(8 k_B T/πμ)∫_T_b(ℓ)/T^∞ e^-x dx. We identify λ_th = √(2πħ^2/μ k_B T) and v̅_th = √(8k_B T/πμ) as the thermal de Broglie wavelength and average speed, respectively, of a particle with mass μ in an ensemble at temperature T that obeys MB statistics. Therefore, we conclude: β_th(ℓ, T) = λ_th^2 v̅_th(2ℓ + 1)e^-T_b(ℓ)/T, where k_B T_b(ℓ) is the height of the centrifugal barrier experienced by the ℓ^th partial wave. Each distinct vdW C_6 coefficient corresponds to a distinct value for T_b(ℓ). For the i^th C_6 coefficient, denoted C_6,i, we relate it to the i^th barrier T_b,i(ℓ) as follows. If we neglect quadrupole-quadrupole interactions, the leading order terms in the SrF |N = 1⟩ + SrF |N = 1⟩ intermolecular potential lead to the potential: V_i(r) = ħ^2 ℓ(ℓ+1)/2μ r^2 + C_6,i/r^6. If C_6,i > 0, the vdW interaction is repulsive and no barrier exists. We treat this as meaning the molecules never reach short range, and so the contribution to β_th(ℓ, T) here is zero. If C_6,i < 0, the vdW interaction is attractive and there will exist a maximum in V_i(r) at r_b, corresponding to the centrifugal barrier. By only considering cases where C_6 < 0, we analytically compute the barrier height of the ℓ^th partial wave to be: T_b,i(ℓ) = V(r_b)/k_B = (ħ^2 ℓ(ℓ+1)/μ)^3/2(1/54 |C_6,i|)^1/21/k_B. For each two-body eigenstate, we compute their barrier heights up to the h-wave contribution (ℓ = 5). We neglect summation over all partial waves with ℓ > 5 because their contribution to β_th(ℓ, T) is increasingly exponentially suppressed. We thus obtain the total β_th,i(T) for the i^th two-body eigenstate: β_th,i(T) = ∑_ℓ = 0^5 λ_th^2 v̅_th(2ℓ + 1)e^-T_b,i(ℓ)/T if C_6 < 0 0 if C_6 > 0. We finally obtain the overall two-body loss rate coefficient in the unitarity limit for an incoherent mixture of |N = 1⟩ states, denoted as ⟨β_th(T)⟩_|N = 1⟩, by taking a statistical average over all nine possible |N = 1⟩ two-body eigenstates. Here, we assume a uniform probability distribution over all possible states, i.e. p_i = 1/9 ∀ i ∈{1, 2, ..., 8, 9}. So, we have: ⟨β_th(T)⟩_|N = 1⟩ = ∑_i = 1^9 p_i β_th,i(T). Carrying out this computation, we find ⟨β_th(T)⟩_|N = 1⟩≈ 11 × 10^-10 cm^3/s at T = 40 μK, the temperature of SrF molecules in our ODT. We note that the intermolecular potential for |N = 1⟩ states contains a first-order contribution from the quadrupole-quadrupole interaction, of the form C_5/r^5 <cit.>. Here, C_5 ∼θ_zz^2/(4πε_0), where θ_zz = 8.95 ea_0^2 is the quadrupole moment of SrF <cit.>. This extra term affects the barrier heights only minimally; by including the quadrupole-quadrupole interaction, we found that ⟨β_th(T)⟩_|N = 1⟩ is changed by only ≈ 3%, and hence its effect is negligible..
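To make the above derivation concrete, the following sketch evaluates the thermally averaged unitarity limit numerically. The per-channel |N = 1⟩ C_6 coefficients come from the cited reference and are not reproduced here; as a stand-in, the code uses the |N = 0⟩ magnitude d_0^4/[6 B_0 (4πϵ_0)^2], which already reproduces the ≈ 5 μK p-wave and ≈ 30 μK d-wave barriers quoted in the main text. The output should therefore be read as a single-channel estimate rather than the full nine-state average.

```python
# Hedged numerical sketch of the unitarity-limit estimate derived above.
# The representative |C6| below is NOT a per-channel |N=1> value from the
# reference table; it is the |N=0> expression used only to set the scale.
import numpy as np

hbar, kB, eps0 = 1.0546e-34, 1.381e-23, 8.854e-12
amu, debye = 1.6605e-27, 3.3356e-30
mu = 107 * amu / 2.0            # reduced mass of two SrF molecules
T = 40e-6                       # trap temperature in K
d0 = 3.47 * debye               # SrF ground-state dipole moment
B0 = 2 * np.pi * hbar * 7.5e9   # rotational constant
C6 = d0**4 / (6 * B0 * (4 * np.pi * eps0)**2)   # representative |C6| in J m^6

def T_barrier(l):
    """Centrifugal barrier height (K) for partial wave l over a -|C6|/r^6 potential."""
    return (hbar**2 * l * (l + 1) / mu)**1.5 / np.sqrt(54 * C6) / kB

lam_th = np.sqrt(2 * np.pi * hbar**2 / (mu * kB * T))   # thermal de Broglie wavelength
v_th = np.sqrt(8 * kB * T / (np.pi * mu))               # mean relative speed
beta = lam_th**2 * v_th * sum((2 * l + 1) * np.exp(-T_barrier(l) / T) for l in range(6))

print(f"p-wave barrier ~ {T_barrier(1)*1e6:.1f} uK, d-wave ~ {T_barrier(2)*1e6:.0f} uK")
print(f"single-channel unitarity beta ~ {beta*1e6:.1e} cm^3/s")   # m^3 -> cm^3
# Averaging over the nine |N=1> two-body channels (several of which are repulsive
# and contribute zero) lowers this toward the ~11e-10 cm^3/s quoted in the text.
```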
http://arxiv.org/abs/2307.04536v1
20230710130127
DADO -- Low-Cost Selection Strategies for Deep Active Design Optimization
[ "Jens Decke", "Christian Gruhl", "Lukas Rauch", "Bernhard Sick" ]
cs.LG
[ "cs.LG", "cs.CE" ]
In this experience report, we apply deep active learning to the field of design optimization to reduce the number of computationally expensive numerical simulations. We are interested in optimizing the design of structural components, where the shape is described by a set of parameters. If we can predict the performance based on these parameters and consider only the promising candidates for simulation, there is an enormous potential for saving computing power. We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems. Our proposed methodology provides an intuitive approach that is easy to apply, offers significant improvements over random sampling, and circumvents the need for uncertainty estimation. We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance. The findings from our evaluation highlight the effectiveness of our selection strategies in accelerating design optimization. We believe that the introduced method is easily transferable to other self-optimization problems. Self-Optimization, Self-Supervised-Learning, Design-Optimization, Active-Learning, Numerical-Simulation § INTRODUCTION High-performance computing (HPC) systems have a high energy demand, which results in significant carbon dioxide emissions that contribute notably to climate change <cit.>. Numerical simulations, for instance computational fluid dynamics (CFD) or finite element analysis (FEA), are widely used in industry and research and often demand days to weeks of computing time on HPC systems, posing a particular concern in this regard. Examples include simulations for weather prediction, structural dynamics, and electrodynamics. Reducing the number of numerical simulations, or accelerating them, could therefore lead to significant savings in energy consumption and a reduction of carbon dioxide emissions. Design optimization (DO) aims to determine the optimal shape of components and typically involves many numerical simulations to identify the best design for pre-defined constraints. In recent years, deep learning methods have been emerging in the field of DO to accelerate numerical simulations or to improve the overall performance <cit.>. Nevertheless, these methods still require massive annotated training datasets, and the annotations are acquired through numerical simulations, which are computationally expensive. To tackle this problem, we propose an approach that reduces the number of computer simulations required in DO processes with deep active learning (DAL) for regression. We refer to this as deep active design optimization (DADO). In DAL, the objective is to train a deep learning model while actively selecting and annotating the most useful samples from a non-annotated dataset <cit.>. The criteria for determining the most valuable sample depend on the specific application area and will be explicitly defined later for our use case.
Unlike traditional passive learning approaches, which require a large amount of annotated data, DAL aims to reduce the annotation effort by iteratively selecting the most useful samples for annotation. In DAL, the selection is typically based on selection strategies, such as uncertainty or diversity sampling <cit.>. These strategies aim to identify samples that are expected to improve the model's performance the most. The selected samples are then annotated by an oracle, which could be a human expert or a computer simulation. The annotated samples are used to update the deep learning model, incorporating the newly annotated samples into the training process. This iterative cycle of self-selecting samples, annotating samples, and updating the model continues until a pre-defined stopping criterion is achieved. The main advantage of DAL for regression is the potential to achieve high performance with less annotated data compared to traditional supervised learning approaches <cit.>. We conduct experiments on a real-world DO use case (cf. Figure <ref>) in the problem domain of fluid dynamics and thermodynamics, where flow deflections significantly contribute to efficiency losses in technical systems, such as piping systems for industrial heating and cooling. The objective is to discover a design that both reduces pumping power and ensures sufficient cooling capacity. This is a typical multi-objective and multi-physics optimization problem. Our approach employs DAL to reduce the number of computer simulations required for DO by selecting only the most valuable samples (i.e., those that are expected to yield the best performance gains), rather than accelerating individual simulations. We begin with a small number of randomly drawn annotated samples (i.e., designs) and a large data-pool of non-annotated samples (i.e., design candidates). The selection strategy iteratively selects design candidates to be evaluated by the computer simulation (i.e., expert model) to provide the ground truth annotation. The objective of this approach is to maximize performance with as few requests to the expert model as possible by selecting only those design candidates that are expected to be the most valuable for the model's performance. In typical DAL scenarios, the primary objective is to attain high predictive performance across the entire dataset. In DADO, the primary objective is to find a multi-objective optimal solution with as few candidate evaluations as possible. Since we are only interested in promising candidates, the predictive performance must only be high for these candidates, and it is not necessary to discriminate between mediocre and bad candidates. Consequently, our interest lies in a prediction model that exhibits strong performance and generalization within the feature space (i.e., design space) where the optimal solution is likely to reside. Metaphorically, this concept can be linked to a shrouded mountain range, where the peaks of different mountains emerge above a dense layer of fog. Rather than focusing on the entirety of the mountain, we solely concentrate on the elevated summits. One challenge in DAL is that the selection of promising design candidates for annotating can be biased towards certain regions of the design space <cit.> which results in bad model generalization. In contrast, we deliberately induce a bias by exploiting only the most promising regions in the design space. 
Thus, since conventional selection strategies are not well-suited to address our primary objective, we have developed two low-cost selection strategies that enable a model training within the relevant design space. They are characterized by their ease of implementation, low computational cost, and high effectiveness in finding promising design candidates. We refer to them as L2-Select and L2-Reject, as they select or reject design candidates based on the L2 norm. The proposed selection strategies are also applicable to other self-optimization problems and can be used to guide decision-making. Additionally, we propose two metrics tailored to the DAL regression problem to monitor and evaluate the model's performance at each iteration. Two scenarios with high and low annotation budgets with different DAL experiment parameters are investigated. This experience report presents our proposal to address DO problems using DAL methods. In addition to the publicly available code [<https://git.ies.uni-kassel.de/jdecke/lcs4dado>] developed on an open access dataset <cit.> we provide reproducible experiments and the following contributions to the research area. * We conduct initial research in applying DAL in the domain of DO as an optimization method to efficiently discover promising design candidates, therefore reducing the number of numerical simulations. * We propose two novel low-cost selection strategies for multi-objective DADO. Additionally, we introduce two metrics to evaluate the model's performance. * We make the first steps towards a deep generative active learning-based model for DO. The report also presents and discusses the challenges we encountered during this process. The remainder of this article is structured as follows. Section <ref> briefly overviews related work, focusing specifically on deep learning in design optimization and active learning. Section <ref> delves into the considered problem domain, providing a concise discussion on design optimization and focusing on the domain of fluid dynamics. Furthermore, we introduce our dataset, outlining its relevance to our research. Moving on to Section <ref>, we present our methodology in detail, describing how we trained a deep neural network and highlighting the selection strategies employed. Section <ref> is dedicated to the experimental setup and its results, where we compare our newly developed selection strategies against random strategies, providing insightful analyses and statistical observations. In Section <ref>, we present an idea to extend the described method to include a variational autoencoder (VAE) for future research. Finally, the article is concluded in Section <ref>. § RELATED WORK The optimization of design is a fundamental problem in engineering that has been extensively investigated over several decades <cit.>. Recently, there is a growing interest in employing machine learning methods to study DO problems. This interest is spurred by two factors: first, the emergence of new additive manufacturing techniques, which enable the production of free-form designs <cit.>; and second, the availability of computing power that allows the resolution of complex and relevant industrial problems <cit.>. For example, a current study shows the possibilities of combining DO and additive manufacturing of electromagnets <cit.>. Nie et al. <cit.> proposed TopologyGAN in 2021. It is used to optimize the stress and strain in a simple truss structure by comparing it with a baseline conditional generative adversarial network. 
The authors generate a dataset comprising already optimized truss structures, which were dependent on the size and direction of the load. The model's generalization capability was evaluated by applying unknown load boundary conditions. Although TopologyGAN did not perform optimization, it was able to identify an optimal truss structure for changed boundary conditions. The authors of <cit.> employed a graph neural network; with knowledge of the boundary conditions, they aim to generalize to previously unobserved or superimposed numerical meshes. A study from 2022 investigates if anomaly detection algorithms can be used to solve DO problems <cit.>. A significant problem is the tradeoff between exploration and exploitation. The key finding is that anomaly detection can be used to explore the design space. Still, there is a great difficulty in exploitation because anomaly detection algorithms would consider a design candidate as already detected whose target value is only slightly better than an already known one. The methodology in this work seeks to focus on exploitation without compromising exploration. Genetic algorithms (GA) such as the Non-dominated Sorting Genetic Algorithm 2 are well-established methods for solving DO problems; however, their convergence speed is rather slow <cit.>. In 2022, Parekh et al. developed a generative model for electrical machines with multiple topologies by using VAE in conjunction with a prediction network <cit.>. They concatenated the design parameter spaces of two distinct machine topologies and trained a latent representation that was highly effective in reconstructing the input. The latent dimension employed was defined to be greater than the design parameter space of the more complex machine topology in the latent space. Consequently, the latent representation did not compress any information of the input, and we hypothesize that the network learned the identity of the input designs only. The prediction network extended the capabilities of the VAE to enable it to predict objective values in a supervised manner. The dataset of both machines used in their study included 28,278 designs, which is a considerable amount of data. In real-world scenarios, DO problems do not typically provide such a large dataset. So our approach aims to use a significantly smaller number of design candidate with the help of DAL without compromising the model's prediction performance. Unfortunately, it was not possible to reproduce and extend their ideas because the code and data were not publicly available. To the best of our knowledge, DAL was not yet directly applied in DO. Nevertheless, Deng et al. introduce a comparable approach called Self-directed Online Learning Optimization for topology optimization in 2022 <cit.>. This approach integrates neural networks (NN) with numerical simulation data. The NN learns and substitutes the target as a function of design variables. At the same time, a small subset of training data is generated dynamically based on the NN prediction of the optimum. The NN fits the new training data and provides a better prediction in the region of interest until convergence. New design candidates selected for numerical evaluation are generated by adding a disturbance to the best design of the previous iteration, similar to mutation and crossover of GA. The main difference between the work of Deng et al. and this article is how the selection strategy performs. 
We focus on low-cost selection strategies, while they added disturbance to their design parameters. Furthermore, we have a vast dataset available to conduct our experiments offline. A request to the computer simulation can be answered instantaneously by drawing a design from the data-pool. § USE CASE The DO methodology developed in this work is based on a use case from the field of fluid dynamics and thermodynamics, but can also be applied to other problems and domains such as aerospace engineering, automotive industry, civil engineering, architecture, and manufacturing. In aerospace engineering, DO is used to improve the performance and efficiency of aircraft components, such as wings, fuselage, and engines. In the automotive industry, DO is employed to enhance the performance and safety of vehicles, such as improving aerodynamics, reducing emissions, and increasing efficiency of electromagnets <cit.>. In civil engineering, DO is applied to optimize the design of structures such as bridges, buildings, and dams, in terms of strength, stability, and cost. In architecture, DO is used to improve building performance regarding energy efficiency, natural light, and structural integrity. In manufacturing, DO is employed to optimize the design of products, such as reducing material waste and improving production efficiency. Our use case is a U-Bend flow channel. They can be found in various technical systems and applications, particularly those involving fluid transport or heat transfer. They are commonly employed in heat exchangers, such as condensers and evaporators, where they facilitate the transfer of heat between a fluid and its surroundings. U-bend flow channels can also be utilized in piping systems, refrigeration systems, air conditioning systems, and hydraulic systems to redirect or control the flow of fluids. The parameterization of the U-Bend is depicted in Figure <ref>. It is described with 28 design parameters and two target values. The parameterized geometry utilizes six boundary points, illustrated in green, with each boundary point offering two design parameters that are allowed to vary within their respective dashed bounding boxes. Additionally, we incorporate 16 curve parameters to connect these boundary points. In Figure <ref>, we present exemplary the pressure distribution of a particular design candidate, obtained through numerical simulation using the expert model. In a subsequent post-processing analysis, the pressure loss is computed based on this simulated solution. The design parameters determine the shape of the flow deflection, while the target values represent the pressure loss in [Pa] and the cooling capacity, which is quantified as the squared temperature difference between the heating surface and the cooling medium in [K^2m^2]. A small temperature difference corresponds to a high cooling capacity. The dataset comprises three distinct data formats for each design. However, for the purpose of this study, our focus lies solely on the parameter representation of the designs. This particular representation is chosen due to its streamlined and efficient nature, making it ideally suited for our methodology. The data is freely available and can be found in <cit.>, providing additional information on this specific use case and the numerical investigations to obtain the data. § METHODOLOGY §.§ DAL Process We present the methodology in Figure <ref>. 
The DAL process starts by randomly selecting initial_size designs candidates for training X_train_0 (depicted as a grey box) from a data-pool (depicted as a blue box). Based on the design candidates X_train_{i}, the Expert Model determines the corresponding target values y_train_{i}. Where i is the iteration loop count, indicating how many times the process has looped. Subsequently, the design candidates and the target values are used to train the Meta Model in a supervised manner. After training, the Meta Model predicts the target values of a draw_size large number of design candidates X_draw_{i} (depicted as green box). These predictions are passed to the Selector. Draw_size many random design candidates X_draw_{i} are bootstrapped in every iteration. Based on the selection strategy, the Selector chooses a subset of design candidates X_aq_{i} with the acquisition size aq_size. The Expert Model determines the true target values and the iteration loop finishes by adding the newly acquired designs to the training dataset. Each training cycle starts with newly initialized weights of the Meta Model. This loop iterates until a defined number n_iter is achieved. Expert Model: The Expert Model is not directly needed in this work, since a large annotated data-pool is available. Therefore, the Expert Model can be simulated to ensure a simple and fast pool-based experimentation and evaluation. However, the introduced experimental procedure can be used in an online setting where the Meta Model has to generate annotations on-the-fly. Meta Model: This study utilizes a multi-layer perceptron (MLP) as the Meta Model, with the first hidden layer consisting of 200 neurons and the second hidden layer comprising 100 neurons. A leakyReLU activation function and a dropout layer are applied to each hidden layer to enhance the model's generalization performance. The dropout rate is set to a constant value of 0.1. For the regression task, the output layer consists of one linear neuron for each of the two target values. Both the learning rate and the batch size are kept constant. The value for the learning rate is set to 0.0005 and the batch size to 4. An early stopping criterion in case the training error does not reduce further after 10 epochs is performed. The hyperparameters were determined based on results of preliminary studies. The weights of the best performing epoch are reloaded to evaluate the model's performance. At each process iteration, the model is trained from scratch to avoid potential bias to data selected in earlier iterations <cit.>. §.§ Selection Strategies We developed two simple but efficient selection strategies for DADO named L2-Select (L2S) and L2-Reject (L2R). These strategies can be characterized as simple in the sense that they are model-agnostic and they solely necessitate a point estimate for their targets. With these selection strategies, there is no need to rely on complex, computationally expensive and sensitive methods for uncertainty modeling. First, draw_size design candidates are bootstrapped from the entirety of the non-annotated data-pool to prevent test data leakage and to ensure an unbiased test of the model after each iteration. Subsequently, the target values y⃗_⃗n⃗ of these design candidates are determined by the Meta Model. 
The goal of the strategies is to choose aq_size candidates from the target value set y_draw_i: y_draw_i = {y⃗_n | y⃗_n ∈ J_1 × J_2} with |y_draw_i| = draw_size, where J_1 and J_2 represent the objectives (for the use case J_1: pressure loss, J_2: cooling performance). For the L2S selection strategy, the aq_size design candidates with the smallest magnitude (or L2-norm) of the target value vector y⃗_n are selected, cf. Equation (<ref>). The L2-norm L2S(y⃗) = |y⃗| = √(∑_j=1^num_obj y_j^2) is calculated as the square root of the sum of the squared elements y_n,j of the target vector, as shown in Equation (<ref>), and the selected subset consists of the aq_size candidates with the smallest values: { L2S(y⃗_n) | y⃗_n ∈ y_draw_i, L2S(y⃗_n) ≤ L2S(y⃗_n+1) }. A graphical interpretation of this strategy is provided in Figure <ref>. L2R uses an adapted variant of the L2-norm whose origin is the design candidate in the draw with the largest predicted target values y_max,j, i.e., the origin corresponds to the per-objective maxima of the currently considered design candidates from y_draw_i, cf. Equation (<ref>). Equation <ref> shows the adapted L2-norm, and its graphical interpretation is given in Figure <ref>. Instead of selecting the design candidates with the lowest values of L2R(y⃗_n), the first draw_size - aq_size design candidates are rejected, cf. Equation (<ref>), and the remaining aq_size design candidates, which are not rejected, are selected: y_max,j = max{y_n,j}, y⃗_n ∈ y_draw_i, L2R(y⃗) = √(∑_j=1^num_obj (y_j - y_max,j)^2), { L2R(y⃗_n) | y⃗_n ∈ y_draw_i, L2R(y⃗_n) ≤ L2R(y⃗_n+1) }. When comparing the two selection strategies in more detail, the differences in the choice of design candidates can be highlighted more clearly. In Figure <ref>, 400 design candidates are plotted following a multivariate Gaussian distribution. Each selection strategy separates the aq_size selected design candidates from the unselected design candidates, which are shown in blue. Design candidates selected by both selection strategies are indicated in purple, and design candidates marked in red or green show the differences between the two selections. We propose L2R as an alternative to the L2S strategy because it may offer several advantages. Firstly, we assume that L2R effectively accounts for design candidates that reside at the edges of the target space, which are often overlooked by the L2S strategy. Additionally, the design candidates selected by L2R are more likely to correspond to a Pareto front, which is a key objective in multi-objective optimization. In contrast, design candidates drawn from the core of the distribution are less likely to offer diversity in the design space, suggesting L2R to be the preferred selection strategy. We compare these selection strategies against each other and against a random selection. § EXPERIMENTS §.§ Setup We define two experimental scenarios: a low-budget experiment (S1) and a high-budget one (S2). The main difference between the two scenarios is the number of initial design candidates X_train_0 for training and the number of design candidates X_aq_i added to the training dataset per iteration. S1 uses an initial_size of 100 design candidates, its draw_size is set to 400, and the acquisition size aq_size varies over {10, 20, 25, 50} design candidates per iteration until a budget of 500 design candidates is exhausted.
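To make the two strategies defined in the preceding subsection concrete, the following NumPy sketch implements the L2S and L2R rules on a toy draw; the Gaussian stand-in for the Meta Model predictions, the array shapes, and the acquisition size are illustrative assumptions only.

```python
# Illustrative sketch of the L2S and L2R selection rules defined above.
# y_draw is a (draw_size, 2) array of predicted (pressure loss, cooling) values;
# the random data below only stands in for the Meta Model predictions.
import numpy as np

def select_l2s(y_draw: np.ndarray, aq_size: int) -> np.ndarray:
    """Keep the aq_size candidates with the smallest L2 norm of the target vector."""
    scores = np.linalg.norm(y_draw, axis=1)
    return np.argsort(scores)[:aq_size]

def select_l2r(y_draw: np.ndarray, aq_size: int) -> np.ndarray:
    """Reject the draw_size - aq_size candidates closest to the per-objective maxima."""
    origin = y_draw.max(axis=0)                        # y_max,j for each objective j
    dist_to_worst = np.linalg.norm(y_draw - origin, axis=1)
    rejected = np.argsort(dist_to_worst)[: len(y_draw) - aq_size]
    return np.setdiff1d(np.arange(len(y_draw)), rejected)

rng = np.random.default_rng(0)
y_draw = rng.normal(size=(400, 2))                     # toy stand-in, as in Figure <ref>
idx_s, idx_r = select_l2s(y_draw, 25), select_l2r(y_draw, 25)
print(len(np.intersect1d(idx_s, idx_r)), "candidates chosen by both strategies")
```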
We selected the experimental parameters for our DAL experiments based on the observation that datasets in the domain of DO are generally very small. Thus, the parameters for experiment S1 were chosen to represent a real-world scenario. Scenario S2 consists of 500 initial design candidates X_train_0 and acquires {50, 100, 125, 200} X_aq_i per loop execution from its draw_size of 2000 design candidates until a budget of 1500 is reached. In S2, we show the process again with a larger budget as it is typically used for DO, but the amount of data can still be considered to be very small for deep learning applications. All experiments for the multi-objective optimization are performed using the L2S, the L2R and a random selection strategy. In addition, each experiment is performed with 5 different random seeds to ensure a representative evaluation. The different DAL experiment parameters that are investigated are summarized in Table <ref>. To evaluate the experiments presented, we employ various metrics, including the mean square error (MSE), the spearman rank-order correlation coefficient (SROCC), as well as the mean rank (MR) and intersection metrics, which we introduce below. The results of the conducted experiments are summarized with the help of the area under the learning curve (AUC) metric in Table <ref>. This allows us to evaluate an entire optimization process in a single value per metric. To calculate the SROCC and MR metrics, it is necessary to sort the target value y⃗_⃗n⃗ of the estimated y_draw_i and the true values y_draw_i_true according to their quantity. To do so, we sort the currently drawn candidates X_draw_i based in the current selection strategy S (i.e., L2S, L2R, random) and the true performance values, cf. Equation (<ref>). The set K contains the indices from the sorted candidates set that correspond to the candidates in X_aq. The MR metric is then the average of the first aq_size indices of the set K. Additionally, we normalized the MR to be between 0 and 1, where 0 corresponds to its optimal value depending on its aq_size and the MR after the first process iteration which is to be assumed the highest value of the process. The optimal value of the MR metric would therefore result in aq_size · 1/2, K = { k | x⃗_⃗n⃗ = z⃗_⃗k⃗, x⃗_⃗n⃗∈ X_aq, z⃗_⃗k⃗∈sort(X_draw_i, S(y_draw_i_true)) } MR = 1/aq_size∑_k ∈ K k The SROCC is calculated using the first aq_size indices of both sorted lists as input and outputs a value between 0 and 1, where a value of 1 indicates that the aq_size design candidates of the predicted values match the correct sorting of the true values. The intersections metric assesses the accuracy of the top-rated designs. The metric is relatively simple. It compares the aq_size selected candidates X_aq_i against X_aq_i_true⊂ X_draw_i, the aq_size candidate selected based on the ground truth performance. The intersection of both sets can be used to directly calculate the accuracy which is based on the cardinality of the intersection, cf. Equation (<ref>). The name intersection for the metric is based on the intersection operation. intersection = |X_aq_i ∩ X_aq_i_true|/aq_size We prioritize SROCC, MR, and intersections metrics over classic MSE for DADO, as accurate ranking of designs is more crucial than precise estimations of target values. With the true ranking of the designs, the true y_aq_{i} values are calculated using the Expert Model. Nevertheless, we assume that DAL will lead to an improvement of the MSE of the added designs X_aq_{i} after each iteration. 
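The three evaluation metrics described above can be summarized by the following sketch; the exact normalization of MR used in the paper (rescaling between its optimum and its first-iteration value) and the handling of ties are omitted here for brevity.

```python
# Sketch of the evaluation metrics used above (intersection, mean rank, SROCC).
# pred_scores/true_scores are NumPy arrays of L2S (or L2R) values on X_draw_i
# under the predicted and the true targets, respectively.
import numpy as np
from scipy.stats import spearmanr

def intersection_metric(pred_scores, true_scores, aq_size):
    pred_sel = set(np.argsort(pred_scores)[:aq_size])
    true_sel = set(np.argsort(true_scores)[:aq_size])
    return len(pred_sel & true_sel) / aq_size

def mean_rank(pred_scores, true_scores, aq_size):
    true_order = np.argsort(true_scores)               # candidates sorted by true score
    rank_of = {cand: r for r, cand in enumerate(true_order, start=1)}
    selected = np.argsort(pred_scores)[:aq_size]
    # optimum is approximately aq_size / 2 (exactly (aq_size + 1) / 2)
    return np.mean([rank_of[c] for c in selected])

def srocc_on_selected(pred_scores, true_scores, aq_size):
    selected = np.argsort(pred_scores)[:aq_size]
    return spearmanr(pred_scores[selected], true_scores[selected]).correlation
```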
§.§ Results In Table <ref>, we present the AUC and the final value at the end of the process for all experiments and metrics. S1 is highlighted in green and S2 in blue, while the three selection strategies are differentiated by varying shades of gray. The results indicate that L2S outperforms L2R in every single experiment and that the random strategy consistently yields the worst results, except in the case of rnd_MSE which is the MSE between y_draw_i and its true annotations. Additionally, the quality of the results does not simply increase with its aq_size, for S1. For S2, the experiment with the smallest aq_size based on the best_MSE, the MR and the SROCC provide predictions with the highest performance. The best_MSE is the MSE between y_aq_i and its true annotations. The superiority of the random selection strategy in the rnd_MSE metric is attributable to the bias that we attempt to impose through our selection strategies, whereby the model's predictions in regions where the selection strategies assume promising values are expected to yield a higher performance. As such, it is reasonable that models that are trained using the design candidates suggested by L2S and L2R would perform worse in other regions. The best_MSE identifies the MSE that was evaluated on the selected design candidates. This metric monitors the predictive performance of our model. To look at the results in more detail, we have selected an experiment for S1 and S2 which we would like to discuss in more detail below. In Figure <ref> shows the results for S1 with a aq_size of 25 design candidates per iteration. Since our initial_size is 100 design candidates and our budget is 500 design candidates in total, we iterate 16 times. In Figure <ref>, we present the intersections metric, the SROCC in Figure <ref>, the rnd_MSE in Figure <ref>, and the best_MSE in Figure <ref>. The lines represent the mean values obtained from the five runs conducted for each experiment. In addition to the mean values, the plot displays the standard-error intervals for each metric and selection strategy. Throughout the course of the experiment, it is evident that the intersections and the SROCC show an increasing trend, while the rnd_MSE and the best_MSE exhibit a decreasing trend. Although the random strategy shows good predictive performance on the randomly selected design candidates, it underperforms compared to the two selection strategies on the promising design candidates. The benefits of the low-cost selection strategies become apparent upon examining the following metrics. The intersection metric shows that the process develops a self-awareness in the course of the iterations and is increasingly able to select suitable design candidates for multi-objective DO. However, it becomes apparent that the intersection metric is too strict for random selection, and despite being able to improve, models trained on the basis of random selection are still unable to satisfy this metric. Therefore, the MR and the SROCC are introduced as alternative metrics. While the visualization of the MR has been omitted due to limited space, it is proven to be a useful metric for comparing experiments (see Table <ref>). The SROCC shows a similar qualitative trend as the intersection metric, with L2S outperforming L2R and the random strategy. However, it also reveals that the random strategy improves in sorting the draw_size design candidates by rank based on their target value over the iterations, which is not reflected by the intersection metric. 
Unexpectedly, the L2S strategy outperforms the L2R strategy, which may be attributed to the nature of the available data. The selection strategy was originally designed for a multivariate Gaussian distribution; however, as illustrated in Figure <ref>, the two scaled target values of the real data do not conform to a Gaussian distribution, hence the solution quality of L2S exceeds that of L2R in this use case. As stated before, the S2 experiment with an aq_size of 50 produced the best results. Therefore, we will examine the results of this experiment more closely in the Figure <ref>. Since S2 had a budget of 1500 design candidates, this implies that 20 iterations were completed. When examining the results from S2, it becomes evident that the standard-error intervals in the experiments are considerably reduced due to the larger budget. Additionally, the metrics are notably improved when compared to S1. The disparities between the selection strategies mentioned earlier are also more distinct in evaluating S2 but are in line with the outcomes previously discussed for S1. Detecting any noticeable variation in prediction quality based on the rnd_MSE is challenging for both L2S and L2R. The best_MSE values exhibit almost identical patterns and trends. Nonetheless, differences in the performance between L2R and L2S can be observed with the aid of the intersection and the SROCC metrics. Notably, a decline in the slope of the curve can be inferred with random selection, as indicated by the SROCC. Also noteworthy is the high fluctuation of the best_MSE in random selection, from which it can be concluded that the prediction performance on the selected design candidates is considerably lower. In comparing the four metrics between the final iteration of S1 (cf. Figure <ref>) and the initial iteration of S2 (cf. Figure <ref>), a considerable performance improvement is observed in favor of S1, despite both scenarios having an equal budget at that stage. This finding supports the effectiveness and benefits of our methodology in DO § TOWARDS GENERATIVE DEEP ACTIVE DESIGN OPTIMIZATION Based on our confidence in the feasibility of performing self-optimizing multi-objective optimization using DAL, we aim to augment the Meta Model in the presented process with a VAE. Similar to Parekh et al. <cit.>, we extend the VAE with an additional prediction network and, thereby, perform a multi-task regression and reconstruction model. As described in Section <ref>, we believe that their VAE is exclusive learning the identity of the two motor topologies. The reason for that is the chosen size of the latent space and the fact that the used trainingset does not represent a real-world DO scenario. Our idea is to embed the VAE into the DAL process presented above. As a selection strategy, a clustering approach in the latent representation shall be applied to separate areas of promising design candidates from other less well-performing design candidates. The generative properties of the VAE will then be used to specifically generate new design candidates which belong to the promising area of latent space. The smaller the latent size is, the easier it will be for clustering methods to separate these areas, but the more challenging the subsequent reconstruction of the design candidates might be. The prediction network is parallel to the decoder of the VAE in the latent space. With the help of this additional network, the latent space can be divided based on the predicted target values, to enable clustering. 
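As a concrete illustration of the architecture sketched above, the following PyTorch snippet couples a VAE over the 28 design parameters with a parallel prediction head on the latent code; all layer widths, the latent dimension, and the loss weights (including the KL weight β) are assumptions rather than the configuration used in our experiments.

```python
# Sketch of a VAE whose latent code feeds both a decoder (reconstruction of the
# 28 design parameters) and a parallel regression head (the two targets), as
# described above. Layer widths, latent_dim, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DesignVAE(nn.Module):
    def __init__(self, n_params=28, n_targets=2, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_params, 64), nn.LeakyReLU(),
                                     nn.Linear(64, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.LeakyReLU(),
                                     nn.Linear(64, n_params))
        self.predictor = nn.Sequential(nn.Linear(latent_dim, 32), nn.LeakyReLU(),
                                       nn.Linear(32, n_targets))

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.decoder(z), self.predictor(z), mu, logvar

def loss_fn(x, y, model, beta=0.1, gamma=1.0):
    x_hat, y_hat, mu, logvar = model(x)
    rec = F.mse_loss(x_hat, x)                                     # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()) # KL divergence
    pred = F.mse_loss(y_hat, y)                                    # target regression
    return rec + beta * kld + gamma * pred
```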
Although numerous experiments have been conducted to optimize the structure and hyperparameters of the VAE, a suitable trade-off between a well-separable latent space and a reconstruction error, that is not excessively large, has yet to be found. If the reconstruction error is too large, it is expected that there will be a high deviation between the prediction of the Meta Model and the Expert Model. On the other hand, if the prediction performance is too low, patterns cannot be detected in the latent space. This issue may be attributed to the weighting factor β, which determines the influence of the Kullback Leibler divergence. Several approaches have been explored, such as introducing a cyclical annealing schedule <cit.> for β, to address the reconstruction and separability trade-off. However, no clear trend can be observed in Figure <ref>, which displays the absolute deviation of the reconstruction of eight random test design candidates. Each of the 28 boxes, arranged in a 7x4 grid, represents a design parameter. In our next steps, we plan to introduce cyclic training for the reconstruction and prediction tasks. We believe that the VAE, as a Meta Model, will play a central role in DADO, and therefore, we consider it the focus of our future work. § CONCLUSION In this experience report, we have demonstrated the feasibility of utilizing DAL to tackle DO, leveraging a large pool of non-annotated data. The developed DAL selection strategies for regression applied to a multi-objective DO have shown promising results. The results remain consistent across all of the experiments. The conducted experiments show that based on rnd_MSE metric the performance of the random selection strategy surpasses both of our selection strategies. This outcome is, however, unsurprising, as the MSE of the prediction is computed based on randomly selected design candidates from the entirety of our data-pool. Nevertheless, our objective is to bias the model to perform well on the self-selected design candidates, which are advantageous for self-optimization by proposing promising design candidates only. Our assumption that the developed L2R selection strategy would outperform the L2S strategy due to its more Pareto-like selection was not confirmed by the experiments. The reason for this is that the method was developed using a multivariate Gaussian distribution. In our dataset, the assumption of a Gaussian is not fulfilled. Especially, for small draw_sizes. Both strategies presented in this article rely on the L2-norm, which selects a sample circularly around an origin. To improve the robustness of the selection to differently scaled target values, one possibility is to replace the circular selection with an ellipsoid. Our study demonstrated that the selection strategies are providing promising results with two target values, an extension to higher dimensional multi-objective optimization should be straightforward. Further, we have shown the limitations of incorporating a generative model into the DADO process. We plan to develop a selection strategy based on a clustering procedure in the latent space, once we have achieved a good balance between reconstruction and disentanglement in the latent space. Subsequently, we propose the integration of the complex numerical simulations into the process, described in this work, enabling real-time generation of annotations for design candidates outside the existing data-pool. 
Moreover, there is clear potential in exploring the Meta Model and its hyperparameters to enhance prediction quality and accelerate DO, which was not the focus of this study. The raw data available in numerical simulations, i.e., the numerical meshes, could be investigated using graph neural networks in order to determine whether another data representation is advantageous for performance predictions in DADO. With our research endeavors, we seek to contribute to the reduction of CFD and FEA simulations on HPC systems. By reducing the number of such simulations, we aim to lower the associated energy costs and mitigate the associated climate-damaging emissions, thus promoting a more sustainable and environmentally conscious approach for future computational simulations. § ACKNOWLEDGMENT We express our gratitude to Dr. Franz Götz-Hahn for the insightful discussions.
http://arxiv.org/abs/2307.03870v1
20230708005332
Opacity of Parametric Discrete Event Systems: Models, Decidability, and Algorithms
[ "Weilin Deng", "Daowen Qiu", "Jingkai Yang" ]
cs.FL
[ "cs.FL", "cs.SY", "eess.SY" ]
Opacity of Parametric Discrete Event Systems: Models, Decidability, and Algorithms Weilin Deng, Daowen Qiu^⋆, and Jingkai Yang Weilin Deng is with the School of Internet Finance and Information Engineering, Guangdong University of Finance, Guangzhou, 510521, China (e-mail: [email protected]). Daowen Qiu (Corresponding author) is with the Institute of Quantum Computing and Computer Theory, School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou, 510006, China (e-mail: [email protected]). Jingkai Yang is with the School of Mathematics and Statistics, Yulin Normal University, Yulin, 537000, China (e-mail: [email protected]). The finite automata (FAs) model is a popular tool to characterize discrete event systems (DESs) due to its succinctness. However, for some complex systems, it is difficult to describe the necessary details by means of the FAs model. In this paper, we consider a kind of extended finite automata (EFAs) in which each transition carries a predicate over state and event parameters. We also consider a type of simplified EFAs, called Event-Parameters EFAs (EP-EFAs), where the state parameters are removed. Based upon these two parametric models, we investigate the problem of opacity analysis for parametric DESs. First of all, it is shown that the EFAs model is more expressive than the EP-EFAs model. Secondly, it is proved that the opacity properties for EFAs are undecidable in general. Moreover, the decidable opacity properties for EP-EFAs are investigated. We present verification algorithms for current-state opacity, initial-state opacity and infinite-step opacity, and then discuss their complexity. This paper establishes a preliminary theory for the opacity of parametric DESs, which lays a foundation for the opacity analysis of complex systems. Opacity, discrete-event systems, parametric finite automata, extended finite automata § INTRODUCTION Over the last ten years, the problem of opacity analysis for discrete event systems (DESs) has received considerable attention. Opacity is an important security property, which was initially introduced in computer science to analyze cryptographic protocols. Roughly speaking, a DES is said to be opaque if the intruder cannot determine the occurrence of the secret behavior from its observations of the system. The finite automata (FAs) model is a popular tool to describe DESs at the logical level due to its succinctness <cit.>. The notions of language-based opacity <cit.>, <cit.>, current-state opacity <cit.>, initial-state opacity <cit.>, infinite-step opacity <cit.> and pre-opacity <cit.> for the FAs model were well investigated in recent years. In addition, opacity enforcement based on the techniques of supervisory control and output obfuscation was proposed (e.g., see <cit.>-<cit.> and references therein).
For some complex systems, it is difficult to describe the necessary details and analyze their opacity properties by means of FAs model, and thus some extended models are necessary. Actually, the opacity properties for various extended models were investigated recently, such as time systems <cit.>, networked systems <cit.>, Petri nets <cit.>-<cit.>, cyber-physical systems <cit.>, probabilistic systems <cit.> fuzzy systems <cit.>, and the other systems <cit.>-<cit.>. In the field of system modeling, control flow refers to the possible sequences of the interactions between a system and its environment, and data flow refers to the constraints on data parameters in the interactions <cit.>. FAs model well describes control flow, but fails to capture data flow and the mutual influence between control flow and data flow efficiently. A typical example is modeling network protocols, where the models must characterize how different parameter values in sequence numbers, user IDs, socket IDs, etc., affect the control flow. Another easy-to-understand example is modeling the process of web-site registering that usually requires a user to provide her/his identical password twice (see Examples <ref>-<ref> and Remark <ref> in Section II for details). Obviously, it is difficult and inefficient for FAs model to do such things. To address this problem, in this paper, we also consider a kind of extended finite automata (EFAs), in which the states and events are both augmented with parameters, and each transition carries a predicate and an update function over these parameters. The EFAs model is a powerful but complicated tool. It is hard to analyze some properties of EFAs, and we prove that the opacity properties of EFAs are undecidable. Thus, we also consider a simplified EFAs model, called Event-Parameters EFAs (EP-EFAs), where the state parameters are removed. By means of the transitions carrying predicates over parameters, the models of EFAs and EP-EFAs improve FAs model in efficiently representing and handling some complex systems where control flow, data flow and the interactions between them are required to be characterized. In general, EFAs and EP-EFAs can be viewed as a special type of infinite and finite state models, respectively, with infinite alphabet, which have been well investigated in computer science (e.g., see <cit.>-<cit.>). For the general infinite state automata (ISA), only a few properties are decidable <cit.>, <cit.>, However, for some types of ISA, there exist quite a few decidable properties, e.g., the properties of reachability, simulation and eventuality of Well-Structured ISA are all decidable <cit.>. On the other hand, for the finite state models with infinite alphabet, there are many decidable properties, as well as undecidable properties <cit.>. For example, the emptiness and language inclusion of 1N-RAs are decidable; however, its universality and equivalence are undecidable <cit.>. In this paper, the aforementioned EFAs and EP-EFAs are referred to as parametric DESs collectively. We would like to establish a preliminary theory for the opacity of parametric DESs, which lays a foundation to analyze the opacity of some complex systems. To the best of our knowledge, this is the first study on the opacity analysis of parametric DESs. The main contributions of this paper are as follows. * Two parametric models, i.e., EFAs and EP-EFAs, are introduced for DESs, and then it is proved that the latter can be simulated by the former but the reverse does not hold. 
This means that EFAs model is more expressive than EP-EFAs model. We also illustrate that these two parametric models are both more expressive and efficient than FAs model. * We formulate the current-state opacity, initial-state opacity and infinite-step opacity for parametric DESs, and then prove that these opacity properties for EFAs are all undecidable in general. The basic idea of the proof is reducing the halting problem of Two-Counter Machines (2CMs) to the verification of the opacity properties. * We investigate the decidable opacity properties for EP-EFAs. Based on the symbolic observer, the verification algorithms for current-state opacity, initial-state opacity and infinite-step opacity are provided, and the complexity is analyzed. The rest of this paper is organized as follows. The system models for parametric DESs are introduced and investigated in Section II. The problem formulation and necessary assumptions are provided in Section III. In Section IV, the opacity properties of EFAs are proved to be undecidable, and in Section V, the decidable opacity properties of EP-EFAs are studied. Finally, Section VI concludes this paper. § PARAMETRIC MODELS In this section, we present some notations, and introduce two parametric models: extended finite automata (EFAs) and Event-Parameters EFAs (EP-EFAs), and then discuss their expressiveness and efficiency. Let ℕ be the set of natural numbers, and [m:n] be the set of integers {m,m+1,…,n}. Let Σ be an alphabet, Σ^* be the set of finite strings over Σ including empty string ϵ, Σ^k be the set of strings of length k, and Σ^≤ k be the set of strings of length i, i ∈ [0:k], over Σ. A language L over Σ is a subset of Σ^*. We denote by |Ω| the number of elements in the set of Ω, and by |s| the length of the string s ∈ L with a slight abuse of notation. A discrete event system (DES) is usually modeled as a finite automaton H=(Q, q_0,Σ, δ) <cit.>, where Q is the finite set of states, Q_0⊆ Q is the set of initial states, Σ is the finite set of events, and δ:Q ×Σ→ Q is the deterministic (partial) transition function. The transition function δ can be extended to domains Q ×Σ^* and 2^Q × 2^Σ^* by the usual manner. The generated language by H is L(H) = {s | ∃ q_0∈ Q_0, q ∈ Q, s.t. q = δ(q_0,s) }. A Boolean algebra is a tuple 𝒜=(𝒰, Ψ, ∙), where 𝒰 is the universe of discourse, and Ψ is the set of predicates closed under the Boolean connectives, substitution, equality and if-then-else terms <cit.>. The element φ∈Ψ is called an 𝒰-predicate in 𝒜, or just predicate when 𝒰 and 𝒜 are clear from the context. The denotation function ∙: Ψ→ 2^𝒰 maps a predicate to the valuations of variables that make the predicate true. Hence, for any φ, ψ∈Ψ, φ∧ψ = φ∩ψ, φ∨ψ = φ∪ψ, φ = 𝒰\φ <cit.>. For the true predicate ⊤ and false predicate , we have ⊤ = 𝒰 and = ∅. For any φ∈Ψ, φ is said to be satisfiable, denoted by isSat(φ), if φ≠∅. This paper solely focuses on Boolean algebras in which the predicate satisfiability is decidable. Throughout this paper, we denote by X and Y the (infinite or finite) domains of event and state parameters, respectively, and denote by x and y (with superscript and subscript usually) the event and state parameters, respectively. In addition, we use a, b (with superscript and subscript usually) to denote the specific values of event and state parameters, respectively. The model of extended finite automata (EFAs) is defined as follows. 
An extended finite automaton (EFA) is defined as E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ) where Q is the finite set of state tags, Σ is the finite set of event tags, X is the domain of one event parameter, Q_0⊆ Q and Q_m⊆ Q are the sets of tags of initial and marked states, respectively, Y is the domain of the state parameter and Y_0⊆ Y is the domain of parameter for initial states, and R is the set of symbolic transitions and each symbolic transition r ∈ R is of form q q where * q ∈ Q and q∈ Q are the tags of source and target states, respectively, which carry state parameters y_q∈ Y and y_q∈ Y, respectively; * k ≥ 0, the step-length of the transition, is the size of the tuple of event parameters in this transition; * σ∈Σ is the tag of the event, and if k ≥ 1, it carries a k-tuple of event parameters ⟨ x_σ^1, x_σ^2, …, x_σ^k⟩, x_σ^i∈ X, i ∈ [1:k], otherwise, it carries no event parameter; * φ is the guard of transition r, and it is a (Y × X^k)-predicate if k ≥ 1 otherwise a Y-predicate, and if event σ occurs at state q with the proper values of parameters to enable φ, then the transition r may be fired; * ξ, a Y × X^k→ Y function if k ≥ 1 otherwise a Y → Y function, is responsible for updating the parameter of target state according to the given parameters of source state and event when the transition r is fired. We denote by Ξ the special updating function that does nothing. If there are multiple transitions that can be fired at a state, then only one of them is fired nondeterministically. E is said to be deterministic, if no more than one transition can be fired synchronously at each state, i.e., for two different transitions q q_1 and q q_2, φ_1(b, ⟨ a_1, a_2, …, a_k⟩) ∧φ_2(b, ⟨ a_1, a_2, …, a_k⟩) dose not hold for any state parameter value b and event parameters values ⟨ a_1, a_2, …, a_k⟩. Moreover, the implicit ϵ-selfloop q q can be viewed as the special 0-step-length transition q q. The step-length of E is defined as the maximum of the step-lengths of the symbolic transitions in E. Actually, the symbolic transition q q, k ≥ 1, defines the set of concrete transitions {(q,b) (q,b) | (b, ⟨ a_1, a_2, …, a_k⟩) ∈φ ∧b = ξ(b,⟨ a_1, a_2, …, a_k⟩) }, where (q,b) and (q,b) are the source and target states, respectively, and σ⟨ a_1, a_2, …, a_k⟩ is the parameterized event of the concrete transition. For example, suppose X=Y={0, 1,2}, the symbolic transition q q denotes the set of concrete transitions {(q,0) (q,0), (q,1) (q,2)}. The symbolic transitions allow EFAs model to efficiently characterize the control flow (i.e., the possible sequences of fired transitions), data flow (i.e., the constraints on event parameters) and their interactions in a system. A parameterized string is a sequence of parameterized events, and a parameterized language is a set of parameterized strings. For a parameterized string u = v_1v_2… v_n, where v_i=σ_i⟨ a_i^1, a_i^2, …, a_i^k_i⟩, k_i≥ 0 [ if k_i= 0, then v_i=σ_i and 𝔇(u) = 𝔇(v_1… v_i-1v_i+1… v_n).], the data string of u, denoted by 𝔇(u), is obtained by stripping all the event tags σ_i, i.e., 𝔇(u) = ⟨ a_1^1, a_1^2, …, a_1^k_1⟩ ⟨ a_2^1, a_2^2, …, a_2^k_2⟩ … ⟨ a_n^1, a_n^2, …, a_n^k_n⟩ is a sequence of event parameter tuples. Intuitively, a data string is a sequence of data exchanges between the system and its environment that meet the data constraints. The flat data string is obtained by flattening the parameters of a data string in order, i.e., 𝔣𝔇(u) = a_1^1 a_1^2… a_1^k_1 a_2^1 a_2^2… a_2^k_2… a_n^1 a_n^2… a_n^k_n. Given an EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ). 
If there exist a series of concrete transitions (q_i,b_i) (q_i+1, b_i+1), where these v_i are parameterized events, i ∈ [1:n], n ≥ 1, we define the combined concrete transition as the path (q_1,b_1) (q_n+1, b_n+1) where u = v_1v_2… v_n. The language between a set of source states Q_1⊆ Q and a set of target states Q_2⊆ Q of the EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ) is defined as follows. L_Q_1^Q_2(E)= { u | ∃ q_1∈ Q_1, q_2∈ Q_2, b_1, b_2∈ Y, s.t. (q_1,b_1) (q_2, b_2) ∧ ( q_1∈ Q_0⇒ b_1∈ Y_0) }. The generated language and marked language by the EFA E are, respectively, defined as L(E) = L_Q_0^Q(E) and L_m(E) = L_Q_0^Q_m(E). The data language and marked data language of the EFA E are, respectively, defined as L_d(E) = ⋃_u ∈ L(E)𝔇(u) and L_md(E) = ⋃_u ∈ L_m(E)𝔇(u). The flat data language and flat marked data language of the EFA E are, respectively, defined as L_fd(E) = ⋃_u ∈ L(E)𝔣𝔇(u) and L_fmd(E) = ⋃_u ∈ L_m(E)𝔣𝔇(u). The EFA E shown in Fig. <ref> simulates the process of user registering in a web-site, where the user is required to provide his password twice for confirming its correctness. Suppose the charset for nickname and password are both Ω, then the domains of state and event parameters are Y=X=Ω^*. In the symbolic transition from q_0 to q_1, the user inputs his nickname, and the guard ⊤ does not block any input and the updating function Ξ does nothing. In the symbolic transition from q_1 to q_2, the user inputs his password for the first time (denoted by x_σ_2^1), and the updating function ξ is defined as y_q_2← x_σ_2^1 that means the password is stored to the target state q_2 as its parameter y_q_2. In the symbolic transitions from q_2 to q_3 and from q_2 to q_0, the user provides his password for the second time (denoted by x_σ_3^1). If these two passwords are identical (i.e., y_q_2=x_σ_3^1), then the former transition is fired, and the process goes to the final state q_3 and terminates successfully, otherwise (i.e., y_q_2≠ x_σ_3^1) the latter transition is fired, and the process fails and goes back to the initial state q_0. Note that the EFA E shown in Fig. <ref> is of 1-step-length. A more concise 3-step-length EFA with only two states and two symbolic transitions can also describe the same process. Before introducing this, we present a simplified EFAs model that has no state parameter. An Event-Parameters EFA (EP-EFA) is defined as S = (Q, Σ, X, Q_0, Q_m, T ), where Q is the finite set of states, Σ is the finite set of event tags, X is the domain of one event parameter, Q_0⊆ Q and Q_m⊆ Q are the sets of initial and marked states, respectively, and T is the set of symbolic transitions and each symbolic transition t ∈ T is of form q q where * q ∈ Q and q∈ Q are the source and target states, respectively; * k ≥ 0, the step-length of the transition, is the size of the tuple of event parameters in this transition; * σ∈Σ is the tag of the event, and if k ≥ 1, it carries a k-tuple of event parameters ⟨ x_σ^1, x_σ^2, …, x_σ^k⟩, x_σ^i∈ X, i ∈ [1:k], otherwise it carries no event parameter; * φ is the guard of transition t, and it is an X^k-predicate if k ≥ 1 otherwise the true predicate ⊤, and if event σ occurs at state q with the proper values of parameters to enable φ, then the transition t may be fired. If there are multiple transitions that can be fired at a state, then only one of them is fired nondeterministically. 
S is said to be deterministic, if no more than one transition can be fired synchronously at each state, i.e., for two different transitions q q_1 and q q_2, φ_1(⟨ a_1, a_2, …, a_k⟩) ∧φ_2(⟨ a_1, a_2, …, a_k⟩) does not hold for any event parameter values ⟨ a_1, a_2, …, a_k⟩[If k = 0, then φ_1 = φ_2 = ⊤ by the definition of symbolic transition. In this case, determinism requires that no two such different transitions q q_1 and q q_2 exist, which is exactly the condition for deterministic FAs.]. Moreover, the implicit ϵ-selfloop q q can be viewed as the special 0-step-length transition q q. The step-length of S is defined as the maximum of the step-lengths of the symbolic transitions in S. The symbolic transition q q represents the set of concrete transitions {q q | ⟨ a_1, a_2, …, a_k⟩∈φ}, where σ⟨ a_1, a_2, …, a_k⟩ is the parameterized event of the concrete transition. For example, suppose X={0, 1, 2}, then the symbolic transition q q denotes the set of concrete transitions {q q, q q, q q, q q}. According to Definitions <ref> and <ref>, EP-EFAs model is just a special type of EFAs model without state parameters. This makes it impossible to keep information in the states, and thus inevitably limits the expressiveness of EP-EFAs model. The definitions of the parameterized string, data string and flat data string in EP-EFAs model are the same as those in EFAs model. Given an EP-EFA S = (Q, Σ, X, Q_0, Q_m, T ). If there exists a series of concrete transitions q_i q_i+1 where these v_i are parameterized events, i ∈ [1:n], n ≥ 1, we define the combined concrete transition as the path q_1 q_n+1 where u = v_1v_2… v_n. The language between the set of source states Q_1⊆ Q and the set of target states Q_2⊆ Q of the EP-EFA S is defined as follows. L_Q_1^Q_2(S)= { u | ∃ q_1∈ Q_1, q_2∈ Q_2, s.t. q_1 q_2}. The generated language and marked language by the EP-EFA S are, respectively, defined as L(S) = L_Q_0^Q(S) and L_m(S) = L_Q_0^Q_m(S). The data language and marked data language of the EP-EFA S are, respectively, defined as L_d(S) = ⋃_u ∈ L(S)𝔇(u) and L_md(S) = ⋃_u ∈ L_m(S)𝔇(u). The flat data language and flat marked data language of the EP-EFA S are, respectively, defined as L_fd(S) = ⋃_u ∈ L(S)𝔣𝔇(u) and L_fmd(S) = ⋃_u ∈ L_m(S)𝔣𝔇(u). An EP-EFA S and an EFA E are said to be data-equivalent, if L_fmd(S) =L_fmd(E). The EP-EFA S shown in Fig. <ref> also simulates the process of user registering in a web-site. In this EP-EFA S, the event σ carries a 3-tuple of event parameters ⟨ x_σ^1, x_σ^2, x_σ^3⟩, where the first element is for the user's nickname, and the second and third elements are both for the user's password. Hence, if x_σ^2 = x_σ^3, then the process goes to the final state q_1 and terminates successfully, otherwise it fails and stays in state q_0. It is easy to verify that the EP-EFA S is data-equivalent to the EFA E shown in Fig. <ref>, as the parameters consumed in the transitions q_0→ q_1→ q_2→ q_0 and q_0→ q_1→ q_2→ q_3 in E are exactly the same as those consumed in the transitions q_0→ q_0 and q_0→ q_1 in S, respectively. Examples (<ref>-<ref>) show that, although the state parameter is removed, EP-EFAs model still retains a fair expressiveness by reading multiple event parameters as needed in each transition. By the definitions, the models of EFAs and EP-EFAs allow for infinite state/event spaces, while FAs model only supports finite ones. This means the parametric models are more powerful than FAs model. In Examples (<ref>-<ref>), suppose |Ω| = M and X=Ω^≤ N, then |X| = ∑_i=1^N M^i.
To simulate the process of user registering in this finite space, FAs model needs at least (|X|+3) states and |X|*(|X|+2) transitions, as shown in Fig. <ref>. This suggests that even in a finite space, FAs model may be quite inefficient for certain complex systems when compared with the parametric models. There exists an EFA E that cannot be data-equivalent with any EP-EFA S_E. First of all, we construct an EFA E with X=Y=ℕ, as shown in Fig. <ref>. Obviously, E accepts even number of increasing natural numbers, i.e., the marked data string of E has the form of a_1a_2… a_2*n, where n ≥ 1, a_i+1 > a_i, i ∈ [1:(2*n-1)]. Secondly, we prove there does not exist a data-equivalent EP-EFA S_E for the EFA E by contradiction. Suppose there exists a data-equivalent EP-EFA S_E, where the number of the states is m and the step-length is K. Take a flat marked data string of S_E u=a_1a_2… a_2*n where 2*n > (m-1)*K. Suppose that u visits the sequence of states q_0→ q_1→…→ q_l in S_E, where q_0∈ Q_0 and q_l∈ Q_m. Since the step-length of S_E is K, we have l*K ≥ 2*n, and thus l>m-1. This means that there exist two states q_i, q_j in the sequence of visited states of u such that q_i = q_j and 0 ≤ i < j ≤ l, as the EP-EFA S_E has m states. Suppose that the parameters consumed from state q_i to state q_j are a_ia_i+1… a_j, 1 ≤i < j≤ 2*n. Obviously, the flat data string u = a_1… a_i-1 a_ia_i+1… a_j a_ia_i+1… a_j a_j+1… a_2*n also can be marked by S_E. Since S_E and E are data-equivalent, u is marked by E. However, it is not true, as a_j > a_i and û is not a sequence of increasing numbers. Hence, the contradiction is generated, which implies there does not exist a data-equivalent EP-EFA S_E for the EFA E shown in Fig. <ref>. For any EP-EFA S, there always exists a data-equivalent EFA E_S. It is straightforward by Definitions <ref> and <ref>. Propositions <ref> and <ref> imply that EFAs model is more expressive than EP-EFAs model. The models of EFAs and EP-EFAs extend FAs to an infinite model by means of the symbolic transitions carrying predicates over the infinite parameter space. With the help of the satisfiability modulo theories (SMT) solvers (e.g., Z3, Open SMT, MathSAT5, etc., see <cit.> for details), the data types that can be efficiently processed by parametric models include real/integer, bit vectors, arrays, difference logic, inductive data, etc. Therefore, the models of EFAs and EP-EFAs are quite expressive tools for DESs. A longer step-length adds the expressiveness of EP-EFAs. As evidence, the k-step-length transition q_1 q_2 has no equivalent series of transitions with a lower step-length. However, for the EFAs model, a longer step-length does not add its expressiveness, as the state parameter can be used to store the necessary information during the transitions. The subsequent proposition presents a formal demonstration for this fact. For any m-step-length EFA E_m, m > 1, there always exists a data-equivalent 1-step-length EFA E_1. Given any m-step-length EFA E_m, we construct the data-equivalent 1-step-length EFA E_1 as follows. For each symbolic transition (q,y_q ) (q, y_q) of E_m, 1 < k ≤ m, we add (k-1) new states: q^i, i∈ [1:k-1], and k events σ^j, j ∈ [1:k], and then construct a chain of k 1-step-length transitions q^j-1 q^j where q^0 = q and q^k = q to replace the transition q q. 
Specifically, the update functions are defined as follows: ξ^1def= [ y_q^1(1) ← x_σ^1^1 ] and for j ∈ [2:k-1], ξ^jdef= [ y_q^j(1) ← y_q^j-1(1); …; y_q^j(j-1) ← y_q^j-1(j-1); y_q^j(j) ← x_σ^j^1 ] where y_q^j(i) means the i^th element of the state parameter of q^j, and ξ^kdef=ξ(x_σ^1/y_q(1), …, x_σ^k-1/y_q(k-1), x_σ^k/x_σ^k^1), where the “A/B" denotes the substituting A by B in function ξ. The predicates are as follows: φ^i = ⊤ for i∈ [1:k-1], and φ^k = φ(x_σ^1/y_q(1), …, x_σ^k-1/y_q(k-1), x_σ^k/x_σ^k^1) where the “A/B" denotes the substituting A by B in the predicate. Obviously, φ^k is a (Y × X)-predicate where Y = X^k-1. The intuitive meaning of these new transitions is as follows. Each new transition is responsible for transmitting state parameters from source state to target state and storing one event parameter to target state parameter; and the first (k-1) transitions are guarded with ⊤ and the last one is guarded with φ^k that is equivalent with φ. In addition, ξ^k is also equivalent with ξ. This means that for any k event parameters, the transition (q,y_q ) (q, y_q) is fired if and only if the chain of transitions is fired, and meanwhile the parameter of the final state q is also updated in the same way. Thus, by replacing each transition of E_m with such a chain of transitions, we can obtain the data-equivalent 1-step-length E_1. § PROBLEM FORMULATION AND ASSUMPTIONS In this section, we present some assumptions and then formulate the problems discussed in this paper. In rest of this paper, we focus on the problem of opacity analysis for a parametric DES modeled by an EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ) or an EP-EFA S = (Q, Σ, X, Q_0, Q_m, T ). In the following, the parametric DES is denoted by G, and the notation L_Q_1^Q_2(G) is the language calculated by Equation (<ref>) when G is an EFA, and by Equation (<ref>) when G is an EP-EFA. The basic assumptions in this paper are as follows. * Assumption 1: The secret and non-secret behavior of the parametric system can be coded into its state space. We consider the following two cases: 1) the secret and non-secret behavior are the sets of data strings arriving in the given secret states Q_s⊆ Q and non-secret states Q_ns⊆ Q, respectively, and Definitions <ref> and <ref> are of this case; 2) the secret and non-secret behavior are the sets of data strings originating from the given secret initial states Q_s⊆ Q_0 and non-secret initial states Q_ns⊆ Q_0, respectively, and Definition <ref> is of this case. * Assumption 2: The intruder knows the complete structure of the parametric DES G, and he can observe the data exchanges between the system and its environment during the interactions (i.e., data language L_d(G)) through a static observation function θ. The observation function θ is defined as: for any data string d = ⟨ a_1^1a_1^2… a_1^k_1⟩ ⟨ a_2^1a_2^2… a_2^k_2⟩…⟨ a_j^1a_j^2… a_j^k_j⟩∈ L_d(G), θ(d) = ⟨θ(a_1^1)θ(a_1^2) …θ(a_1^k_1) ⟩⟨θ(a_2^1)θ(a_2^2) …θ(a_2^k_2) ⟩ …⟨θ(a_j^1)θ(a_j^2) …θ(a_j^k_j) ⟩ where θ(a_m^n) = a_m^n, if ϑ(a_m^n) holds, ϵ, otherwise, and ϑ is the X-predicate describing the observable condition for data elements, and the empty observation “⟨ϵ⟩" in θ(d) can be removed directly. The set of observations for G is defined as Θ(G) = ⋃_u ∈ L(G)θ(𝔇(u)). An observable unit of the observation w, w ∈Θ(G), is the substring of form “⟨ a_ia_i+1… a_i+k⟩", k ≥ 0, in w. Let |w|_u denote the number of observable units in w. 
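To make the observation function concrete, the following minimal Python sketch (not part of the original development) projects a data string onto its observable units, unit by unit. The encoding of a data string as a list of parameter tuples and the particular observability predicate ϑ(x) = [x ≥ 5] are illustrative assumptions only.

```python
# A minimal sketch of the static observation function theta from Assumption 2.
# Assumptions (not fixed by the text): a data string is encoded as a list of
# parameter tuples, and the observability predicate vartheta is "x >= 5".

def theta(data_string, observable=lambda x: x >= 5):
    """Keep only the observable parameters of each unit; drop empty units."""
    observation = []
    for unit in data_string:                  # one tuple of event parameters
        visible = tuple(a for a in unit if observable(a))
        if visible:                           # the empty observation <eps> is removed
            observation.append(visible)
    return observation

# <1,7><3> is observed as <7>, while <7><8,3> is observed as <7><8>:
print(theta([(1, 7), (3,)]))   # [(7,)]
print(theta([(7,), (8, 3)]))   # [(7,), (8,)]
```

Under this encoding, two observations are equal exactly when they consist of the same sequence of observable units, which is the comparison used in the opacity definitions below.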
According to the definition, the observations such as “⟨ a_1a_2⟩⟨ a_3⟩" and “⟨ a_1⟩⟨ a_2a_3⟩" are considered to be different. Two identical observations have the same number of observable units and the corresponding units are equal to each other. Therefore, an observable unit is regarded as a minimal information structure acquired by the intruder, and this paper considers the data language rather than the flat data language in opacity analysis. The main reasons for this treatment are as follows. 1) Since each parameter tuple is transmitted between the system and its environment as a whole and the observable unit is the observable part of parameter tuple, the intruder will obtain each observable unit as a whole. 2) Similar to the literature of opacity analysis <cit.>-<cit.>, this paper also assumes that intruders have sufficient memory and computation capabilities to keep the history of the observations and update the state estimation for the system instantaneously by their latest observations. Based on these assumptions, we present three opacity properties for parametric DESs in the following. (current-state opacity) Given the parametric DES G with the set of secret states Q_s⊆ Q, the set of non-secret states Q_ns⊆ Q, and the observation function θ. G is said to be current-state opaque w.r.t. Q_s, Q_ns and θ, if (∀ u ∈ L^Q_s_Q_0(G)) (∃ v ∈ L^Q_ns_Q_0(G)) θ(𝔇(u))=θ(𝔇(v)). (initial-state opacity) Given the parametric DES G with the set of secret initial states Q_s⊆ Q_0, the set of non-secret initial states Q_ns⊆ Q_0, and the observation function θ. G is said to be initial-state opaque w.r.t. Q_s, Q_ns and θ, if (∀ u ∈ L^Q_Q_s(G)) (∃ v ∈ L^Q_Q_ns(G)) θ(𝔇(u))=θ(𝔇(v)). (infinite-step opacity) Given the parametric DES G with the set of secret states Q_s⊆ Q, the set of non-secret states Q_ns⊆ Q, and observation function θ. G is said to be infinite-step opaque w.r.t. Q_s, Q_ns and θ, if (∀ uu∈ L_Q_0^Q(G): u ∈ L^Q_s_Q_0(G)) (∃ vv∈ L_Q_0^Q(G) : v ∈ L^Q_ns_Q_0(G)) [θ(𝔇(u))=θ(𝔇(v)) ∧θ(𝔇(u))=θ(𝔇(v))]. The opacity properties of parametric DESs presented in Definitions <ref>, <ref>, <ref> have the same intuitive meanings as their counterparts of the classic DESs. We would investigate the opacity properties for EFAs and EP-EFAs in Sections IV and V, respectively. § UNDECIDABILITY OF OPACITY IN EFAS In this section, we prove that the opacity properties presented in Definitions <ref>, <ref>, <ref> for EFAs are all undecidable in general. The main idea of the proof is reducing the halting problem of two-counter machines to the verification of the opacity properties. A counter machine is an abstract machine used to model computation in formal logic and theoretical computer science. A counter machine consists of several registers, each of which only can store an integer number, and a set of arithmetic operations and control instructions. Minsky introduced a type of counter machines including two registers r_j, j ∈{1, 2}, and three instructions: INC(r_j), DEC(r_j) and JZ(r_j, z) with the semantics of r_j← r_j + 1, r_j← r_j - 1, and goto(z) if r_j=0, respectively <cit.>. This kind of machines is usually called Two-Counter Machines (2CMs) in the literature. 2CMs are Turing equivalent <cit.>. It is well known that the halting problem for Turing machines is undecidable. Therefore, by Lemma 1, we have the following result. The halting problem of 2CMs is undecidable. 
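As a concrete illustration of these instruction semantics (for intuition only; the undecidability argument below does not depend on it), the following Python sketch interprets a 2CM program. The tuple encoding of instructions and the step bound are our own assumptions, not part of Minsky's formulation.

```python
# Illustrative 2CM interpreter. Assumed encoding: a program is a list of
# ('INC', j), ('DEC', j) or ('JZ', j, z) with j in {1, 2} and z a 1-based
# instruction index; execution starts from configuration (r1, r2, c) = (0, 0, 1).

def run_2cm(program, max_steps=10_000):
    """Return the final configuration (r1, r2, c) if the machine halts within
    max_steps (i.e., the counter c leaves the program), otherwise None."""
    r = [0, 0]   # the two registers r_1 and r_2
    c = 1        # program counter (1-based)
    for _ in range(max_steps):
        if not (1 <= c <= len(program)):
            return (r[0], r[1], c)            # halted
        op = program[c - 1]
        if op[0] == 'INC':
            r[op[1] - 1] += 1; c += 1
        elif op[0] == 'DEC':                  # applied only to positive registers here
            r[op[1] - 1] -= 1; c += 1
        elif op[0] == 'JZ':
            c = op[2] if r[op[1] - 1] == 0 else c + 1
    return None  # undecided within the bound

# Set r_1 to 2, then count it down to 0 and halt.
# ('JZ', 2, 3) acts as an unconditional jump back since r_2 stays 0.
prog = [('INC', 1), ('INC', 1), ('JZ', 1, 6), ('DEC', 1), ('JZ', 2, 3)]
print(run_2cm(prog))   # (0, 0, 6)
```

No fixed step bound decides halting for all programs, which is exactly the source of undecidability exploited by the reductions below.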
Obviously, a configuration of a 2CM with program P can be described as a triple (r_1,r_2, c ) ∈ℕ^3, where r_1 and r_2 keep the values of the first and second registers, respectively, and c keeps the value of the program counter. Let x(j) denote the j^th entry of the configuration x∈ℕ^3, j∈ [1:3]. Let |P| denote the number of instructions in program P. Firstly, we formulate the (ℕ^3×ℕ^3)-predicate φ^step that characterizes the configuration evolution of the 2CM with program P after executing a single instruction, where the first and second elements refer to the current and subsequent configurations, respectively. Let φ_i be the (ℕ^3×ℕ^3)-predicate describing the relation of the configurations before and after the execution of the i^th instruction of program P. We formulate φ_i according to the type of the i^th instruction as follows. * If the i^th instruction is INC(r_j), j ∈{1,2}, then φ_i(y, x) def= [(x(j) = y(j) + 1) ∧ (x(3-j) = y(3-j)) ∧ (y(3) = i ) ∧ (x(3) = i + 1)], where the first clause means that the j^th register increases by 1, the second clause means the other register remains unchanged, and the third and fourth clauses mean that the program is executing the i^th instruction and the next instruction to be executed is the (i+1)^th one, respectively. * If the i^th instruction is DEC(r_j), j ∈{1,2}, then φ_i(y, x) def= [(x(j) = y(j) - 1) ∧ (x(3-j) = y(3-j)) ∧ (y(3) = i ) ∧ (x(3) = i + 1)]. The intuitive meaning of this equation is similar to that of the previous one. * If the i^th instruction is JZ(r_j,z), j ∈{1,2}, then φ_i(y, x) def= [(x(1) = y(1) ) ∧ (x(2) = y(2)) ∧ (y(3) = i ) ∧ (x(3) = ( y(j) = 0 ? z: i+1 ))], where the first and second clauses mean that both the registers remain unchanged, the third clause means that the program is executing the i^th instruction, and the last clause adopts a Java-language-style expression to describe the if-then-else term, i.e., if the register r_j equals 0, then the next instruction to be executed is the z^th one, otherwise the (i+1)^th one. Hence, we obtain the special predicate φ^step for program P as follows. φ^step(y, x) def=⋁_i ∈ [1:|P|]φ_i(y, x). The (ℕ^3×ℕ^3)-predicate φ^eq describing whether two configurations are equal to each other or not is defined as follows. φ^eq(y, x) def=⋀_i∈{1,2,3}[(x(i) = y(i) )] Obviously, φ^step and φ^eq are both predicates in the Boolean algebra 𝒜=(ℕ^3×ℕ^3, Ψ, ∙). For the specific program P, we denote by ℕ^3-predicates φ^ini and φ^fin its initial configuration and final configuration, respectively. Based on the above discussions, we prove that the current-state opacity of EFAs is undecidable by constructing a special parametric DES E_P w.r.t. program P and reducing the halting problem of P to the verification of current-state opacity of E_P. The current-state opacity of EFAs is undecidable in general. Firstly, we construct the EFA E_P = { Q={ q_0,q_1, q_2,q_3}, Σ = {σ_1, σ_2, σ_3, σ_4}, X = ℕ^3, Q_0={q_0}, Y = ℕ^3, Y_0=φ^ini , R } w.r.t. a 2CM with program P (shown in Fig. <ref>). The predicates of φ^step and φ^eq are defined in Equations (<ref>) and (<ref>), respectively. The predicates of φ^ini and φ^fin, as the logic characterization for the initial and final configurations of program P, respectively, are X-predicates and also can be regarded as special (Y× X)-predicates where the first variable (i.e., state parameter) has no influence on the predicates.
In the symbolic transitions, the update function ξ^sto just stores the event parameter to the target state as parameter, e.g., ξ^sto in the transition from q_0 to q_1 is defined as: y_q_1← x_σ_1^1. Let the set of secret states be Q_s = { q_0,q_1,q_2} and the set of non-secret states be Q_ns = { q_3}. Consider the observation function θ: ∀ u ∈ (ℕ^3)^*, θ(u) = ϵ. According to Definition <ref>, the parametric DES E_P is current-state opaque if and only if the non-secret behavior is non-empty, i.e., the state q_3 is reachable from the initial state q_0. According to Fig. <ref>, the data strings (i.e., the sequence of configurations) that can reach the state q_3 from the initial state q_0 have the form of v = a_1a_2… a_2*na_2*n+1, n ≥ 1, and q_3 is reachable if and only if v satisfies the following formulae: a_1∈φ^ini, a_2*n+1∈φ^fin, a_2*j+1∉φ^fin, j ∈ [1:n-1], and for i ∈ [1:n], (a_2*i-1,a_2*i) ∈φ^step and (a_2*i,a_2*i+1) ∈φ^eq. For such sequence v satisfying aforementioned formulae, there exists a one-to-one corresponding sequence w=a_1a_2a_4 … a_2*(n-1)a_2*n, n ≥ 1, where a_1 and a_2*n are, respectively, the initial and final configurations, and each pair of adjacent configurations satisfies the predicate φ^step. This means that w is exactly the evolution sequence of configurations during the execution of program P, i.e., the 2CM with program P halts if and only if there exists such sequence w. By Lemma <ref>, the halting problem of 2CMs is undecidable, which implies the undecidability of the existence of such w, and further implies the undecidability of the existence of such v. Hence, the reachability of state q_3 in E_P is undecidable, and so is the current-state opacity of E_P. Therefore, the current-state opacity of EFAs is undecidable in general. The initial-state opacity of EFAs is undecidable in general. First of all, we construct the EFA E_P = { Q={ q_0, …,q_4}, Σ = {σ_1, … ,σ_5}, X=ℕ^3, Q_0={q_0,q_4}, Y=ℕ^3, Y_0=φ^ini , R } for a 2CM with program P. In EFA E_P, the predicates φ^ini, φ^eq, φ^step and φ^fin, and update function ξ^sto have the same definitions as themselves in E_P (shown in Fig. <ref>). Let the set of secret initial states be Q_s = { q_4} and the set of non-secret initial states be Q_ns = { q_0}. Consider the observation function θ: θ(u) = u, u ∈ (ℕ^3)^*. Under these settings, we have the following fact. ⋃_v ∈ L_Q_s^Q(E_P)θ(𝔇(v)) = ⋃_v ∈ L_{q_4}^{q_4}(E_P)θ(𝔇(v)) = (ℕ^3)^*. That is, the set of observations for secret behavior is the universal set (ℕ^3)^*. According to Definition <ref>, E_P is initial-state opaque if and only if the set of the observations for non-secret behavior is also the universal set (ℕ^3)^*, i.e., ⋃_u ∈ L_Q_ns^Q(E_P)θ(𝔇(u)) = ⋃_u ∈ L_{q_0}^{q_0,q_1,q_2,q_3}(E_P)θ(𝔇(u)) = (ℕ^3)^*. In order to investigate the validness of Equation (<ref>), we construct a new EFA E_P from E_P by removing state q_4 and its corresponding transitions, and adding a state q_5 and two corresponding transitions (i.e., the transitions denoted by dotted-arrow in Fig. <ref>). In the new EFA E_P, we have the fact that the disjunction of the predicates in the transitions originating the same state is equal to the true predicate ⊤, e.g., for state q_2, (φ^fin∧φ^eq ) ∨ (φ^fin∧φ^eq) ∨ ( φ^eq) = ⊤. Hence, we have the fact that ⋃_u ∈ L_{q_0}^{q_0,q_1,q_2,q_3,q_5}(E_P)θ(𝔇(u)) = (ℕ^3)^*. According to Equation (<ref>), it is obvious that Equation (<ref>) holds if and only if the state q_5 is not reachable in E_P. 
Notice that the reachability of q_5 in E_P is identical to the reachability of q_3 in E_P (shown in Fig. <ref>), which has been proved to be undecidable in Theorem <ref>. Hence, the validness of Equation (<ref>) is undecidable, and so is the initial-state opacity of EFA E_P. Therefore, the initial-state opacity of EFAs is undecidable in general. The infinite-step opacity of EFAs is undecidable in general. We consider the same EFA E_P with the same secret states, non-secret states and the observation function as that in Theorem <ref>. By Definition <ref>, E_P is infinite-step opaque if and only if the state q_3 is reachable from the initial state q_0, which has been proved to be undecidable in Theorem <ref>. Therefore, infinite-step opacity of E_P is undecidable, and infinite-step opacity of EFAs is undecidable in general. As mentioned before, EFAs model is a quite powerful tool to simulate the interactions between a system and its environment. However, the coexistence of event and state parameters in the predicates complicates this model and make the properties of opacity undecidable. Hence, it is necessary to consider the EP-EFAs model where the state parameter is removed. § OPACITY OF EP-EFAS In this section, we investigate the current-state opacity, initial-state opacity and infinite-step opacity of EP-EFAs. We present the verification algorithms for these opacity properties firstly, and then analyze the complexity of these algorithms. §.§ Current-State Opacity of EP-EFAs In fact, Definition <ref> implies that the current-state opacity holds if and only if for any observation, the intruder cannot determine the system is in the secret states. For the convenience of demonstrating this issue, we present the following notion. Given the EP-EFA S= (Q, Σ, X, Q_0, T), the state estimation function Est^S: Θ(S) → 2^Q is defined as follows: for any observation w ∈Θ(S), Est^S(w) = { q ∈ Q | ∃ q_0∈ Q_0, u ∈ L(S), s.t. q_0 q ∧ w = θ(𝔇(u)) }. For classic DESs, the state estimations can be calculated by constructing a special automaton: observer <cit.>. Inspired by this idea, we present an algorithm (Algorithm <ref>) to construct the symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs} for the EP-EFA S= (Q, Σ, X, Q_0, T). The symbolic observer Obs(S) is a special EP-EFA without event tags. In the following, we would like to prove that the verification of current-state opacity for the EP-EFA S can be realized by means of its symbolic observer Obs(S). Firstly, we present three necessary Lemmas. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. S is current-state opaque w.r.t. Q_s, Q_ns and θ, if and only if for any observation w ∈Θ(S), Est^S(w) ∩ Q_s≠∅ ⇒ Est^S(w) ∩ Q_ns≠∅. (⇐) Given any u ∈ L_Q_0^Q_s(S). Let w = θ(𝔇(u)). This implies Est^S(w) ∩ Q_s≠∅. Thus, we have Est^S(w) ∩ Q_ns≠∅, which means that there exist q_0∈ Q_0, a non-secret state q∈ Q_ns and a parameterized string v, such that q_0q, and w = θ(𝔇(v)). This further implies that v ∈ L_Q_0^Q_ns(S) and θ(𝔇(u)) = θ(𝔇(v)). According to Definition <ref>, S is current-state opaque. (⇒) Given any observation w ∈Θ(S) satisfying Est^S(w) ∩ Q_s≠∅. Est^S(w) ∩ Q_s≠∅ means there exists a parameterized string u ∈ L_Q_0^Q_s(S) such that w = θ(𝔇(u)). Since S is current-state opaque, there exists v ∈ L_Q_0^Q_ns(S) such that θ(𝔇(v)) = θ(𝔇(u)) = w. This means that there exist q_0∈ Q_0 and q∈ Q_ns such that q_0q, which implies that q∈ Est^S(w). Thus Est^S(w) ∩ Q_ns≠∅. 
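The check suggested by this lemma can be phrased directly over a family of state estimates. The short Python sketch below is only a schematic rendering of that condition; computing the estimates themselves (the states of the symbolic observer) is the job of Algorithm <ref> and is not reproduced here, and the toy sets of states in the usage example are hypothetical.

```python
# Schematic check of the condition in the lemma above: the system is
# current-state opaque iff every reachable state estimate that meets the
# secret states also meets the non-secret states.

def is_current_state_opaque(estimates, secret, non_secret):
    """estimates: iterable of sets of states; secret/non_secret: sets of states."""
    return all(not (est & secret) or bool(est & non_secret) for est in estimates)

# Hypothetical toy estimates, purely for illustration:
ests = [{'q0', 'q1'}, {'q2', 'q4'}, {'q3'}]
print(is_current_state_opaque(ests, secret={'q2'}, non_secret={'q4'}))                # True
print(is_current_state_opaque(ests + [{'q2'}], secret={'q2'}, non_secret={'q4'}))     # False
```

The same one-pass test is applied to the states of the symbolic observer later in this subsection, once those states are shown to coincide with the reachable estimates.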
Lemma <ref> implies that the verification of current-state opacity can be realized by going through all the possible state estimations. The following two Lemmas further prove that the states of the symbolic observer are exactly all the state estimations. Given an EP-EFA S = (Q, Σ, X, Q_0, T) and its symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs} constructed by Algorithm <ref>. For any observation w ∈Θ(S), Est^S(w) is the state reachable from q^obs_0 by w in Obs(S). Firstly, we claim that Obs(S) constructed by Algorithm <ref> is deterministic, i.e., given an observation, there exists only one reachable state in Q^obs. This is because Equation (<ref>) implies that if idx1 ≠ idx2, then ψ_idx1∧ψ_idx2 =, and thus no observation unit can simultaneously satisfy two different symbolic transitions originating from the same state q^obs of Obs(S). Secondly, we prove this Lemma by induction on the number of observation units in w. Let |w|_u = n. The base case is n = 0, i.e., w = ϵ. It is sufficient to show Est^S(ϵ) = q^obs_0. If q ∈ Q_0, obviously we have q ∈ Est^S(ϵ) and q ∈ q^obs_0. The remainder is to show Est^S(ϵ) \ Q_0 = q^obs_0\ Q_0. According to Equations (<ref>-<ref>), q q∈T means there exists a symbolic transition q q in S, such that φ holds for certain k unobservable event parameters or k=0. Therefore, by Equation (<ref>), a state q_n∈ q^obs_0\ Q_0, if and only if there exist a sequence of transitions q_0q_1…q_n in S, q_0∈ Q_0, where each predicate φ_i holds for k_i, i ∈ [1:n], unobservable event parameters or k_i = 0. This is equivalent to saying that there exists a parameterized string u, θ(𝔇(u)) = ϵ, and q_0 q_n by Definition <ref>, which also means that q_n∈ Est^S(ϵ) by Equation (<ref>). Thus the base case holds. The induction hypothesis is that for all observation w, |w|_u≤ n, Est^S(w) is reachable by w in Obs(S). We need to show that for any observation unit w = ⟨ a_1… a_k⟩, k ≥ 1, such that ww∈Θ(S), Est^S(ww) is reached by ww from q^obs_0 in Obs(S). This is equivalent to show that Est^S(ww) is reachable by w from state Est^S(w) due to the fact that the observer Obs(S) is deterministic. Since the observation function θ is static, we can reformulate Est^S(ww) as follows. Est^S(ww) = {q | q q∧ q ∈ Est^S(w) ∧w = θ( 𝔇(u)) }. Taking Est^S(w) as the q^obs in Equation (<ref>), then T^k_Est^S(w) is the set of observable transitions that originate from one of the states in Est^S(w) and contain k observable parameters. Suppose idx⊆ [1:|T^k_Est^S(w)|] is the only nonempty index set such that the observation unit w = ⟨ a_1… a_k⟩ satisfies ψ_idx (the existence follows from the fact that ww∈Θ(S) and the uniqueness follows from Equation (<ref>)). According to Equations (<ref>,<ref>), we obtain Est^S(ww) = q^obs. By Algorithm <ref>, we have Est^S(w) q^obs∈ T^obs, and thus Est^S(w) Est^S(ww) ∈ T^obs, which implies Est^S(ww) is reached from q^obs_0 by ww in Obs(S). This completes the proof of the induction step. Given an EP-EFA S = (Q, Σ,X, Q_0, T) and its symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs}. We have L(Obs(S)) = Θ(S). We prove this Lemma by induction on the number of observation units in w ∈ L(Obs(S)). Let |w|_u = n. The base case is n =0, i.e., w = ϵ. Obviously ϵ∈ L(Obs(S)) and ϵ∈Θ(S). Thus the base case holds. The induction hypothesis is that w ∈ L(Obs(S)) ⇔ w ∈Θ(S) holds for any observation w, |w|_u≤ n. Then we need to show for each observation unit w=⟨ a_1,…,a_k⟩, ww∈ L(Obs(S)) ⇔ ww∈Θ(S). Suppose q^obs is reached by w from q^obs_0 in Obs(S). 
By Lemma (<ref>) and Equation (<ref>), for each q_i∈ q^obs, there exists an initial state q_0^i∈ Q_0 such that q_0^i q_i, θ(𝔇(u_i)) = w. By Equations (<ref>-<ref>), ww∈ L(Obs(S)) holds if and only if there exists a nonempty index set idx such that ψ_idx holds for w. This is equivalent to saying that there exists at least an observable transition (t_i=q_iq_i) ∈T_q^obs^k, q_i∈ q^obs, w∈φ_i, i ∈idx, which further means there exists a parameterized event u_i such that q_iq_i and θ(𝔇(u_i)) = w by Equations (<ref>, <ref>). Therefore, ww∈ L(Obs(S)) holds if and only if there exists u_i such that q^i_0q_i, θ(𝔇(u_i)) = w, θ(𝔇(u_i)) = w, which means ww∈θ(𝔇(u_iu_i)) ∈Θ(S). This completes the proof of the induction step. Lemma <ref> implies that the state estimation for each observation is contained in the state space of the symbolic observer Obs(S). Lemma <ref> further implies that only the observations can reach the states of Obs(S). Hence, the state space of Obs(S) are exactly all the state estimations of S. Therefore, by Lemmas (<ref>, <ref>, <ref>), we have the following theorem. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. Let Obs(S) = { Q^obs, q^obs_0, T^obs} be the symbolic observer constructed by Algorithm <ref>. S is current-state opaque w.r.t. Q_s, Q_ns and θ, if and only if for any q^obs∈ Q^obs, q^obs∩ Q_s≠∅⇒ q^obs∩ Q_ns≠∅. The verification of current-state opacity and the construction of the symbolic observer (Algorithm <ref>) have the same complexity, as checking the validness for Equation (<ref>) can be finished during the construction of Obs(S). Suppose the EP-EFA S= (Q, Σ, X, Q_0, T) with K step-length has N states and M symbolic transitions. Assume that g(z) is the cost of checking satisfiability of the predicate with z free variables in the Boolean algebra. In Step 1) of Algorithm <ref>, we have |T| ≤ M*(K+1), and for each symbolic transition with l step-length, l+1 predicates are checked for satisfiability. Thus, the complexity of Step 1) is at most M*(K+1)*g(K). In Step 2), for each T^k_q^obs, there are 2^|T^k_q^obs|-1 combined predicates that need to be checked for satisfiability. Hence, there are at most ∑_q^obs∈ Q^obs∑_k=1^K(2^|T^k_q^obs|-1) predicates are checked for satisfiability. For a given q^obs, we have ∑_k=1^K|T^k_q^obs| ≤ |T|, and by this equation, we can prove ∑_k=1^K (2^|T^k_q^obs|-1) < 2^|T|≤ 2^M*(K+1). Since |Q^obs| ≤ 2^N, the complexity of Step 2) of Algorithm <ref> is at most g(K) * 2^N*2^M*(K+1). Therefore, the complexity of the verification of current-state opacity is g(K) * 2^N+M*K. As aforementioned, the EP-EFAs model can address many complex data and operations via the symbolic transitions. However, for the simplicity to demonstrate the obtained results, the following illustrative examples only consider integer arithmetic. Consider an EP-EFA S with X = ℕ shown in Fig. <ref>, where the set of secret states Q_s = {q_2} and the set of non-secret states Q_ns = Q\ Q_s. Suppose that the observation function θ is obtained by the X-predicate ϑ(x) def= [x ≥ 5 ]. Firstly, we construct the observable transitions as follows. T_t_1= { q_0 q_1}. T_t_2= { q_0 q_3; q_0 q_3}. T_t_3= { q_1 q_2 ; q_1 q_2}. T_t_4= { q_3 q_4 ; q_3 q_4}. T_t_5= { q_2 q_2 ; q_2 q_2}. T_t_6= { q_4 q_4 ; q_4 q_4}. Secondly, we have q^obs_0 = {q_0, q_1}, and obtain the corresponding set as follows. T^2_q^obs_0 = { q_0 q_3 ; q_1 q_2}. T^1_q^obs_0 = { q_1 q_2; q_0 q_3}. 
For T^2_q^obs_0, the set of satisfiable combined predicates is as follows. Ψ(T^2_q^obs_0) = {ψ_{1} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_2≠ x_1 +1]; ψ_{1,2} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_2 = x_1 +1] }. Through the transitions guarded with ψ_{1} and ψ_{1,2}, the states { q_3, q_4} and {q_2, q_3, q_4} are, respectively, generated and put into Q^obs. In addition, the corresponding symbolic transitions are put into T^obs. For T^1_q^obs_0, the set of satisfiable combined predicates is as follows. Ψ(T^1_q^obs_0) = {ψ_{1} = [x_1 = 5 ]; ψ_{2} = [x_1 > 5 ]}. Through the transitions guarded with ψ_{1} and ψ_{2}, the states { q_2}, { q_3, q_4} are, respectively, generated and the former is put into Q^obs. Meanwhile, the corresponding symbolic transitions are put into T^obs. For the other unvisited states in Q^obs, we proceed in the same way as for state q^obs_0. Finally, we obtain the symbolic observer, as shown in Fig. <ref>, where Q^obs = {{ q_0,q_1}, {q_2}, { q_3,q_4}, { q_2,q_3,q_4}, {q_4}, { q_2,q_4}}. For the state {q_2}∈ Q^obs, we have { q_2}∩ Q_s≠∅ and { q_2}∩ Q_ns = ∅. Hence, by Theorem <ref>, S is not current-state opaque w.r.t. {q_2}, {q_0,q_1,q_3,q_4} and θ. §.§ Initial-State Opacity of EP-EFAs The secret behavior is coded in opposite directions in current-state opacity and initial-state opacity. Exploiting this property, we transform the verification of initial-state opacity into the verification of current-state opacity for EP-EFAs. Firstly, we define the reverse operations for parameterized strings, data strings and symbolic transitions. Given a parameterized string u = σ_1⟨ a_1^1, a_1^2, …, a_1^k_1⟩ σ_2⟨ a_2^1, a_2^2, …, a_2^k_2⟩ … σ_n⟨ a_n^1, a_n^2, …, a_n^k_n⟩, its reverse is u^rdef=σ_n⟨ a_n^k_n, …, a_n^2, a_n^1⟩ … σ_2⟨ a_2^k_2, …, a_2^2,a_2^1⟩ σ_1⟨ a_1^k_1, …, a_1^2, a_1^1⟩. For a data string d = 𝔇(u), the reverse of d is d^rdef=𝔇(u^r). For a symbolic transition t = q q, the reverse of t is defined as t^rdef=q q, where the predicate φ^r is obtained from φ by changing the name of the free variable x^i_σ to x^k+1 -i_σ, i ∈ [1:k], e.g., the reverse of the X^4-predicate φ = [x^1_σ > x^3_σ∧ x^2_σ≠ x^4_σ] is φ^rdef= [x^4_σ > x^2_σ∧ x^3_σ≠ x^1_σ]. By the aforementioned definitions, we have d ∈φ if and only if d^r∈φ^r. Given an EP-EFA S = (Q, Σ, X, Q_0, T). The reverse of S is defined as S^r = (Q, Σ, X, Q_0^r, T^r), where the set of initial states is Q_0^r = Q and the set of symbolic transitions is T^r = {t^r|t∈ T}. Definition <ref> generalizes the notion of reverse automata <cit.>, which has been widely used in many fields. In particular, by constructing the observer for reverse finite automata, Wu et al. <cit.> proposed an approach to verify the initial-state opacity for classic DESs. The following proposition follows from the definitions of the reverse operations, symbolic transitions, languages and observations. Given a transition t, a parameter tuple d, a parameterized string u, an observation w, and an EP-EFA S = (Q, Σ, X, Q_0, T) and its reverse S^r = (Q, Σ, X, Q_0^r, T^r). The following equations hold. 1) d ∈ prd(t) ⟺ d^r∈ prd(t^r), where prd(t) and prd(t^r) denote the predicates of t and t^r, respectively. 2) q q⟺q q. 3) u ∈ L_Q_1^Q_2(S) ⟺ u^r∈ L_Q_2^Q_1(S^r). 4) w = θ(𝔇(u)) ⟺ w^r = θ(𝔇(u^r)). Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret initial states Q_s⊆ Q_0, the set of non-secret initial states Q_ns⊆ Q_0, and the observation function θ. The reverse of S is S^r = (Q, Σ, X, Q_0^r, T^r) where Q_0^r = Q. S is initial-state opaque w.r.t.
Q_s, Q_ns and θ, if and only if S^r is current-state opaque w.r.t. Q_s, Q_ns and θ. By Definition <ref>, S is initial-state opaque w.r.t. Q_s, Q_ns and θ, if and only if (∀ u ∈ L^Q_Q_s(S)) (∃ v ∈ L^Q_Q_ns(S)) θ(𝔇(u))=θ(𝔇(v)). This is equivalent to (∀ u^r∈ L^Q_s_Q(S^r)) (∃ v^r∈ L^Q_ns_Q(S^r)) θ(𝔇(u^r)) = θ(𝔇(v^r)) by Proposition <ref>. This means S^r is current-state opaque w.r.t. Q_s, Q_ns and θ according to Definition <ref>. Theorem <ref> implies that the verification of initial-state opacity can be efficiently reduced to the verification of current-state opacity. Since the reverse EP-EFA S^r has the same scale as S, the complexity of the verification of initial-state opacity is also g(K) * 2^N+M*K. Consider the EP-EFA S shown in Fig. <ref> with the same observation function θ as that in Example <ref>. Suppose the set of initial states is Q_0 = {q_0, q_1, q_2}, and the secret initial states and non-secret initial states are Q_s = { q_2} and Q_ns = { q_0, q_1}, respectively. For the reverse EP-EFA S^r = (Q, Σ, X, Q, T^r), we construct the symbolic observer Obs(S^r) = { Q^obs_r, q^obs_0, T^obs_r} according to Algorithm <ref>. For the initial state of the observer q^obs_0 = Q, we obtain the subsets of observable transitions as follows. T^2_q^obs_0 = { q_3 q_0 ; q_2 q_1}. T^1_q^obs_0 = { q_2 q_1; q_3 q_0; q_4 q_3; q_2 q_2; q_4 q_4}. For T^2_q^obs_0, the set of satisfiable combined predicates is as follows. Ψ(T^2_q^obs_0) = {ψ_{1} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_1≠ x_2 +1]; ψ_{1,2} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_1 = x_2 +1] }. Through the transitions guarded with the above predicates, the states { q_0} and {q_0, q_1} are generated and put into Q^obs_r. For T^1_q^obs_0, the set of satisfiable combined predicates is as follows. Ψ(T^1_q^obs_0) = {ψ_{1,3,4,5} = [x_1 = 5 ]; ψ_{2,3,4,5} = [x_1 = 6 ]; ψ_{2,4,5} = [x_1≥ 7 ] }. Through the transitions guarded with the above predicates, the states { q_0, q_1, q_2, q_3,q_4} and { q_0, q_2, q_3, q_4} are generated and put into Q^obs_r. Similarly, we handle the other unvisited states in Q^obs_r, and obtain the symbolic observer Obs(S^r), shown in Fig. <ref>, where Q^obs_r = {{q_0,q_1,q_2,q_3,q_4}, {q_0,q_2,q_3,q_4}, {q_0,q_1}, {q_0}}. Notice that for each state q^obs_r∈ Q^obs_r, q^obs_r∩ Q_s≠∅ always implies q^obs_r∩ Q_ns≠∅, thus S^r is current-state opaque w.r.t. {q_2}, {q_0,q_1} and θ. By Theorem <ref>, S is initial-state opaque w.r.t. {q_2}, {q_0,q_1} and θ. §.§ Infinite-Step Opacity of EP-EFAs Yin et al. <cit.> presented an ingenious method to verify the infinite-step opacity of FAs by combining the observers of the obverse and reverse automata (called two-way observers in <cit.>). Following this idea, we have the following theorem. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. The reverse of S is S^r = (Q, Σ, X, Q_0^r, T^r) where Q_0^r = Q. S is infinite-step opaque w.r.t. Q_s, Q_ns and θ, if and only if (∀ w ∈Θ(S))(∀w^r∈Θ(S^r)) [Est^S(w) ∩ Est^S^r(w^r) ∩ Q_s ≠∅⇒ Est^S(w) ∩ Est^S^r(w^r) ∩ Q_ns≠∅]. By Equation (<ref>), Est^S(w) and Est^S^r(w^r) are {q∈ Q | q_0q∧ q_0∈ Q_0∧ w = θ(𝔇(u))} and {q∈ Q | q q∧ q ∈ Q ∧w^r = θ(𝔇(u^r))}, respectively, and the latter further implies that {q∈ Q | q q ∧ q ∈ Q ∧w = θ(𝔇(u))} by Proposition <ref>. Therefore, Est^S(w) ∩ Est^S^r(w^r) ∩ Q_s and Est^S(w) ∩ Est^S^r(w^r) ∩ Q_ns, respectively, are equivalent to A={ q∈ Q_s | q_0q∧q q ∧ q_0∈ Q_0∧ w = θ(𝔇(u)) ∧w = θ(𝔇(u)) }, and B={ q∈ Q_ns | q_0q∧q q ∧ q_0∈ Q_0∧ w = θ(𝔇(v)) ∧w = θ(𝔇(v)) }.
To complete the proof, it is sufficient to show the equivalence between Equations (<ref>) and (<ref>). Firstly, we prove that Equation (<ref>) implies Equation (<ref>). For any uu∈ L(S) satisfying u ∈ L_Q_0^Q_s(S), let w_1 = θ(𝔇(u)), w_1 = θ(𝔇(u)), and then we have w_1∈Θ(S) and w_1^r∈Θ(S^r). Taking the w_1 and w_1 here as the w and w in Equation (<ref>), then we have A ≠∅. By Equation (<ref>), we have B ≠∅. which implies that there exists vv∈ L(S) satisfying v∈ L_Q_0^Q_ns(S), such that θ(𝔇(u)) = θ(𝔇(v)) and θ(𝔇(u)) = θ(𝔇(v)). This means that Equation (<ref>) holds. Secondly, we prove Equation (<ref>) implies Equation (<ref>). For any w ∈Θ(S) and w^r∈Θ(S^r) satisfying A ≠∅, we have uu∈ L(S), such that u ∈ L^Q_s_Q_0(S), w = θ(𝔇(u)) and w = θ(𝔇(u)). By Equation (<ref>), there exists vv∈ L(S) such that v ∈ L^Q_ns_Q_0(S), θ(𝔇(u))=θ(𝔇(v)) and θ(𝔇(u))=θ(𝔇(v)), which implies B ≠∅. Therefore, Equation (<ref>) holds. According to Lemmas <ref>, <ref>, the state space of the observer of an EP-EFA are exactly the set of state estimations. By Theorem <ref>, the verification of infinite-step opacity can be realized by going through the state spaces of Obs(S) and Obs(S^r). Hence, we have the following algorithm (Algorithm <ref>) to verify the infinite-step opacity of EP-EFAs. As discussed before, the complexity of step 1) and step 2) of Algorithm <ref> is g(K) * 2^N+M*K. Since |Q^obs| ≤ 2^N and |Q^obs_r| ≤ 2^N, the complexity of Step 3) of Algorithm <ref> is 4^N. Therefore, the complexity of the verification of infinite-step opacity is g(K) * 2^N+M*K + 4^N. Consider the EP-EFA shown in Fig. <ref>, where the set of secret states Q_s ={ q_3} and the set of non-secret states Q_ns = { q_4}. The Obs(S) and Obs(S^r) have been calculated in Examples <ref> and <ref>, as shown in Fig. <ref> and Fig. <ref>, respectively. Notice that q^obs∩ q^obs_r∩{q_3}≠∅ implies q^obs∩ q^obs_r∩{q_4}≠∅ for all the pairs of states (q^obs,q^obs_r) ∈ Q^obs× Q^obs_r. Therefore, S is infinite-step opaque w.r.t. {q_3}, {q_4} and θ. § CONCLUSION In this paper, we have investigated two parametric DESs models, i.e., EFAs and EP-EFAs, and then established a preliminary opacity theory for parametric DESs, which lays a foundation to analyze the opacity for complex systems. The parametric DESs well extends the classic DESs by means of the symbolic transitions carrying predicates over the infinite parameter space. The parametric DESs can efficiently represent and process many real-world data with the help of SMT solvers. It has been illustrated that the coexistence of state and event parameters in the predicates not only enhances the parametric model but also complicates it. Specifically, we have proved that EFAs model is more expressive than EP-EFAs model, and also proved that the opacity properties of EFAs are undecidable in general. In addition, EP-EFAs model reduces the complexity of EFAs by removing the state parameter, which makes its opacity properties decidable. We have provided the verification algorithms for the current-state opacity, initial-state opacity and infinite-step opacity of EP-EFAs model, and discussed the complexity of these algorithms. One of the future work is to investigate the opacity enforcement of parametric DESs. Another work worthy of further investigation is to explore a more powerful parametric model whose opacity properties are still decidable. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China (Grant No. 
61876195), the Natural Science Foundation of Guangdong Province of China (Grant No. 2022A1515011136), the Special Projects in Key Fields Foundation of the Department of Education of Guangdong Province of China (Grant No. 2021ZDZX1043), Guangxi Science and Technology Project (No. Guike AD23026227) and the Project Improving the Basic Scientific Research Ability of Young and Middle-aged Teachers in Guangxi Universities of China (Grant No. 2021KY0591).

Weilin Deng received the B.S. and M.S. degrees in computer science from South China University of Technology, Guangzhou, China, in 2003 and 2008, respectively, and the Ph.D. degree in computer software and theory from Sun Yat-Sen University, Guangzhou, China, in 2016. From 2016 to 2019, he was an associate research fellow with Sun Yat-Sen University. He is currently an associate professor with Guangdong University of Finance. His current research interests include discrete-event systems, fuzzy/probabilistic systems and computations, and theoretical computer science. He is the author or co-author of more than 20 peer-review papers published in various academic journals and conferences, including IEEE TAC, IEEE TFS, IEEE CDC, INT J CONTROL and Information Sciences.

Daowen Qiu received the M.S. degree in mathematics from Jiangxi Normal University, Nanchang, China, in 1993 and the Ph.D. degree in mathematics from Sun Yat-Sen University, Guangzhou, China, in 2000. During 2000 and 2001, he was a Postdoctoral Researcher in computer science with Tsinghua University, Beijing, China. Since August 2002, he has been associated with Sun Yat-Sen University, and then a Full Professor of computer science in May 2004. His current research interests include quantum computing, discrete-event systems, fuzzy and probabilistic computation, and he has focused on models of quantum and probabilistic computation, quantum information. He is the author or co-author of more than 160 peer-review papers published in various academic journals and conferences, including Information and Computation, Artificial Intelligence, Journal of Computer and System Sciences, Theoretical Computer Science, IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B, IEEE TRANSACTIONS ON AUTOMATIC CONTROL, IEEE TRANSACTIONS ON FUZZY SYSTEMS, Physical Review A, Quantum Information and Computation, Journal of Physics A, and Science in China. He is an editor of Theoretical Computer Science.

Jingkai Yang received the B.S. and M.S. degrees in mathematics from Guangxi Normal University, Guilin, China, in 2006 and 2009, respectively, and the Ph.D. degree in computer science and technology from Sun Yat-Sen University, Guangzhou, China, in 2022. He is currently an associate professor with Yulin Normal University. His main research interests include opacity analysis, supervisory control and failure diagnosis of discrete-event systems.
http://arxiv.org/abs/2307.04101v1
20230709052851
Enhancing Building Semantic Segmentation Accuracy with Super Resolution and Deep Learning: Investigating the Impact of Spatial Resolution on Various Datasets
[ "Zhiling Guo", "Xiaodan Shi", "Haoran Zhang", "Dou Huang", "Xiaoya Song", "Jinyue Yan", "Ryosuke Shibasaki" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Enhancing Building Semantic Segmentation Accuracy with Super Resolution and Deep Learning: Investigating the Impact of Spatial Resolution on Various Datasets
Zhiling Guo^1,2, Xiaodan Shi^2, Haoran Zhang^2, Dou Huang^2, Xiaoya Song^3, Jinyue Yan^1, Ryosuke Shibasaki^2
^1Department of Building Environment and Energy Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
^2Center for Spatial Information Science, The University of Tokyo, Kashiwa, Japan
^3School of Architecture, Harbin Institute of Technology, Harbin, China
August 12, 2023

The development of remote sensing and deep learning techniques has enabled building semantic segmentation with high accuracy and efficiency. Despite their success in different tasks, discussions of the impact of spatial resolution on deep learning based building semantic segmentation remain inadequate, which makes choosing a cost-effective data source a challenge. To address this issue, in this study we convert remote sensing images from three study areas into multiple spatial resolutions by super-resolution and down-sampling. Two representative deep learning architectures, UNet and FPN, are then selected for model training and testing. The experimental results obtained from the three cities with the two deep learning models indicate that spatial resolution greatly influences building segmentation results, with the best cost-effectiveness found around 0.3 m, which we believe is an important insight for data selection and preparation.

§ INTRODUCTION
Building semantic segmentation from remote sensing imagery has become an important research topic in recent years <cit.>. With the rapid development of data acquisition systems and machine learning, the ever-expanding choice of very high resolution (VHR) datasets <cit.> and deep learning methods <cit.> expands the opportunities for researchers to conduct more accurate analyses. Although VHR imagery expresses finer information content of the landscape, it entails higher cost, longer processing time, and larger storage space. Thus, evaluating the technical and economic trade-offs associated with using imagery of different resolutions is essential. Previous studies have examined the impact of resolution on plant species <cit.>, land use <cit.>, and water <cit.> pattern recognition based on coarser-resolution imagery or conventional machine learning methods. In this study, we investigate the impact of spatial resolution on building semantic segmentation with VHR imagery and deep learning methods, as shown in figure <ref>. To compare segmentation accuracy under different resolutions, we created remote sensing imagery in a given study area at resolutions from 0.075 m to 2.4 m by super-resolution (SR) <cit.> and down-sampling processing. The experimental results obtained from three different study areas with two deep learning models reveal that the finest spatial resolution is not necessarily the best for building semantic segmentation tasks, and relatively low-cost imagery would be sufficient in many cases.
Thus, choosing a cost-effective spatial resolution for different scenarios is worth discussing. The main contributions of this study are twofold. First, to the best of our knowledge, this is the first investigation of the impact of spatial resolution on deep learning-based building semantic segmentation. Second, higher resolution is not always better for segmentation accuracy. According to our datasets, a resolution around 0.3 m offers better cost-effectiveness, which enables researchers and developers to conduct their work efficiently.

§ DATA
We analyzed the impact of spatial resolution on building semantic segmentation over three representative study areas: Austin, Christchurch, and Tokyo. The original resolutions of these datasets are about 0.075 m, 0.150 m, and 0.300 m, respectively.

§ METHODS
Variation in spatial resolution leads to differences in semantic segmentation results. First, in data preprocessing we resampled the imagery to a total of six pixel scales covering the spatial resolution range of most VHR images, as shown in figure <ref>. After that, two representative semantic segmentation models are applied for building semantic segmentation. Finally, the comparison is conducted based on four assessment criteria.

§.§ Preprocessing
Compared with upscaling low-resolution imagery to HR space using a single filter such as bicubic interpolation, SR can increase the image resolution while providing finer spatial details than those captured by the original acquisition sensors. In this study, ESPCN <cit.>, a typical deep learning SR model, is utilized to perform SR. For resampling to lower resolutions, the pixel aggregate method is adopted. In this way, six pixel scales of 0.075 m, 0.150 m, 0.300 m, 0.600 m, 1.200 m, and 2.400 m are generated.

§.§ Semantic Segmentation
As representative deep learning models, we adopt UNet <cit.> and FPN <cit.> to conduct building semantic segmentation and investigate the impact of spatial resolution on the results. In general, UNet applies multiple skip connections between the encoder and decoder layers, while FPN extracts features through bottom-up and top-down pathways. Both models have shown high feasibility and robustness in many segmentation tasks. It should be noted that data augmentation is adopted without random scaling during training, and a model trained on a specific area and resolution is applied to test the corresponding area and resolution for a fair comparison.

§ RESULTS AND DISCUSSIONS
After testing, we generated segmentation results for the three cities at different resolutions with the two deep learning architectures. Figure <ref> illustrates the impact of spatial resolution on deep learning-based building semantic segmentation, and the detailed quantitative results in IoU can be found in Table <ref>. It can be seen that resolution significantly influences the segmentation results, even though the images at some resolutions are generated by resampling methods. As the spatial resolution decreases, the IoU at first increases slightly in Austin and remains stable in both Christchurch and Tokyo. Beyond a certain threshold of 0.300 m, the IoU drops rapidly in all study areas. Importantly, both UNet and FPN show a similar tendency. This makes sense, as buildings have a characteristic physical size; when the spatial resolution is significantly finer than this threshold, the extra detail may not help segmentation performance while providing redundant information.
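To make the evaluation pipeline concrete, the short sketch below illustrates how imagery can be resampled to the coarser pixel scales by area (pixel-aggregate) averaging and how the IoU reported in Table <ref> can be computed from binary building masks. This is a minimal editorial illustration rather than the authors' original code; the function and variable names are ours, and OpenCV's INTER_AREA resampling is used here as a stand-in for the pixel aggregate method.

```python
import cv2
import numpy as np

BASE_RES = 0.075                                   # assumed native resolution in meters (e.g., Austin)
TARGET_RES = [0.075, 0.150, 0.300, 0.600, 1.200, 2.400]

def resample_to(img, base_res, target_res):
    """Down-sample by area averaging (a proxy for the pixel aggregate method)."""
    scale = base_res / target_res
    h, w = img.shape[:2]
    new_size = (max(1, int(round(w * scale))), max(1, int(round(h * scale))))
    return cv2.resize(img, new_size, interpolation=cv2.INTER_AREA)

def iou(pred_mask, gt_mask, eps=1e-7):
    """Intersection over Union for binary building masks of the same shape."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

# Example usage (file name and model outputs are hypothetical):
# tile = cv2.imread("austin_tile.tif")
# pyramid = {r: resample_to(tile, BASE_RES, r) for r in TARGET_RES}
# score = iou(model_prediction, ground_truth)   # masks at the matching resolution
```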
Therefore, the spatial resolution should reach a certain threshold to achieve decent accuracy, and excessively pursuing resolution finer than this threshold is unnecessary in many cases. Such a trade-off should be considered when selecting an appropriate data source. The experimental results obtained from three cities with two deep learning models demonstrate that higher resolution is not always better, and a 0.3 m resolution would be a more cost-effective choice for data selection and preparation in building semantic segmentation tasks.

§ CONCLUSION
In this study, we have investigated the impact of spatial resolution on deep learning-based building semantic segmentation and demonstrated the effectiveness of super resolution techniques in enhancing segmentation accuracy. Our results suggest that spatial resolution plays a critical role in the accuracy and generalization capability of deep learning models for building semantic segmentation, and that super resolution techniques can help to overcome the limitations of low-resolution data. To further advance this line of research, future work could extend our empirical evaluation to other deep learning models, study areas, and data sources.

§ ACKNOWLEDGEMENT
We are grateful for the support and funding provided by the JSPS 21K14261 grant.
http://arxiv.org/abs/2307.04400v1
20230710080159
ARK: Robust Knockoffs Inference with Coupling
[ "Yingying Fan", "Lan Gao", "Jinchi Lv" ]
stat.ME
[ "stat.ME", "math.ST", "stat.ML", "stat.TH" ]
ARK: Robust Knockoffs Inference with Coupling Yingying Fan is Centennial Chair in Business Administration and Professor, Data Sciences and Operations Department, Marshall School of Business, University of Southern California, Los Angeles, CA 90089 (E-mail: [email protected]). Lan Gao is Assistant Professor, Business Analytics and Statistics Department, Haslam College of Business, University of Tennessee, Knoxville, TN 37996 (E-mail: [email protected]). Jinchi Lv is Kenneth King Stonier Chair in Business Administration and Professor, Data Sciences and Operations Department, Marshall School of Business, University of Southern California, Los Angeles, CA 90089 (E-mail: [email protected]). This work was partially supported by NIH R01 Grant 1R01GM131407-01 and NSF Grants DMS-1953356 and EF-2125142. Yingying Fan^1, Lan Gao^2 and Jinchi Lv^1 University of Southern California^1 and University of Tennessee^2 July 8, 2023 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== We investigate the robustness of the model-X knockoffs framework with respect to the misspecified or estimated feature distribution. We achieve such a goal by theoretically studying the feature selection performance of a practically implemented knockoffs algorithm, which we name as the approximate knockoffs (ARK) procedure, under the measures of the false discovery rate (FDR) and family wise error rate (FWER). The approximate knockoffs procedure differs from the model-X knockoffs procedure only in that the former uses the misspecified or estimated feature distribution. A key technique in our theoretical analyses is to couple the approximate knockoffs procedure with the model-X knockoffs procedure so that random variables in these two procedures can be close in realizations. We prove that if such coupled model-X knockoffs procedure exists, the approximate knockoffs procedure can achieve the asymptotic FDR or FWER control at the target level. We showcase three specific constructions of such coupled model-X knockoff variables, verifying their existence and justifying the robustness of the model-X knockoffs framework. Running title: ARK Key words: Knockoffs inference; High dimensionality; Feature selection; False discovery rate control; Family-wise error rate control; Coupling; Robustness § INTRODUCTION The knockoffs inference framework <cit.> is a powerful innovative tool for feature selection with controlled error rates. In particular, the model-X knockoffs <cit.> achieves the false discovery rate (FDR) control at a predetermined level in finite samples without requiring any specific model assumptions on how the response depends on the features, making it an attractive option for feature selection in a wide range of statistical applications. 
The fundamental idea of the knockoffs procedure is to construct knockoff variables that are exchangeable in distribution with the original features but are independent of the response conditional on the original variables. These knockoff variables serve as a control group for the original features, allowing researchers to identify the original features that are relevant to the response. The model-X knockoffs inference has gained increasing popularity since its inception, and there have been flourishing developments and extensions of the knockoffs framework and its spirit, such as the k-familywise error rate (k-FWER) control with knockoffs <cit.>, power analysis for the knockoffs procedure <cit.>, derandomized knockoffs <cit.>, knockoffs inference for time series data <cit.>, the kernel knockoffs procedure <cit.>, and FDR control by data splitting or creating mirror variables <cit.>. A key assumption in the model-X knockoffs inference is that the joint distribution of features is known. However, such information is almost never available in practice. There has been overwhelming empirical evidence that the model-X knockoffs framework is robust to misspecified or estimated feature distributions <cit.>. Yet, the theoretical characterization of its robustness is still largely missing. A notable exception is the recent work of <cit.>, where it was formally and elegantly shown that the knockoffs data matrix collecting the knockoff variables can be generated from a distribution, which we name as the working distribution for ease of presentation, that is different from the true underlying feature distribution, and that the resulting FDR inflation can be measured by the empirical Kullback–Leibler (KL) divergence between the true conditional distribution X_j | X_-j and the working conditional distribution. Here, X_j∈ℝ stands for the jth feature, X_-j∈ℝ^p-1 stands for the feature vector with the jth feature removed, and p is the feature dimensionality. Two important assumptions in their analyses for ensuring the asymptotic FDR control are 1) the working distribution should be learned independently from the training data used for feature selection and 2) the empirical KL divergence between the two knockoffs data matrices (of diverging dimensionalities) generated from the working and true distributions, respectively, needs to vanish as the sample size increases. Although their results are general and apply to arbitrary dependence structure of the response on features, these two assumptions do not always describe the practical implementation. Our results in the current paper are free of the two assumptions discussed above. To make these statements concrete, especially the one about assumption 2), let us consider the scenario when the true feature matrix has independent and identically distributed (i.i.d.) entries from the t-distribution with ν degrees of freedom, but we misspecify it and use the Gaussian distribution as a working distribution to generate the knockoff variable matrix X̂∈ℝ^n× p, where n is the sample size. It can be calculated that the empirical KL divergence between X̂ and the model-X knockoff variable matrix X̃∈ℝ^n× p defined in <cit.> has mean and variance both of order np/[ν (ν + p)]. Thus, only when ν^2 ≫ n min (n, p) (which is equivalent to np/[ν (ν + p)]→ 0) can the FDR inflation as derived in <cit.> vanish asymptotically.
In contrast, our theory shows that as long as ν^2≫ s^4 (log p)^4 + 4/γ for some γ∈ (0, 1) with s≪ n^1/2 a sparsity parameter, the knockoffs procedure based on the working distribution can achieve the asymptotic FDR control. More details for our results and model assumptions are summarized formally in Section <ref>. We provide additional comparisons of our results with those of <cit.> in various parts of the paper where more specifics can be discussed. We emphasize and acknowledge that <cit.> established general robustness results without specific model assumptions, while some of our results rely on certain specific model assumptions. The main point we advocate here is that a different notion of closeness than the KL divergence can be advantageous in studying the robustness of the model-X knockoffs. The major goal of our paper is to establish a general theory on the robustness of the model-X knockoffs framework for the FDR and FWER control. We approach the problem by studying the performance of the approximate knockoffs (ARK) procedure, an algorithm that is most popularly implemented in practice when applying the knockoffs framework. The approximate knockoffs procedure differs from the model-X knockoffs in that the former generates the knockoff feature matrix from a working distribution that can be misspecified or learned from the same training data for feature selection. By showing that the approximate knockoffs inference procedure achieves the asymptotic FDR and FWER control as sample size increases, we can verify the robustness of the model-X knockoffs. An important idea in our technical analyses is coupling, where we pair the approximate knockoffs procedure with the model-X knockoffs procedure in such a way that random variables in these two paired procedures are close in realizations with high probability. Hereafter, we will refer to the model-X knockoffs as the perfect knockoffs procedure to emphasize its difference from the approximate knockoffs procedure. It is important to emphasize that we require the realizations of random variables in the paired procedures to be close, instead of the corresponding distributions being close. This is a major distinction from the assumption in <cit.>. Our new notion of closeness allows us to justify the robustness of the model-X knockoffs in some broader contexts not covered by studies in the existing literature. We also emphasize that although our conditions are imposed on the perfect knockoff variables, we do not need to know or construct them in implementation; the existence of such variables is sufficient for our theoretical robustness analyses. We present our theory by first stating the general conditions on the existence of the coupled perfect knockoff statistics and their closeness to the approximate knockoff statistics in Section <ref>, and then provide examples justifying these general conditions in Sections <ref> and <ref>. More specifically, our theory has three layers, related to different stages in applying the knockoffs inference procedure. Our general theory presented in Section <ref> directly makes assumptions on the quality of the approximate knockoff statistics (cf. (<ref>)) by requiring the existence of the perfect knockoff statistics that are sufficiently close to the approximate knockoff statistics. Then under some regularity conditions imposed on the distribution of these perfect knockoff statistics, we prove that the FDR and FWER are controlled asymptotically using the approximate knockoff statistics. 
This lays the theoretical foundation for our subsequent analyses that are developed by verifying these general conditions in various more specific scenarios. The second layer of our theory, presented in Section <ref>, delves deeper and replaces the coupling condition imposed on the knockoff statistics in Section <ref> with a coupling condition on the approximate knockoff variables generated from some misspecified or estimated feature distribution. Similar in nature to the coupling condition in our general theory, this new condition assumes that there exist perfect knockoff variables that can be coupled with approximate knockoff variables so that their realizations are close to each other with high probability. Since knockoff statistics are known functions of knockoff variables, such alternative condition intuitively and naturally leads to the verification of the coupling condition on knockoff statistics in our general theory. Indeed, we showcase using two commonly used knockoff statistics, namely the marginal correlation statistics and the regression coefficient difference statistics, that the coupling condition on knockoff variables can guarantee the coupling condition on knockoff statistics. We also verify that for each of these two constructions of knockoff statistics, the other regularity conditions in our general theory also hold, ensuring the asymptotic FDR and FWER control. The last layer of our theory is presented in Section <ref> and showcases three specific constructions of the coupled perfect knockoff variables. By imposing conditions on the misspecified or estimated feature distribution, we construct explicitly the coupled perfect knockoff variables and prove that the coupling conditions in the first and second layers of our general theory are satisfied. This gives us a complete theory with conditions imposed on the working distribution for generating knockoff variables and verifies the robustness of the model-X knockoffs inference procedure. Our theory allows high dimensionality of features and does not require an independent learning data set for estimating the feature distribution. There exist some other less related works in the literature that contribute to relaxing the assumption of fully known feature distribution in the model-X knockoffs framework. For instance, <cit.> relaxed such assumption via assuming the existence of sufficient statistic for the model and proposing an alternative conditional exchangeablity for knockoffs given the sufficient statistic. <cit.> investigated the robustness of knockoffs inference with estimated feature distribution in terms of the FDR control in the linear model setting where the features follow a latent factor model with parametric idiosyncratic noise. <cit.> provided theoretical guarantee of the asymptotic FDR control for the approximate knockoffs procedure under an assumption that the FDR function is Lipschitz with respect to feature covariance matrix when the features have the joint Gaussian distribution. The rest of the paper is organized as follows. Section <ref> first introduces the approximate knockoffs procedure and then presents the general conditions and theory for the asymptotic FDR and k-FWER control. We also introduce the coupling idea, a key technique in our theoretical analyses. We illustrate our general theory using two commonly used constructions of knockoff statistics in Section <ref>. Section <ref> further provides three specific constructions of the coupled perfect knockoff variables. 
We conclude our paper by summarizing the key results and discussing some future research directions in Section <ref>. All the proofs and technical details are provided in the Supplementary Material. To facilitate the technical presentation, let us introduce some notation that will be used throughout the paper. We use a_n ≪ b_n or a_n = o(b_n) to represent a_n / b_n → 0, a_n ≫ b_n to represent a_n / b_n →∞, and a_n ≲ b_n or a_n = O(b_n) to represent a_n ≤ C b_n for an absolute constant C > 0. Let a b and a b be the minimal and maximal values of a and b, respectively. For a vector ∈ℝ^p, denote by _1, _2, and _0 the ℓ_1-norm, ℓ_2-norm, and ℓ_0-norm, respectively. For 1 ≤ j ≤ p, _j is the jth component of and _-j is a subvector of with the jth component removed. For a matrix ∈ℝ^n × p, denote by _i, j the (i, j)th entry of , _j the jth column of , and _A_1, A_2 a submatrix of consisting of (_i, j)_i ∈ A_1, j ∈ A_2 for sets A_1 ⊂{1, …, n } and A_2 ⊂{1, …, p }. Let _max and _2 be the maximum norm and spectral norm of a matrix , respectively. For 1 ≤ j ≤ p, -j represents the set {1, …, p }∖{j}, and denote by |𝒜| the cardinality of set 𝒜. For a positive definite matrix , let λ_min() and λ_max() be the smallest and largest eigenvalues of , respectively. § GENERAL THEORY ON ROBUST KNOCKOFFS INFERENCE WITH COUPLING §.§ Model setup and model-X knockoffs framework Assume that we have n i.i.d. observations { (_i, y_i)}_i = 1^n from the population (X, Y), where X = (X_1, …, X_p)^⊤ is the p-dimensional feature vector and Y ∈ℝ is a scalar response. Here, the feature dimensionality p can diverge with the sample size n. Adopting the matrix notation, the n i.i.d. observations can be written as the data matrix = (_i, j ) ∈ℝ^n × p collecting the values of all the features and vector = (y_1,⋯, y_n)^⊤∈ℝ^n collecting the values of the response. A feature X_j is defined as null (or irrelevant) if and only if it is independent of the response conditional on all the remaining features; that is, Y X_j | X_-j, where X_-j is a subvector of X with the jth component removed. Denote by ℋ_0 = {1 ≤ j ≤ p: X_j  } the set of null features and ℋ_1 = ℋ_0^c that of nonnull (or relevant) features. To ensure the model identifiability and interpretability, we follow <cit.> and assume that ℋ_1 exists and is unique. Further assume that the subset of relevant features is sparse such that p_1 = | ℋ_1 | = o(n p), where |𝒜| stands for the cardinality of a given set. The goal is to select as many relevant features as possible while controlling some error rate measure at the prespecified target level. Two commonly used measures for evaluating the feature selection performance are the FDR <cit.> and k-FWER <cit.>, where the FDR is defined as the expectation of the fraction of false discoveries among all the discoveries and the k-FWER is defined as the probability of making k or more false discoveries. Specifically, for an outcome Ŝ of some feature selection procedure, the FDR and k-FWER are defined as = [] with = | Ŝ∩ℋ_0 | / | Ŝ | k- = ℙ ( |Ŝ∩ℋ_0| ≥ k ), respectively. The model-X knockoffs framework provides a flexible way for controlling the FDR at some prespecified target level in finite samples <cit.>, allowing arbitrary dimensionality of X and arbitrary dependence between response Y and feature vector X. The knockoffs method has also been explored in the context of the k-FWER control by <cit.>. 
A key step of the model-X knockoffs inference <cit.> is to generate the model-X knockoff variables X = (X_1, …, X_p)^⊤ such that X Y | X and (X, X)_(S)d= (X, X) S ⊂{1, …, p}, where (X, X)_(S) is obtained by swapping the components X_j and X_j in (X, X) for each j ∈ S. The construction of the model-X knockoff variables, which we will refer to as the perfect knockoff variables in the future presentation, requires the exact knowledge of the distribution of feature vector X. For example, Algorithm 1 in <cit.> provided a general approach to generating the perfect knockoff variables when such information is available. However, the exact knowledge of feature distribution is usually unavailable in real applications. Thus, in practical implementation, the problem becomes identifying the relevant subset ℋ_1 with the approximate knockoff variables generated from a feature distribution that can be different from the true underlying one. As stated in the Introduction, we study the robustness of the model-X knockoffs procedure by investigating the feature selection performance of its practical implementation, which we name as the approximate knockoffs procedure and formally present it in the next section for completeness. §.§ Feature selection with approximate knockoffs In practice, the approximate knockoffs inference procedure below is implemented popularly for controlling the FDR or k-FWER. 1) Generating approximate knockoff variables. Since the true underlying feature distribution F(·) is generally unavailable, we generate the knockoff variables from some user-specified feature distribution F̂(·), which may depend on the sample (, ), using the same algorithm proposed for generating the perfect knockoff variables (e.g., Algorithm 1 in <cit.>). Denote by = (_i, j) ∈ℝ^n× p the resulting approximate knockoffs data matrix. 2) Constructing approximate knockoff statistics. Pretend that were perfect knockoffs data matrix and follow the same procedure as in <cit.> to calculate the knockoff statistics Ŵ_j with j=1,⋯, p. Specifically, we first compute the feature importance statistics (Z_1, …, Z_p, Ẑ_1, …, Ẑ_p)^⊤ = t((, ), ), where t(·) is a measurable function of input ((, ), ), and Z_j and Ẑ_j measure the importance of the jth feature and its approximate knockoff counterpart relative to the response, respectively. Then the approximate knockoff statistic Ŵ_j for the jth feature is defined as Ŵ_j = f_j(Z_j, Ẑ_j), where f_j (·, ·) is an antisymmetric function satisfying f_j(x, y) = - f_j (y, x). See <cit.> for examples and characterizations on the valid construction of knockoff statistics. 3) Selecting relevant features. Calculate a data-driven threshold T for the knockoff statistics {Ŵ_j}_j=1^p and select the set of important features as Ŝ = {1≤ j ≤ p: Ŵ_j ≥T}. The thresholds for the FDR control and k-FWER control are different. Specifically, denoting 𝒲̂ = {|Ŵ_1|, …, |Ŵ_p|}, the thresholds for the FDR and k-FWER control are defined as T = min{t ∈𝒲̂: #{j: Ŵ_j ≤ -t}/#{Ŵ_j ≥ t} 1 ≤ q} and T_v = sup{t ∈𝒲̂: #{j: - Ŵ_j ≥ t} = v } with v the largest integer such that ∑_i = k^∞ 2^-(i+v)i+v-1 i≤ q, respectively, where q∈ (0,1) is the prespecified level for the FDR or k-FWER. It is seen that the only difference of the algorithm above from the perfect knockoffs procedure is how the knockoffs data matrix is generated. The perfect knockoffs procedure based on the true underlying feature distribution F(·) has been shown to control the FDR or k-FWER at the target level <cit.>. 
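For concreteness, the selection step 3) with the data-driven FDR threshold in (<ref>) can be implemented as in the short sketch below. This is an editorial illustration in Python rather than code from the paper; zero-valued statistics are excluded from the candidate set, which is a minor implementation choice on our part.

```python
import numpy as np

def fdr_threshold(W_hat, q):
    """T = min{ t in {|W_j|} : #{j: W_j <= -t} / max(#{j: W_j >= t}, 1) <= q }."""
    candidates = np.sort(np.abs(W_hat))
    for t in candidates:
        if t == 0:
            continue  # implementation choice: restrict to strictly positive thresholds
        fdp_estimate = np.sum(W_hat <= -t) / max(np.sum(W_hat >= t), 1)
        if fdp_estimate <= q:
            return t
    return np.inf  # no feasible threshold: nothing is selected

def select_features(W_hat, q):
    """Return the index set {j : W_hat_j >= T} of the approximate knockoffs procedure."""
    T = fdr_threshold(W_hat, q)
    return np.where(W_hat >= T)[0]
```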
For the approximate knockoffs inference, however, it is reasonable to expect some inflation in the FDR and k-FWER control, and the inflation level depends on the qualities of the approximate knockoff variable matrix and the resulting knockoff statistics {Ŵ_j}_j=1^p. A desired property is that as the approximate knockoff statistics “approach" the perfect knockoff statistics, the level of inflation also vanishes. One contribution of our paper is to formally introduce a notion of closeness measuring the qualities of the approximate knockoff statistics {Ŵ_j}_j=1^p and knockoff variable matrix . As stated in the Introduction, our technical analyses have three layers, corresponding reversely to the different steps of the approximate knockoffs inference procedure described above. To put it into more content, note that the set of selected features Ŝ is defined directly as a function of the approximate knockoff statistics {Ŵ_j}_j=1^p. Hence, given {Ŵ_j}_j=1^p feature selection can be conducted without the knowledge of the approximate knockoff variable matrix or the feature distribution F(·). For this reason, the first layer of our analysis concerns the quality of {Ŵ_j}_j=1^p and characterizes what kind of approximate knockoff statistics can yield the asymptotic FDR or k-FWER control. The second layer of our analysis studies the quality of and is built on the first layer. We characterize what kind of approximate knockoff variable matrix can lead to valid knockoff statistics {Ŵ_j}_j=1^p in the sense of achieving the asymptotic FDR or k-FWER control. The third layer of our analysis goes all the way to the root of the knockoffs inference and provides specific examples and conditions on F̂(·) for ensuring the asymptotic FDR or k-FWER control. The key idea empowering our analyses is variable coupling behind the approximate knockoffs (ARK) procedure, which we formally introduce in the next section. §.§ Robust knockoffs inference with coupling An important observation is that the perfect knockoff variables in the model-X knockoffs framework <cit.> are not unique. Consequently, the knockoff statistics are not unique either. Indeed, even with the same algorithm (e.g., Algorithm 1 in <cit.>), the knockoff variables generated from different runs of the algorithm are only identically distributed. Our coupling idea is deeply rooted on such observation. Let us introduce some additional notation to facilitate our formal presentation of the general theory. Following the model-X knockoffs framework, for a realization of the perfect knockoff variable matrix generated from the true feature distribution F(·), we let (Z_1^*,…, Z_p^*, Z_1, …, Z_p)^⊤ = t((, ), ) and define the perfect knockoff statistics W_j = f_j (Z_j^*, Z_j) for 1 ≤ j≤ p, where functions t(·) and f_j(·) are chosen to be the same as in the approximate knockoffs inference procedure. We next establish a general theory on the asymptotic FDR control and k-FWER control for the approximate knockoffs inference procedure with regularity conditions imposed on Ŵ_j's. [Coupling accuracy] There exist perfect knockoff statistics {W_j}_j = 1^p such that for some sequence b_n → 0, ℙ( max_1 ≤ j ≤ p | Ŵ_j - W_j | ≥ b_n ) → 0. Conditions on the convergence rate b_n for ensuring the asymptotic FDR or k-FWER control will be specified in the subsequent assumptions. 
Condition <ref> above couples each realization of the approximate knockoff statistics {Ŵ_j}_j=1^p with a realization of the perfect knockoff statistics {W_j}_j=1^p, and they need to be sufficiently close to each other with high probability. Note that the existence of such {W_j}_j=1^p is required only for the theory, whereas the implementation uses only {Ŵ_j}_j=1^p. We will provide examples in later sections verifying the existence of such coupled {W_j}_j=1^p. The two conditions below are on the quality of the ideal knockoff statistics {W_j}_j=1^p and the signal strength in the data as measured by W_j's. [Average concentration of W_j] There exist deterministic quantities { w_j }_j=1^p such that p^-1∑_j = 1^pℙ ( | W_j - w_j | ≥δ_n ) = o(p^-1), where δ_n → 0 is a sequence satisfying δ_n ≥ b_n. [Signal strength] Let 𝒜_n = {j ∈ℋ_1: w_j ≥ 5 δ_n }. It holds that a_n = | 𝒜_n | →∞ and w_j > - δ_n for j ∈𝒜_n^c. As discussed in <cit.> and <cit.>, a desired property of the knockoff statistics is to have a large and positive value for W_j if j∈ℋ_1, and a small and symmetric around zero value for W_j if j∈ℋ_0. Conditions <ref> and <ref> above together formalize this property in the average probability sense. Note that there is no requirement that each individual w_j with j∈ℋ_1 is positive and large; we only need that there exist enough number (i.e., a_n) of w_j's with j∈ℋ_1 that are positive and large enough. Implicitly, a_n→∞ requires that the number of relevant features |ℋ_1| diverges with sample size as well. Condition <ref> requires that each perfect knockoff statistic W_j is concentrated around its corresponding w_j with rate δ_n in an average probability sense. Let us define p_0 = |ℋ_0| and G(t) = p_0^-1∑_j∈ℋ_0ℙ (W_j ≥ t). By <cit.>, the perfect knockoff statistics W_j with j∈ℋ_0 are symmetrically distributed around zero. It follows that G(t) = p_0^-1∑_j∈ℋ_0ℙ (W_j ≤ -t). We need to impose the technical conditions below on the distribution of the perfect knockoff statistics for our robustness analysis. [Weak dependence] For some constants 0 < γ < 1, 0< c_1 < 1, C_1 > 0, and a positive sequence m_n = o(a_n), it holds that ( ∑_j ∈ℋ_01 (W_j > t) ) ≤ C_1 m_n p_0 G(t) + o ( ( log p )^ - 1/γ [p_0 G (t)]^2 ) uniformly over t ∈ (0, G^-1 ( c_1 q a_n / p ) ]. [Distribution of W_j] Assume that G(t) is a continuous function. For the same constants γ and c_1 as in Condition <ref>, it holds that as n →∞, (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - b_n ) - G(t + b_n) / G(t) → 0 and a_n^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + b_n ) → 0. Condition <ref> above can be easily satisfied if W_j's with j∈ℋ_0 are independent of each other. At the presence of dependence, it imposes an assumption on the strength of correlation among the indicator functions 1(W_j>t) with j∈ℋ_0. The ratio G(t - b_n ) - G(t + b_n) / G(t) in Condition <ref> above is closely related to the hazard rate function in survival analysis if G(t) has a probability density function. Loosely speaking, assumption (<ref>) requires that the hazard rate function should have enough smoothness and be more or less bounded uniformly over the range t ∈ (0, G^-1 ( c_1 q a_n / p ) ]; it plays an essential role in determining the coupling accuracy b_n. Assumption (<ref>) allows a small fraction of W_j's for important features to take negative values with nonvanishing probabilities. We are now ready to present our first general theorem on the FDR control for the approximate knockoffs inference procedure. 
Under Conditions <ref>–<ref>, we have lim sup_n →∞≤ q. The main idea of the proof is to show that ∑_j ∈ℋ_01 (Ŵ_j ≥ t) ≈∑_j ∈ℋ_01 (W_j ≥ t) ≈∑_j∈ℋ_0ℙ (W_j ≥ t) and similarly ∑_j ∈ℋ_01 (Ŵ_j ≤ - t) ≈∑_j∈ℋ_0ℙ (W_j ≤ -t) uniformly over 0 < t ≤ G^-1 (c_1 q a_n/p) with asymptotic probability one, under Conditions <ref>, <ref>, and <ref>. In addition, we will show that threshold T falls into the range (0, G^-1 (c_1 q a_n/p)] with asymptotic probability one under Conditions <ref>–<ref>. Thus, ∑_j ∈ℋ_01 (Ŵ_j ≥ T) ≈∑_j ∈ℋ_01 (Ŵ_j ≤ - T) with asymptotic probability one by the symmetry of {W_j}_j ∈ℋ_0. Consequently, the FDR of the approximate knockoffs procedure is asymptotically the same as that of the perfect knockoffs procedure, where the latter has been proved to be controlled at the target level. This ensures that the FDR of the approximate knockoffs procedure can be controlled asymptotically. Next we will establish the companion result on the k-FWER control for the approximate knockoffs inference procedure. Recall that the selected subset is Ŝ_2 := {1 ≤ j ≤ p: Ŵ_j ≥ T_v} with threshold T_v given in (<ref>). Denote by V̂ = | Ŝ_2 ∩ℋ_0| the number of false discoveries. Similar to the FDR analysis, we assume that |ℋ_1|→∞ as n→∞. Further, we consider the scenario when k diverges very slowly with n as well. Our theorem will again be built on the key Condition <ref> that there exist coupled perfect knockoff statistics that are sufficiently close to the approximate knockoff statistics. However, different from the FDR study when Conditions <ref>–<ref> are needed, we assume instead the two technical conditions below for the distribution of the perfect knockoff statistics, which can be interpreted intuitively similar to Conditions <ref>–<ref>. [Weak dependence] For constants 0 < γ < 1 and C_2 > 0, and a positive sequence m_n = o(k), it holds that ( ∑_j ∈ℋ_01 (W_j > t) ) ≤ C_2 m_n p_0 G(t) + o( (log k )^ - 1/γ [p_0 G (t)]^2 ) uniformly over t ∈ (G^-1 ( 3 k / 2 p ), G^-1 ( k / 2 p ) ). Assume that G(t) is a continuous function. It holds that as n →∞, sup_t ∈( G^-1 (3k/2p), G^-1 (k/2p)) G (t - b_n) - G(t + b_n)/G(t)→ 0 and k^-1∑_j ∈ℋ_1ℙ(W_j < - G^-1 (3 k /2 p) ) → 0 Now we are ready to present our general theorem on the k-FWER control for the approximate knockoffs procedure. Assume that Conditions <ref>, <ref>, and <ref> are satisfied, k →∞, and m_n / k → 0 as n→∞. Then for each ε > 0, we have lim sup_n →∞ ℙ ( V̂≥ k(1 + ε) ) ≤ q. <cit.> showed that the perfect knockoffs inference procedure for the k-FWER control provides precise finite-sample control on the k-FWER. The main idea for proving Theorem <ref> is to compare the approximate knockoff statistics with their coupled perfect counterparts and show that the approximate threshold T_v satisfies |T_v - T_v| ≤ b_n as long as max_1 ≤ j ≤ p |Ŵ_j - W_j| ≤ b_n, where T_v is the corresponding threshold from the perfect knockoff statistics. Moreover, we can show that for each > 0, with high probability it holds that T_v + M_v + 1 < T_v - 2b_n ≤T_v + M_v for M_v ≤ v. Therefore, the probability of the approximate knockoffs inference procedure making at least k false discoveries can be related to that of the FWER control with the perfect knockoff statistics, which establishes the desired result in Theorem <ref>. § ILLUSTRATION OF THE GENERAL THEORY §.§ Characterization of approximate knockoff variables We have established in Section <ref> a general theory on the asymptotic control of the FDR and k-FWER for the approximate knockoffs inference. 
The key assumption for ensuring the asymptotic FDR and k-FWER control is Condition <ref>. Since the knockoff statistics are intermediate results calculated from the knockoff variables, it is important to provide a characterization on the quality of the approximate knockoff variable matrix that can guarantee Condition <ref>. The assumption below is imposed for such a purpose. For constructed from the approximate knockoffs inference procedure, there exists a perfect knockoff data matrix and an asymptotically vanishing sequence Δ_n such that ℙ(max_1 ≤ j ≤ p n^-1/2_j - _j _2 ≥Δ_n) → 0, where _j and _j are the jth columns of the approximate and perfect knockoff data matrices and , respectively. Condition <ref> above couples each approximate knockoff variable _j with a perfect knockoff variable _j. Similar to Condition <ref>, we need the realizations instead of the distributions of _j and _j to be close, which is a major distinction from the assumption in <cit.>. We next show that the closeness between and can lead to the closeness between Ŵ_j's and W_j's as required by Condition <ref>. Since different construction of the knockoff statistics depends on the feature matrix differently, we showcase the theory using two popularly used constructions of the knockoff statistics: the marginal correlation knockoff statistics and the regression coefficient difference (RCD) knockoff statistics. §.§ Marginal correlation knockoff statistics The marginal correlation is a commonly used measure on variable importance for feature screening due to its simplicity. Given an approximate knockoff variable matrix and its coupled perfect counterpart satisfying Condition <ref>, the approximate knockoff statistics based on the marginal correlation difference are defined as Ŵ_j = ( √(n)_2 )^-1 ( | _j^⊤ | - | _j^⊤ | ) for 1 ≤ j ≤ p, and the coupled perfect knockoff statistics are given by W_j = ( √(n)_2 )^-1 ( | _j^⊤ | - | _j^⊤ | ) for 1 ≤ j ≤ p. Observe that W_j - Ŵ_j = ( √(n)_2 )^-1 ( | _j^⊤ | - | _j^⊤ | ) and thus under Condition <ref>, we have that with asymptotic probability one, max_1≤ j≤ p|W_j-W_j|≤Δ_n. This result is summarized formally in Lemma <ref> in Section <ref> of the Supplementary Material. We consider the flexible nonparametric regression model Y = f(X_ℋ_1) + ε, where f is some unknown regression function, X_ℋ_1 = (X_j)_j ∈ℋ_1 contains all the relevant features for response Y, and ε is the model error satisfying that ε X and (ε) = 0. Assume that feature vector X = (X_1, …, X_p)^⊤d∼ N(0, ) with the positive definite covariance matrix. Moreover, let the distribution of the perfect knockoff variables X = (X_1, ⋯, X_p)^⊤ satisfy that (X, X) = (X_1, ⋯, X_p, X_1, ⋯, X_p) d∼ N (0, [ - r I_p; - r I_p ; ]), where r > 0 is a constant such that the above covariance matrix is positive definite. Here, we consider the equicorrelated construction for simpler presentation and the diagonal matrix r I_p can be replaced with a general version (r_1, …, r_p) with possibly distinct diagonal entries { r_j}_j = 1^p. The above construction of the perfect knockoff variables has been discussed in <cit.>. Note that the Gaussian distribution assumption is imposed mainly to verify the general Conditions <ref> and <ref>. If one assumes directly these two conditions, the Gaussian distribution assumption can be removed. Furthermore, we make the additional technical assumptions below on the generative model (<ref>) to verify the conditions in our general theory presented in Section <ref>. 
Y is a sub-Gaussian random variable with sub-Gaussian norm Y _ψ_2. Define 𝒜_n = { j ∈ℋ_1:( Y^2)^ - 1/2 (| (X_j Y) | - | ( X_j Y) |)| ≥ 5 δ_n } with δ_n = C_X,Y√(log p/n), where C_X,Y:=max_1 ≤ j ≤ p{ 16 √(2) X_j _ψ_2 Y _ψ_2/ ( Y^2)^1/2 8√(2) |w_j| Y _ψ_2^2 / Y^2 }. It holds that a_n:= |𝒜_n| →∞ and C_X,Y is a positive constant that is independent of p and n. Denote by (^-1)_j the jth column of matrix ^-1, _i, j the (i, j)th entry of matrix , and _ℋ_1, j a vector given by (_i, j)_i ∈ℋ_1. Recall the definition G(t) = p_0^-1∑_j∈ℋ_0ℙ (W_j ≥ t). Matrices ^-1 and are sparse in the sense that max_1≤ j ≤ p (^-1)_j_0 ≤ m_n and ∑_j ∈ℋ_01 ( _ℋ_1, j≠ 0 ) ≤ m_n. In addition, C_1 < r < min_1 ≤ j ≤ p_j, j≤max_1 ≤ j ≤ p_j, j < C_2 for some constants C_1> 0 and C_2 > 0. It holds that |ℋ_1|^-1∑_j ∈ℋ_1ℙ (W_j < - t ) ≤ G(t) for all t ∈ (0, C_3√(n^-1log p)) with C_3>0 some large constant. Under Conditions <ref>–<ref>, we can verify that Conditions <ref>–<ref> are satisfied. This together with Condition <ref> and our general theorem on the FDR control (cf. Theorem <ref>) leads to the theorem below. Assume that Conditions <ref>–<ref> are satisfied. In addition, assume that for some constant 0 < γ < 1, (log p)^1/γ m_n / a_n → 0 and the coupling accuracy Δ_n in Condition <ref> satisfies √(n)Δ_n (log p)^1/2 + 1/γ→ 0. Then for the approximate knockoffs inference based on the marginal correlation, we have lim sup_n →∞≤ q. Let us make a few remarks on the conditions and result presented in Theorem <ref> above. Condition <ref> verifies the signal strength assumption in Condition <ref> in the specific context of model (<ref>) and marginal correlation knockoff statistics. We show in Lemma <ref> in Section <ref> of the Supplementary Material that Condition <ref> holds with δ_n = O(√(n^-1log p )). Since we assume Gaussian feature distribution in this section, the dependence among the indicator functions as required by Condition <ref> is determined by covariance matrix . Hence, Condition <ref> is imposed to justify the validity of Condition <ref>. It is worth mentioning that the sparse dependence structure assumed in Condition <ref> can be replaced with a general assumption that the conditional distribution X_ℋ_0 | X_ℋ_1 has sparse pairwise dependency and the sequence { h_j(t; X_ℋ_1): = ( 1 (W_j ≥ t) | X_ℋ_1 ) }_j ∈ℋ_0 has sparse pairwise correlation for each given t > 0. Condition <ref> is a technical assumption that is intuitive and requires that on average, the probability of a relevant feature having a negative valued W_j is smaller than the corresponding probability of an irrelevant feature. Such condition is compatible with our requirement that relevant features should have positive and larger magnitude for W_j. The convergence rate assumption √(n)Δ_n (log p)^1/2 + 1/γ→ 0 in Theorem <ref> indicates that Δ_n ≪δ_n ∼√(n^-1log p), where δ_n is the concentration rate of individual W_j to w_j. In view of (<ref>), the requirement of Δ_n≪δ_n indeed constrains that the quality of the approximate knockoff statistics, as measured by Δ_n, is of an order smaller than the concentration rate δ_n; this is a general condition we need and not unique to the marginal correlation knockoff statistics. It is worth mentioning that the bound obtained in (<ref>) may be improved under more specific model assumptions. For instance, if (X̂_j, X_j) Y for j ∈ℋ_0 and Y is a sub-Gaussian random variable with (Y) = 0, then under Condition <ref> we can show that max_1 ≤ j ≤ p | Ŵ_j - W_j| ≤ C Δ_n √(n^-1log p). 
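As a concrete illustration of this construction, the marginal-correlation knockoff statistics defined in this subsection can be computed directly from the data and the approximate knockoff matrix, as in the minimal sketch below (our own hedged illustration; the function and variable names are not from the paper).

```python
import numpy as np

def marginal_corr_knockoff_stats(X, X_knockoff, y):
    """W_hat_j = (sqrt(n) * ||y||_2)^(-1) * ( |X_j' y| - |X_knockoff_j' y| ) for j = 1, ..., p."""
    n = X.shape[0]
    scale = np.sqrt(n) * np.linalg.norm(y)
    return (np.abs(X.T @ y) - np.abs(X_knockoff.T @ y)) / scale
```

The resulting statistics can then be passed to the data-driven threshold described earlier to obtain the selected set.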
Later in Section <ref>, we will provide extensive analysis on the coupling order Δ_n using some specific examples of feature distributions. Next we will present the parallel result on the k-FWER control. Assume that Conditions <ref>, <ref>, and <ref> are satisfied, k →∞, m_n / k → 0, and Δ_n √(n log p)→ 0. Then for each ε > 0, we have lim sup_n →∞ ℙ ( V̂≥ k(1 + ε) ) ≤ q. The interpretations of the assumptions in the context of the k-FWER control are similar and thus omitted here for simplicity. §.§ Regression coefficient difference with debiased Lasso Another popularly used construction of the knockoff statistics is the regression coefficient difference (RCD). Let us consider the linear regression model = + , where = (β_j)_1 ≤ j ≤ p∈ℝ^p is the true regression coefficient vector, d∼ N(0, σ^2 I_n) is the model error vector, and . Assume that feature vector X = (X_1, …, X_p)^⊤ has mean 0_p ∈ℝ^p and covariance matrix ∈ℝ^p × p. Denote by ^ = (^⊤ , 0_p^⊤)^⊤∈ℝ^2p the augmented true parameter vector. Let =(β̂_j)_1 ≤ j ≤ 2p∈ℝ^2p be the debiased Lasso estimator (<cit.>) based on the augmented design matrix ^ := [, ], where is the approximate knockoff variable matrix. Assume that Condition <ref> is satisfied and is the coupled perfect knockoffs variable matrix. Similarly, define ^ := [, ]. Then can be coupled with the debiased Lasso estimator denoted as =(β_j)_1 ≤ j ≤ 2p∈ℝ^2p based on ^. Then the regression coefficient difference knockoff statistics can be defined as Ŵ_j = |β̂_j| - |β̂_j+p| and W_j = |β_j| - |β_j+p| for the approximate and perfect knockoffs procedures, respectively, for 1 ≤ j ≤ p. We provide the explicit definition of the debiased Lasso estimator to assist future presentation. For 1≤ j ≤ 2p, the debiased Lasso estimator is a one-step bias correction from some initial estimator ^=(β̂_j^)_1 ≤ j ≤ 2p∈ℝ^2p and is defined as β̂_j = β̂^_j + _j^⊤( - ^^) /_j^⊤^_j, where _j is the score vector defined as _j = ^_j - _-j_j with _j := _b{^_j - ^_-jb_2^2 /2n + λ_j b_1 } and {λ_j}_j = 1^2p the nonnegative regularization parameters. We construct the initial estimator as ^ := _b{ - ^b_2^2 /2n + λb_1 } with λ = C √(n^-1log (2p)) the regularization parameter and C>0 some constant. Analogously, the coupled debiased Lasso estimator can be defined componentwisely as β_j = β^_j + _j^⊤( - ^^) /_j^⊤^_j 1 ≤ j ≤ 2p, where ^ = (β_j^)_1 ≤ j ≤ 2p := _b{ - ^b_2^2 /2n + λb_1 } and _j = ^_j - ^_-j_j _j := _b{^_j - ^_-jb_2^2 /2n + λ_j b_1 }. It is important to emphasize that the same regularization parameters λ and λ_j's in defining should be used as in defining in (<ref>) so that their constructions differ only by the used feature matrix; this plays a key role in applying our coupling technique. Indeed, we prove in Lemma <ref> in Section <ref> of the Supplementary Material that the coupling technique together with Condition <ref> and some other regularity conditions ensures that with asymptotic probability one, max_1≤ j≤ 2p|β_j-β̂_j|≲Δ_ns√((log p)/n). The above result guarantees that Ŵ_j's and W_j's are also uniformly close over 1≤ j≤ p with max_1≤ j≤ p|Ŵ_j - W_j|≲Δ_ns√((log p)/n). As long as sΔ_n→ 0, this upper bound has a smaller order than the concentration rate δ_n of W_j (cf. Condition <ref>), because here δ_n ∼√(n^-1log p) as shown in our Lemma <ref> in Section <ref>. As commented after Theorem <ref>, the assumption that the coupling rate of max_1≤ j≤ p|W_j-Ŵ_j| is of a smaller order than the concentration rate δ_n plays a key role in establishing our theory on the asymptotic FDR control. 
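To illustrate the RCD construction just described, the sketch below computes Ŵ_j = |β̂_j| - |β̂_{j+p}| from the one-step debiased Lasso on the augmented design, using scikit-learn's Lasso for both the initial fit and the nodewise regressions. This is an editorial sketch, not the authors' implementation; the regularization parameters are assumed to be supplied by the user, and a single nodewise penalty is used in place of the column-specific λ_j for brevity.

```python
import numpy as np
from sklearn.linear_model import Lasso

def rcd_debiased_lasso_stats(X, X_knockoff, y, lam, lam_node):
    """Knockoff statistics W_j = |beta_j| - |beta_{j+p}| from the debiased Lasso on [X, X_knockoff]."""
    Xa = np.hstack([X, X_knockoff])
    n, P = Xa.shape                      # P = 2p columns
    # Initial Lasso fit; sklearn's objective ||y - X b||^2 / (2n) + alpha * ||b||_1
    # matches the scaling of the initial estimator in the text.
    beta_init = Lasso(alpha=lam, fit_intercept=False).fit(Xa, y).coef_
    resid = y - Xa @ beta_init
    beta_debiased = np.empty(P)
    for j in range(P):
        others = np.delete(np.arange(P), j)
        # Nodewise Lasso: regress column j on the remaining columns to form the score vector z_j.
        gamma_j = Lasso(alpha=lam_node, fit_intercept=False).fit(Xa[:, others], Xa[:, j]).coef_
        z_j = Xa[:, j] - Xa[:, others] @ gamma_j
        # One-step bias correction: beta_j = beta_init_j + z_j'(y - Xa beta_init) / (z_j' Xa_j).
        beta_debiased[j] = beta_init[j] + (z_j @ resid) / (z_j @ Xa[:, j])
    p = P // 2
    return np.abs(beta_debiased[:p]) - np.abs(beta_debiased[p:])
```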
We next introduce some additional notation and formally present the regularity conditions specific to this section. Observe that by symmetry, the augmented feature vector with the perfect knockoff variables has covariance matrix ^A = [ - D; - D ; ], where D is a diagonal matrix such that matrix ^A is positive definite. Let ^A = (^A)^-1 and _j = (_j, l)_l ≠ j with _j, l = - ^A_j, l /^A_j, j. It has been shown in <cit.> that the residuals e_j = X_j^ - X^_-j_j satisfy that ( e_j, X_-j^ ) = 0, ( e_j ) = 1/ ^A_j, j, and (e_j, e_l ) = ^A_j, l/^A_j, j^A_l, l. For 1 ≤ j ≤ 2p, denote by 𝒮_j = (_j) ∪(_j) ∪(_j). Let J = (^) ∪() ∪() and s := ^_0 = _0 = o(n). We make the technical assumptions below. a) For some constant C_4 > 0, ℙ (|J| ≤ C_4 s) → 1. b) For some sequence m_n ≲ s, it holds that max_1 ≤ j ≤ 2 p^A_j _0 ≤ m_n and ℙ ( max_1 ≤ j ≤ 2p |𝒮_j| ≤ C_5 m_n ) → 1 with some constant C_5>0. c) max_1 ≤ j ≤ 2p_j _2 ≤ C_6 and C_7 < λ_min (^A ) ≤λ_max (^A) < C_8 with some positive constants C_6, C_7, and C_8. [Restrictive eigenvalues] Assume that with probability 1 - o(1), min_δ_0 ≤ C_9 s δ^⊤[^ ] ^⊤^δ/ nδ_2^2 ≥ c_1 for some large enough constant C_9>0 and a small constant c_1 > 0. The features X_j's and errors e_j's are sub-Gaussian with sub-Gaussian norms X_j _ψ_2≤ϕ and e_j _ψ_2≤ϕ for some constant ϕ > 0. Let 𝒜_n = {j ∈ℋ_1: |β_j| ≫√(n^-1log p)} and it holds that a_n := |𝒜_n| →∞. The features X_j's and the errors e_j's are sub-Gaussian satisfying X_j _ψ_2≤ϕ and e_j _ψ_2≤ϕ for some constant ϕ > 0 and with probability 1 - O( p^-c), _j - _j _1 ≤ C n^-1/2 a_n,1,  [, ]_-j (_j - _j ) _2^2 ≤ C a_n,2 n^-1 [, ]^⊤_j _max≤ C,  ^init - _1 ≤ C s √(log p/n), where a_n,1 and a_n,2 are two possibly diverging sequences. Moreover, |σ^jk|/√(σ^jjσ^kk) < c for some constant 0< c < 1. We are now ready to state our results on the FDR control for the approximate knockoffs inference based on the debiased Lasso coefficients. Assume that Conditions <ref> and <ref>–<ref> hold, m_n / a_n → 0, and m_n^1/2 s (log p)^3/2 + 1/γ/√(n) + Δ_n s (log p)^1 + 1 /γ→ 0 for some constant 0 < γ < 1. Then we have lim sup_n →∞≤ q. Similarly as discussed in the last section, Condition <ref> is used to verify the weak dependence assumption in Condition <ref>. Conditions <ref> and <ref> are two regularity assumptions imposed for proving (<ref>). Condition <ref> contributes to verifying the general signal strength requirement in Condition <ref>. We have the parallel theorem for the k-FWER control below. Assume that Conditions <ref> and <ref>–<ref> are satisfied, k →∞, m_n / k → 0, and m_n^1/2 s (log p)^3/2 (log k)^1/γ/√(n) + Δ_n s log p → 0 for some constant 0 < γ < 1. Then for each ε > 0, we have lim sup_n →∞ ℙ ( V̂≥ k(1 + ε) ) ≤ q. Assume Conditions <ref> and <ref>-<ref> are satisfied. When (log p)^1/γ + 1/2 ( n^1/2 b_n + n^-1/2 s log p) → 0, we have lim sup_(n, p) →∞≤ q. We start with verification of Condition <ref>. Let J = (_0) ∪() ∪(). Assume Condition <ref> is satisfied. Moreover, with asymptotic probability one, it holds that |J| ≤ m = o(n) and the restricted eigenvalue of n^-1 [,]^⊤_J' [,]_J' is lower bounded by κ_c for any J' with |J'| ≤ m. Then we have ℙ( max_1 ≤ j ≤ p |Ŵ_j - W_j |≲κ_c^-1 m^3/2Δ_n (n^-1log p)^1/2 + m^1/2Δ_n n^-1/2) ≥ 1 - o(1). This result can be improved if we apply another version of condition (take advantage of the specific structure in the distance between approximate and perfect knockoff variables). Cite Lv and Fan's JASA paper on asymptotic equivalence.... We continue to verify Condition <ref>. 
let w_j = |β_j^0| for 1 ≤ j ≤ p. Then we can obtain the following result. Assume that ^init - ^0 _1 = o_p(1). Then we have ∑_j = 1^p ℙ( |W_j - w_j| > C √(log p/n)) → 0. Next we turn to verification of Condition <ref>. Suppose that the features X_j's and the errors e_j's are sub-Gaussian satisfying X_j _ψ_2≤ϕ and e_j _ψ_2≤ϕ for some constant ϕ > 0 and with probability 1 - O( p^-c), _j - _j _1 ≤ C n^-1/2 a_n,1,  [, ]_-j (_j - _j ) _2^2 ≤ C a_n,2 n^-1 [, ]^⊤_j _max≤ C,  ^init - _1 ≤ C s √(log p/n), where a_n,1 and a_n,2 are two possibly diverging sequences. Moreover, |σ^jk|/√(σ^jjσ^kk) < c for some constant 0< c < 1. If (log p)^1/γ + 1/2 [ n^1/2 b_n + n^-1/2 s log p ] → 0, then we have (<ref>) in Condition <ref> is satisfied. Here we assume both the features X_j and the errors e_j are sub-Gaussian for simpler presentation. In fact, if we assume X_j_ψ_2≤ϕ and sparsity of correlations between features, that is, _j_0 ≤ s = o(n). In addition, suppose the coefficients γ_j_max≤ c are bounded. Then the errors e_j is also sub-Gaussian with e_j _ψ_2≤ c s ϕ. (This results follows from the fact: If X_1, X_2 are random variables such that X_i is b_i-sub-Gaussian, then X_1 + X_2 is (b_1 + b_2)-sub-Gaussian). § KNOCKOFF VARIABLE COUPLING In this section, we present three specific constructions for the coupled perfect knockoff variables and verify that they satisfy Condition <ref> with the desired convergence rate. §.§ Knockoffs for multivariate t-distribution In this example, we will construct knockoffs for multivariate t-distributed features by leveraging only information of the first two moments; the knowledge of the t-distribution will not be utilized in the approximate knockoffs construction. Assume that the underlying true feature distribution for X = (X_1, …, X_p)^⊤ is the multivariate centered t-distribution t_ν (0, ^-1) with unknown parameters ν and ^-1. We construct the approximate knockoff variables from the Gaussian distribution with the attempt to match the first two moments of feature vector X. It is seen that the working distribution F̂ is misspecified. It has been a common practice to use the multivariate Gaussian distribution to construct knockoff variables in practice; see, e.g., <cit.>. Assume that there is an effective estimator constructed using data matrix for the precision matrix := [(X)]^-1 = ν - 2/ν. We construct the approximate knockoffs data matrix from the Gaussian distribution as = (I_p - r ) + (2 r I_p - r^2 )^1/2, where r is a constant such that 2 r I_p - r^2 is positive definite, and ∈ℝ^n × p is independent of (, ) and consists of i.i.d. standard normal entries. Before suggesting our coupled perfect knockoff variables, it is necessary to review some properties of the multivariate t-distribution. Note that an alternative representation of X is given by X = η/√(Q / ν), where ν > 0 is the degrees of freedom, ηd∼ N( 0, ^-1), Q d∼χ_ν^2, and η Q. Here, χ_ν^2 is the chi-square distribution with ν degrees of freedom. When ν is large, the distribution of X is close to the Gaussian distribution N( 0, ^-1). We are ready to introduce our construction of the coupled perfect knockoff variable matrix = (I_p - r ) + (1/√( /ν) ) (2 r I_p - r^2 )^1/2, where (1/√( /ν)) = ( 1/√(Q_1 / ν),1/√(Q_2 / ν), …, 1/√(Q_n / ν)) with {Q_i}_i = 1^n i.i.d. random variables sampled from the conditional distribution Q|X. Let η = (η_1, …, η_n) be sampled from the conditional distribution η | X, and r and the identical realizations to those used in (<ref>). 
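In code, the approximate construction above is the familiar second-order (Gaussian) knockoff recipe with a common equicorrelation parameter r. The sketch below (assuming numpy and scipy; the function name and the explicit positive-definiteness check are ours) returns both the approximate knockoff matrix and the Gaussian noise matrix, since the coupled perfect knockoffs reuse exactly the same noise and the same r, replacing the estimated precision matrix by the true one and rescaling the noise rows by 1/sqrt(Q_i/nu).

```python
import numpy as np
from scipy.linalg import sqrtm

def second_order_knockoffs(X, Omega_hat, r, rng=None):
    # Xhat = X (I - r * Omega_hat) + Z (2 r I - r^2 * Omega_hat)^{1/2},
    # with Z an n x p matrix of i.i.d. N(0, 1) entries independent of (X, y).
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    C = 2 * r * np.eye(p) - r ** 2 * Omega_hat
    if np.linalg.eigvalsh(C).min() <= 0:
        raise ValueError("2 r I - r^2 Omega_hat must be positive definite; shrink r")
    Z = rng.standard_normal((n, p))
    X_hat = X @ (np.eye(p) - r * Omega_hat) + Z @ np.real(sqrtm(C))
    return X_hat, Z
```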
By construction, we can see that (, ) = (1/√( /ν)) (η, η(I_p - r ) + (2 r I_p - r^2 )^1/2) d=(1/√( /ν)) ( η, η ), where (η, η) have i.i.d. rows that follow a common Gaussian distribution N( 0, ^) with ^ = [ ^-1 ^-1 - r I_p; ^-1 - r I_p ^-1 ]. Thus, this verifies that forms a perfect knockoff data matrix for . The proposition below verifies that the coupling assumption in Condition <ref> holds. Assume that C_l≤^-1_2 ≤ C_u and (2 r I_p - r^2 )^-1_2≤ C_u for some constants C_u> 0 and C_l > 0. Assume further that and are both sparse in the sense that max_1 ≤ j ≤ p (_j _0 + _j _0 ) ≤ρ_n almost surely with ρ_n √(log p/n)→ 0 and ρ_n ν^-1/2→ 0, and that there exists a constant C > 0 such that ℙ ( - _2 ≥ C ρ_n (n^-1log p)^1/2) → 0. Then as ν≥ 9 and log p = o(n^1 - 4/ν), we have that for some constant C > 0, ℙ(max_1≤ j ≤ p n^-1_j - _j _2 ≤ C ( ρ_n (n^-1log p)^1/2 + ν^-1/2) ) → 1. The assumed convergence rate of ρ_n (n^-1log p)^1/2 for precision matrix estimation in (<ref>) has been verified in many existing works (e.g., <cit.>, <cit.>, and <cit.>) under the sparsity assumption. Proposition <ref> above indicates that the knockoffs procedure can potentially achieve the asymptotic FDR control even when the working distribution is misspecified but with the first two moments matched. We next compare our results to those in <cit.>. For simplicity, let us further assume that = I_p and is known. Then X d∼ t_ν ( 0, I_p) and the constructed approximate knockoff variables X̂d∼ N( 0, I_p). We set r = 1 in (<ref>) and (<ref>) when constructing the approximate and perfect knockoff matrices, hence the augmented covariance matrix in (<ref>) is given by ^ = I_2p. In such case, Proposition <ref> guarantees that ℙ( max_1 ≤ j ≤ p n^-1_j - _j _2 ≤ C ν^-1/2) → 1. This implies that Condition <ref> is satisfied with Δ_n = C ν^-1/2. Observe that X_j = Z_j/√(𝒳_ν^2 / ν) with Z_j d∼ N(0, 1) and the denominator satisfies that for an absolute constant C > 0 and ν≫log (np), ℙ( |𝒳_ν^2 / ν - 1 | ≥ C √(log (np )/ν)) = O( (np)^- C^2 / 8 ). These indicate that the multivariate t-distribution is asymptotically close to the standard Gaussian distribution when ν≫log (np). Thus, under Conditions <ref>–<ref> and <ref> for the setting of the linear model, if we construct the knockoff statistics as the regression coefficient difference from the debiased Lasso, we can prove using similar technical analysis as for Theorem <ref> that lim sup_n →∞≤ q, when ν^1/2≫ s (log p)^1 + 1/γ and s (log p)^3/2 + 1/γ/√(n)→ 0 for some 0 < γ < 1. <cit.> also derived an upper bound on the FDR inflation. Directly applying their result and calculating the KL divergence in their upper bound under the specific model setting stated above, we can obtain the lemma below. By applying Theorem 1 in <cit.>, it requires at least ν^2 ≫ n min(n, p) for lim sup_n →∞≤ q. The intuition behind Lemma <ref> above is that Theorem 1 in <cit.> requires the empirical KL divergence max_j∈ℋ_0K̂L̂_j converging to zero in probability, where K̂L̂_j = ∑_i = 1^n [ _i, j ^2/2 - ν + p/2log(1 + _i, j ^2/ν + _i, -j_2^2) - (_i, j^2/2 - ν + p/2log(1 + _i, j^2/ν + _i, -j_2^2) )]. Here, = (_i, j ) ∈ℝ^n × p consists of i.i.d. rows sampled from t_ν ( 0, I_p), while = (_i, j)∈ℝ^n × p consists of i.i.d. rows sampled from N( 0, I_p). As shown in the proof of Lemma <ref> in Section <ref> of the Supplementary Material, K̂L̂_j is a sum of i.i.d. random variables with positive mean of order O(p/ν (ν + p)). 
Hence, K̂L̂_j is concentrated at O(n p/ν (ν + p)) and to ensure that K̂L̂_j d→ 0, we need at least np/ν (ν + p)→ 0, or equivalently, ν^2 ≫ n min(n, p). Such condition is stronger than our requirement ν^1/2≫ s (log p)^1 + 1/γ derived from the coupling technique when s = o(√(n)) and p ≥ n. §.§ Gaussian knockoffs We now study the commonly used example of Gaussian knockoffs with the correctly specified distribution family. Assume that feature vector X = (X_1, …, X_p)^⊤d∼ N(0, ^-1) with unknown precision matrix , and we have an effective estimator for the precision matrix . Then the approximate knockoff variable matrix can be constructed as = (I_p - r ) + (2 r I_p - r^2 )^1/2, where r > 0 is some constant such that 2 r I_p - r^2 is positive definite, and = (_i, j) ∈ℝ^n × p is independent of (, ) with independent entries Z_i,jd∼ N(0, 1). Note that the approximate knockoff variable matrix above uses the correctly specified distribution family for (i.e., the Gaussian distribution). We couple the approximate knockoff variable matrix with the perfect knockoff variable matrix = (I_p - r ) + (2 r I_p - r^2 )^1/2, where importantly, and r are exactly the same as those used in constructing . We present the result below regarding the accuracy of the approximate knockoff variables. Assume that C_l≤^-1_2 ≤ C_u and (2 r I_p - r^2 )^-1_2 ≤ C_u for some constants C_u> 0 and C_l > 0. Assume further that precision matrix and its estimator are both sparse in the sense that max_1 ≤ j ≤ p (_j _0 + _j _0) ≤ρ_n almost surely with ρ_n √(log p/n)→ 0, and that there exists a constant C > 0 such that ℙ( - _2 ≥ C ρ_n √(log p/n)) → 0. Then we have that for some constant C > 0, ℙ(max_1≤ j ≤ p n^-1/2_j - _j _2 ≤ C ρ_n √(log p/n))→ 1. Proposition <ref> above implies that Condition <ref> is satisfied with coupling accuracy Δ_n = C ρ_n √(log p/n), where ρ_n represents the sparsity level of ^-1 and its estimator. We again consider the linear model setting and construct the knockoff statistics as the regression coefficient difference from the debiased Lasso. Then it follows from Theorem <ref> that under Conditions <ref>–<ref> and <ref>, we have lim sup_n →∞≤ q provided that s ρ_n (log p )^3/2 + 1/γ = o(√(n)) for some 0 < γ < 1. Our technical analyses do not require data splitting or an independent pretraining sample. The results in <cit.> require an independent unlabeled pretraining data set with sample size N to estimate the unknown precision matrix. Specific to the model setting considered in this section, their results indicate that lim sup_n →∞≤ q when N≫ n ρ_n (log p )^2. This again shows some potential advantage of our coupling technique in the robustness analyses. §.§ Nonparanormal knockoffs We further investigate a much more general distribution family, that is, the Gaussian copula distributions. Assume that X = (X_1, …, X_p)^⊤ has marginal distributions X_j d∼ F_j(·) and satisfies that (Φ^-1 (F_1(X_1)), …, Φ^-1(F_p(X_p)))^⊤d∼ N( 0, ^-1), where the diagonal entries of ^-1 are all one. Further assume that we have effective estimators F̂_j for F_j and for . Define = (_i, j) ∈ℝ^n× p with _i,j = Φ^-1 (F̂_j (_i, j) ) and = (_i, j) ∈ℝ^n× p with _i,j = Φ^-1 (F_j (_i, j) ). Let = (_i, j) ∈ℝ^n × p be given by = (I_p - r ) + (2 r I_p - r^2 )^1/2, where r > 0 is some constant such that 2 r I_p - r^2 is positive definite, and = (_i, j) ∈ℝ^n × p is independent of (, ) with i.i.d. entries Z_i,jd∼ N(0, 1). We construct the approximate knockoff variable matrix as = (_i, j) with _i, j = F̂_j^-1 (Φ(_i, j)). 
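For concreteness, the approximate nonparanormal construction just described can be sketched as follows (assuming numpy and scipy; using rescaled empirical CDFs and empirical quantiles as the estimators of F_j, together with the clipping to [1/(2n), 1 - 1/(2n)], is our own illustrative choice). Here `Omega_hat` denotes an estimate of the precision matrix of the Gaussianized data.

```python
import numpy as np
from scipy.stats import norm
from scipy.linalg import sqrtm

def nonparanormal_knockoffs(X, Omega_hat, r, rng=None):
    # (i) push each column through Phi^{-1}(Fhat_j(.)) with a rescaled empirical CDF,
    # (ii) build Gaussian knockoffs on the transformed scale,
    # (iii) map back through Fhat_j^{-1}(Phi(.)) via empirical quantiles.
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1        # column-wise ranks
    U = np.clip(ranks / n, 1 / (2 * n), 1 - 1 / (2 * n))         # Fhat_j(X_ij)
    W = norm.ppf(U)                                              # Gaussianized data
    C = 2 * r * np.eye(p) - r ** 2 * Omega_hat                   # must be positive definite
    E = rng.standard_normal((n, p))
    W_knock = W @ (np.eye(p) - r * Omega_hat) + E @ np.real(sqrtm(C))
    X_knock = np.empty_like(X, dtype=float)
    for j in range(p):                                           # Fhat_j^{-1}(Phi(.))
        X_knock[:, j] = np.quantile(X[:, j], np.clip(norm.cdf(W_knock[:, j]), 0, 1))
    return X_knock
```

The coupled perfect counterpart introduced next differs only in replacing the estimated marginals and precision matrix by their true counterparts, while reusing the identical Gaussian noise matrix and the same r.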
It is seen that this example also uses the correctly specified distribution family for X, i.e., the Gaussian copula. We suggest to construct the coupled perfect knockoff variable matrix as = (_ij) with _i,j = F_j^-1 (Φ(_i,j)), where _i,j represents the (i,j)th entry of matrix = (I_p - r ) + (2 r I_p - r^2 )^1/2 with and r identical in values to the ones used in (<ref>). The proposition below characterizes the coupling rate between and . Assume that (<ref>) is satisfied and both and are sparse in the sense that max_1≤ j ≤ p (_j _0 +_j _0) ≤ρ_n with p ρ_n = o(n /(log n)^3) almost surely. Assume further that for 1 ≤ j ≤ p, the distribution estimators satisfy 1/2n≤F̂_j (x) ≤ 1 - 1/2n for each x ∈(X_j), (X_j) ⊂ [-b, b] for some constant b >0, and there exists a constant M > 0 such that ℙ( max_1 ≤ j ≤ psup_ x ∈ [ 2M n^-1log n, 1 - 2M n^-1log n] | F̂_j^-1 (x) - F_j^-1 (x) | ≥ (M n^-1 log n )^1/2) → 0, ℙ( max_1 ≤ j ≤ psup_x ∈ (F_j^-1(2M n^-1log n), F_j^-1 (1 - 2M n^-1log n) ) | F̂_j(x) - F_j(x) | / F_j(x) [1 - F_j(x)] ≥ (M n^-1 log n )^1/2) → 0, ℙ( max_1 ≤ j ≤ psup_ x, y ∈ (0, 1) | F̂_j^-1 (x) - F̂_j^-1 (y) | / |x - y| + ( n^-1 (log n) |x - y| )^1/2 + n^-1log n ≥ M ) → 0. Then we have ℙ(max_1≤ j ≤ p n^-1_j - _j _2 ≤ C ( ρ_n √(log p/n) + √( p ρ_n (log n)^3/n)) ) → 1. When estimators {F̂_j}_j = 1^p are the empirical distribution functions and p = o(n), it can be shown that (<ref>), (<ref>), and (<ref>) can be satisfied when the density function f_X_j is uniformly bounded on the support. See, e.g., <cit.> for the estimation of nonparanormal distributions, and we opt not to discuss it here due to the space constraint. We also remark that the bounded support assumption of (X_j) ⊂ [-b, b] is to simplify the technical proofs and may be removed by applying the truncation technique and letting b slowly diverge with n. Since such technical relaxation is not the main focus of the current paper, we choose not to explore it here. § DISCUSSIONS We have investigated in this paper the robustness of the model-X knockoffs framework introduced in <cit.> by characterizing the feature selection performance of the approximate knockoffs (ARK) procedure, a popularly implemented version of the model-X knockoffs framework in practice. The approximate knockoffs procedure differs from the model-X knockoffs procedure in that it uses the misspecified or estimated feature distribution to generate the knockoff variables without the use of sample splitting. We have proved formally that the approximate knockoffs procedure can achieve the asymptotic FDR and FWER control as the sample size diverges in the high-dimensional setting. A key idea empowering our technical analysis is coupling, where we pair statistics in the approximate knockoffs procedure with those in the model-X knockoffs procedure so that they are close in realizations with high probability. The knockoff variable coupling has been investigated under some specific distribution assumptions in the current work. An interesting future study is to investigate the coupling idea under broader class of or even general feature distributions. chicago Supplementary Material to “ARK: Robust Knockoffs Inference with Coupling” Yingying Fan, Lan Gao and Jinchi Lv This Supplementary Material contains the proofs of Theorems <ref>–<ref>, Propositions <ref>–<ref>, and some key technical lemmas. All the notation is the same as defined in the main body of the paper. 
§ PROOFS OF THEOREMS <REF>–<REF> AND PROPOSITIONS <REF>–<REF> §.§ Proof of Theorem <ref> It has been shown in <cit.> that the model-X knockoffs inference procedure achieves the exact FDR control when the perfect knockoff statistics are employed. Note that the approximate knockoff statistics {Ŵ_j } are expected to provide a reliable approximation to the perfect knockoff statistics {W_j }, as assumed in Condition <ref>. The main idea of the proof is to establish the FDR control for the approximate knockoffs inference procedure through a comparison of the approximate knockoff statistics and a certain realization of the perfect knockoff statistics. The two lemmas below provide a sketch of the proof and can be established under Conditions <ref>–<ref>. Assume that Conditions <ref>, <ref>, and <ref> are satisfied. When a_n →∞ and m_n / a_n → 0, we have that sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ] | ∑_j ∈ℋ_01 (Ŵ_j ≥ t) /∑_j ∈ℋ_0ℙ ( W_j ≥ t ) - 1 | = o_p(1), sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ] | ∑_j ∈ℋ_01 (Ŵ_j ≤ -t) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) - 1 | = o_p(1). sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ) | ∑_j ∈ℋ_01 (Ŵ_j ≥ t) /∑_j ∈ℋ_0ℙ ( W_j ≥ t ) - 1 | = o_p(1), sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ) | ∑_j ∈ℋ_01 (Ŵ_j ≤ - t) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) - 1 | = o_p(1) . Under Conditions <ref>–<ref>, we have that for some constant 0 < c_1 < 1, ℙ ( T ≤ G^-1 ( c_1 q a_n / p ) ) → 1. Under Condition <ref>, we have sup_t ∈ (0, M_n, p) ∑_j ∈ H_01 (t - b_n ≤W_j ≤ t + b_n) /∑_j ∈ H_0ℙ ( W_j ≥ t ) = o_p(1) We present the proofs of Lemmas <ref> and <ref> in Sections <ref> and <ref>, respectively. Now we are ready to prove Theorem <ref>. Let us define two events ℬ_1 = {T ≤ G^-1 ( c_1 q a_n/p ) } and ℬ_2, ϵ = {sup_t ∈ (0, G^-1 ( c_1 q a_n/p )](| ∑_j ∈ℋ_01 (Ŵ_j ≥ t )/∑_j ∈ℋ_0ℙ (W_j ≥ t ) - 1 | | ∑_j ∈ℋ_01 (Ŵ_j ≤ - t )/∑_j ∈ℋ_0ℙ (W_j ≤ - t ) - 1 | )≤ϵ} for ϵ > 0. Lemmas <ref> and <ref> above have shown that ℙ(ℬ_1^c ) → 0 and ℙ(ℬ_2, ϵ^c ) → 0 for each ϵ > 0. In addition, it holds naturally that 0 ≤≤ 1. Then it follows that ≤( ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) ) + ℙ(ℬ_1^c ) + ℙ(ℬ_2, ϵ^c ) = ( ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) ) + o(1) . In view of the definition of threshold T in (<ref>), we can deduce that ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) = ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ -T) ·∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1(ℬ_2, ϵ) ≤ q ·∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T ) ·1 (ℬ_1)1(ℬ_2, ϵ). Furthermore, it is easy to see that on event ℬ_1 ∩ℬ_2, ϵ, we have ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T ) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n/p )]∑_j ∈ℋ_01 ( Ŵ_j ≥ t ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - t ) ≤1 + ϵ/1 - ϵsup_t ∈ (0, G^-1 ( c_1 q a_n/p )]∑_j ∈ℋ_0ℙ ( W_j ≥ t ) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) = 1 + ϵ/1 - ϵ, where the last equation above is obtained by the symmetry of the perfect knockoff statistics {W_j }_j ∈ℋ_0 that ℙ ( W_j ≥ t ) = ℙ ( W_j ≤ - t ). Therefore, we can obtain that for each ϵ > 0, ≤ q ·1 + ϵ/1 - ϵ + o(1), which yields the desired result (<ref>). This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> We first define the corresponding threshold T_v for the perfect knockoff statistics {W_j}_j=1^p in the model-X knockoffs inference for the k-FWER control as T_v = sup{ t ∈𝒲: #{j: - W_j ≥ t} = v }, where v is defined as in (<ref>) and 𝒲 = { | W_1 |, …, | W_p | }. 
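In implementation terms, the threshold defined in the display above is nothing more than an order statistic: for tie-free statistics it is the vth largest element of {-W_j}, provided that value lies in the grid of |W_j| values. A minimal sketch (assuming numpy; the edge-case handling is deliberately simplified) is given below.

```python
import numpy as np

def kfwer_threshold(W, v):
    # T_v = sup{ t in {|W_j|} : #{j : -W_j >= t} = v }; for tie-free W this is the
    # v-th largest value of -W_j (edge cases with too few negatives are ignored here).
    neg_sorted = np.sort(-np.asarray(W))[::-1]
    return neg_sorted[v - 1]

# Selection: {j : W_j >= T_v}.  The proof compares this threshold, computed from the
# perfect statistics, with its analogue computed from the approximate statistics, and
# shows the two differ by at most b_n with high probability.
```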
As sketched in Lemmas <ref>–<ref> below, the main idea of the proof is to show that the threshold T_v based on the approximate knockoff statistics and the threshold T_v based on the perfect knockoff statistics are sufficiently close under Condition <ref> such that for any > 0, the number of W_j's that lie between T_v and T_v is at most v with asymptotic probability one, where v satisfies v / k → 1 as k →∞. Specifically, let M_v be the integer such that T_v + M_v≥T_v - 2 b_n > T_v + M_v +1. Then we can establish a bound for M_v as shown in Lemma <ref> below. We first present the three lemmas below that provide an outline of the proof. The proofs of Lemmas <ref>–<ref> are provided in Sections <ref>–<ref>. Under Condition <ref>, we have that ℙ ( | T_v - T_v | ≥ b_n) → 0. Assume that k →∞. Then we have that v / k = 1 + O ( k^ - 1 /2 ). Under all the conditions of Theorem <ref>, we have that sup_t ∈ (0, G^-1 (v/2p) )| ∑_j ∈ℋ_01 ( W_j ≥ t ) /∑_j ∈ℋ_0 ℙ ( W_j ≥ t ) - 1 | = o_p (ϵ_n) Under all the conditions of Theorem <ref>, we have that for each > 0, ℙ ( M_v ≤ v ) → 1. We are now ready to prove Theorem <ref>. It follows straightforwardly from Lemma <ref> that ℙ ( V̂≥ k(1 + 2 ) ) = ℙ( ∑_j ∈ℋ_01 ( Ŵ_j ≥T_v ) ≥ k(1 + 2) ) ≤ℙ( ∑_j ∈ℋ_01 ( W_j ≥T_v - 2 b_n ) ≥ k(1 + 2) ) ≤ℙ( ∑_ j ∈ℋ_01 ( W_j ≥T_v + M_v ) ≥ k (1 + 2) ) = ℙ( ∑_ j ∈ℋ_01 ( - W_j ≥T_v + M_v ) ≥ k (1 + 2) ) ≤ℙ( ∑_j ∈ℋ_0 1 ( - W_j ≥T_v) ≥ k(1 + 2 ) - M_v ), where the second last step above is because of the symmetry of W_j's with j∈ℋ_0 and the last step above is due to ∑_ j ∈ℋ_01 ( - W_j ≥T_v + M_v )-∑_ j ∈ℋ_01 ( - W_j ≥T_v )≤ M_v by the definitions of T_v and M_v. Moreover, Lemma <ref> above shows that M_v ≤ v with asymptotic probability one and Lemma <ref> above proves that v / k = 1 + o(1). Then it holds that 2 k > M_v with asymptotic probability one. Hence, combining the above results and by the union bound, we can deduce that ℙ ( V̂≥ k(1 + 2 ) ) ≤ℙ( ∑_j ∈ℋ_01( - W_j ≥T_v ) ≥ k ) + o(1) = q + o(1). Consequently, it follows that for each > 0, lim sup_n →∞ℙ ( V̂≥ k(1 + 2 ) ) ≤ q . This concludes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The main idea of the proof is to directly apply Theorem <ref> by verifying Conditions <ref>–<ref> involved. We will show in the lemmas below that Conditions <ref>–<ref> are satisfied for the marginal correlation knockoff statistics under Conditions <ref>–<ref> and the setting of nonparametric regression model (<ref>) with normal features. Proofs of Lemmas <ref>–<ref> are presented in Sections <ref>–<ref>. Assume that Condition <ref> is satisfied. Then we have that ℙ( max_1 ≤ j≤ p | Ŵ_j - W_j | ≥Δ_n ) → 0. Lemma <ref> above shows that Condition <ref> is satisfied with sequences b_n := Δ_n. Define w_j = ( Y^2)^-1/2 ( | (X_j Y)| - |(X_j Y)| ) for 1 ≤ j ≤ p. Note that w_j = 0 for j ∈ℋ_0 since (X_j, X_ℋ_1) d= (X_j, X_ℋ_1) for j ∈ℋ_0 by the exchangeability between X_j and X_j. Recall from the definition in (<ref>) that δ_n = √(log p/n)max_1 ≤ j ≤ p{ 16 √(2) X_j _ψ_2 Y _ψ_2/ ( Y^2)^1/2 8√(2) |w_j| Y _ψ_2^2 / Y^2 }. We have the concentration inequality below for W_j under the sub-Gaussian assumption in Condition <ref>. Assume that Condition <ref> is satisfied. When log p = o(n), we have that ∑_j = 1^p ℙ ( |W_j - w_j | ≥δ_n ) ≤ 6 p^-1 + p exp{ - n ( Y^2)^2 /8 Y^4 }. 
Lemma <ref> above indicates that Condition <ref> related to the concentration rate of W_j is satisfied with δ_n defined in (<ref>) and that Δ_n ≤δ_n, where Δ_n is the approximation accuracy of the approximate knockoff statistics obtained in Lemma <ref>. In addition, from the definition of w_j, under Condition <ref> we have that the general Condition <ref> on the signal strength is also satisfied. Next we will turn to the verification of Conditions <ref>–<ref>. Assume that Condition <ref> is satisfied. Then we have that for each t ≥ 0, ( ∑_j ∈ℋ_01 (W_j ≥ t) ) / p_0 G (t) ≤ 2 m_n. Assume that Conditions <ref> and <ref> are satisfied. Then when (log p)^1/γ m_n / a_n → 0 and √(n)Δ_n (log p)^1/2 + 1/γ→ 0 for some constant 0< γ < 1, we have that (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - Δ_n ) - G(t + Δ_n) / G(t) → 0 and a_n^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + Δ_n ) → 0 as n →∞. Lemma <ref> above shows that Condition <ref> is satisfied, while Lemma <ref> above implies that Condition <ref> is satisfied. Finally, the conclusion of Theorem <ref> can be obtained by directly applying the general Theorem <ref>. This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The proof of Theorem <ref> is analogous to that of Theorem <ref> in Section <ref>. We omit the detailed proof here to avoid redundancy. §.§ Proof of Theorem <ref> The main idea of the proof is to directly apply Theorem <ref> by verifying Conditions <ref>–<ref> for the knockoff statistics constructed from the debiased Lasso coefficients. A key observation is that the debiased Lasso coefficients are asymptotically normal. Denote by τ_j = _j _2 / |_j^⊤^_j|. The debiased Lasso coefficient can be written as √(n) ( β_j - β_j^ ) = _j^⊤/_j _2 ·√(n)τ_j + ∑_k ≠ j√(n)_j^⊤^_k (β_k^ - β_k^)/_j^⊤^_j . Observe that _j^⊤/_j _2 ∼ N(0, σ^2), √(n)τ_j =O_p(1), and the remainder term in (<ref>) above is of order o_p(1). Thus, the debiased Lasso estimator is asymptotically normal in the sense that τ_j^-1 (β_j - β_j^) d→ N(0, σ^2). Our proof will build mainly on such intuition. Throughout the proof below, constant C may take different values from line to line. We first present two lemmas below about the consistency of Lasso estimators ^ and _j. We omit the proofs of Lemmas <ref> and <ref> here to avoid redundancy since they are well-known results for the consistency of Lasso estimators in the literature. Under Conditions <ref>–<ref>, we have that with probability 1 - O(p^-3), ^ - ^_1 ≤ C s √(log p/n), ^ - ^_2 ≤ C √( slog p/n) ^ (^ - ^ ) _2 ≤ C √( slog p) . Under Conditions <ref>–<ref>, we have that with probability 1 - O(p^-3), max_1 ≤ j ≤ 2p _j - _j _1 ≤ C m_n √(log p/n) , max_1 ≤ j ≤ 2p _j - _j _2 ≤ C √( m_nlog p/n), max_1 ≤ j ≤ 2p _-j^ (_j - _j) _2 ≤ C √( m_n log p). In addition, when m_n log p/n→ 0 we have that with probability 1 - O(p^-3), |√(n)τ_j - ( e_j^2)^-1/2 | ≤ C √(m_n log p/n), |_j^⊤_l - (e_j, e_l) | ≤ C √(m_n log p/n). The four lemmas below outline the proof for verifying the general Conditions <ref>–<ref>. Proofs of Lemma <ref>–<ref> are provided in Sections <ref>–<ref>. Assume that Conditions <ref> and <ref>–<ref> are satisfied. Then as Δ_n s^1/2→ 0 and √(s log p/n)→ 0, we have that ℙ( max_1 ≤ j ≤ 2p | β_j - β̂_j | ≥ C Δ_n s √(log p/n)) → 0. Lemma <ref> above indicates that Condition <ref> is satisfied with sequences b_n := C Δ_n s √(log p/n). Let us define w_j = |β_j|. Assume that Conditions <ref>–<ref> are satisfied. 
Then as s √(m_n log p/n)→ 0, we have that for some C > 0, ∑_j= 1^p ℙ (|W_j - w_j | ≥ C √(n^-1log p) ) → 0. Lemma <ref> above shows that Condition <ref> related to the concentration rate of W_j is satisfied with δ_n = C √(n^-1log p). In addition, it holds that b_n ≪ C √(n^-1log p) due to the assumption Δ_n s → 0 in Theorem <ref>. In addition, in light of the definition of w_j, under Condition <ref> we have that the general Condition <ref> on the signal strength is also satisfied. We next turn to the verification of Conditions <ref>–<ref>. Assume that Conditions <ref>–<ref> are satisfied. Then as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, we have that ( ∑_j ∈ℋ_01 (W_j > t) ) ≤ V_1 (t) + V_2 (t), where for some 0 < γ < 1 and 0< c_1 < 1, (log p )^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_1 (t) / [p_0 G (t)]^2 → 0 and sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_2(t) / p_0 G (t) ≲ m_n. Assume that Conditions <ref> and <ref>–<ref> are satisfied. Then when m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0 and Δ_n s (log p)^1 + 1 /γ→ 0, we have that (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - b_n ) - G(t + b_n) / G(t) → 0 and a_n^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + b_n ) → 0 as n →∞. Lemma <ref> above shows that Condition <ref> is satisfied, whereas Lemma <ref> implies that Condition <ref> is satisfied. Finally, the conclusion of Theorem <ref> can be derived by directly applying the general Theorem <ref>. This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The proof of Theorem <ref> is similar to that of Theorem <ref> in Section <ref>. Hence we omit the detailed proof here to avoid redundancy. §.§ Proof of Proposition <ref> From the definitions in (<ref>) and (<ref>), we see that - = r + + ( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) , where = -, =(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2, and = (2 r I_p - r^2 )^1/2. In view of assumption (<ref>) and the fact that := [(X)]^-1 = ν - 2/ν, it follows from the triangle inequality that with probability 1 - o(1), - _2 ≤ - _2 + - _2 = - _2 + 2 ν^-1_2 ≤ C ρ_n √(log p/n) + 2 ν^-1 C_l^-1. Now we deal with the three terms on the right-hand side of (<ref>) above separately. First, for the second term above, an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≤ 3 _2^2 /2 ≤ C - _2^2 ≤ C ( ρ_n^2 log p/n + ν^-2). Regarding the first term on the right-hand side of (<ref>) above, observe that (_i, j , _i, l ) d= (η_i, j/√(Q_i / ν), η_i, l/√(Q_i / ν)), where (η_i, 1, …, η_i, p) d∼ N( 0, ^-1) and {Q_i}_i = 1^n are independent and identically distributed (i.i.d.) chi-square random variables with ν degrees of freedom. It holds that for some large constant C_1 > 0, ℙ( n^-1 ^⊤ - ^-1_max≥ C_1 √(log p/n) + ν^-1/2) = ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n η_i, jη_i, l/Q_i / ν - (η_i, jη_i, l) ( ν/Q_i ) | ≥ C_1 √(log p/n)+ ν^-1/2) ≤ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n)) +ℙ(max_1 ≤ j, l ≤ p|n^-1∑_i = 1^n (η_i, jη_i, l) ( ν/Q_i - ( ν/Q_i ) )| ≥ν^-1/2). Before showing the bounds for the two probabilities on the right-hand side of the expression above, we first present some basic results for chi-square random variables. Note that from the property of the chi-square distribution, we have through some immediate calculations that ( ν^2/Q_i^2) = ν^2/(ν - 2) (ν - 4), ( ν/Q_i ) = ν^2 /(ν - 2)(ν - 4) - (ν/ν - 2)^2 = O(ν^-1), ( ν^2/Q_i^2 ) = ν^4/(ν - 2)(ν - 4)(ν - 6)(ν - 8) - (ν^2 /(ν - 2)(ν - 4))^2 = O (ν^-1). 
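These inverse chi-square moments are elementary but easy to misstate; a quick Monte Carlo sanity check (a sketch with an arbitrary degrees-of-freedom value and sample size, assuming numpy) reproduces them to within simulation error.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 20.0                      # any nu > 8 so the displayed moments exist
Q = rng.chisquare(nu, size=2_000_000)

print(np.mean((nu / Q) ** 2), nu**2 / ((nu - 2) * (nu - 4)))                  # E[nu^2/Q^2]
print(np.var(nu / Q), nu**2 / ((nu - 2) * (nu - 4)) - (nu / (nu - 2)) ** 2)   # Var(nu/Q)
print(np.var((nu / Q) ** 2),
      nu**4 / ((nu - 2) * (nu - 4) * (nu - 6) * (nu - 8))
      - (nu**2 / ((nu - 2) * (nu - 4))) ** 2)                                 # Var(nu^2/Q^2)
```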
Thus, noting that ( ν^2/Q_i^2) + ν^-1/2 = ν^2/(ν - 2) (ν - 4) + ν^-1/2≤ 3 and ( ν^2/Q_i^2) - ν^-1/2≥ 2/3 when ν≥ 9, an application of the Markov inequality leads to ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≥ 3 ) + ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≤ 2/3 ) ≤ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≥( ν^2/Q_i^2) + ν^-1/2) + ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≤( ν^2/Q_i^2) - ν^-1/2) ≤ν n^-1( ν^2/Q_i^2 ) = O(n^-1 ) → 0. In addition, noting that e^-x/2 ≤ 1 and Stirling's formula for the gamma function Γ(x) = √(2 π / x) (x/ e)^x (1 + O(x^-1)) for x ≥ 0, we have through applying the density function of the chi-square distribution that for each constant C > 0, ℙ( max_1 ≤ i ≤ n ν/Q_i≥ C √(n/log p)) ≤ n ∫_0^C^-1ν√(log p/n)x^ν / 2 - 1 e^- x / 2/ 2^ν / 2Γ(ν / 2) dx ≤ 2 n (C^-1ν√(log p/n))^ν / 2/ν 2^ν / 2Γ(ν / 2) ≲ n ( C^-2log p/n)^ν / 4ν ^ν /2 /ν 2^ν /2√(4 π / ν) (ν / 2 e)^ν / 2 = ( C^-2 e^2 log p/n^1 - 4/ ν)^ν / 4 1/√(4 πν)→ 0 when log p = o(n^1 - 4/ν). Now we are ready to deal with the two probabilities on the right-hand side of (<ref>) above. Let us define two events 𝒟_1 = {max_1 ≤ i ≤ nν/Q_i≤ C_2 √(n/log p)} for a small constant C_2 > 0 and 𝒟_2 = {2/3 ≤ n^-1∑_i = 1^n ν^2/Q_i^2≤ 3 }. It follows from (<ref>) and (<ref>) that ℙ (𝒟_1^c) → 0 and ℙ (𝒟_2^c) → 0. For the first probability in (<ref>) above, since η_i, jη_i, l is a sub-exponential random variable and Q_i η_i, jη_i, l, we can obtain by applying the concentration inequality for the weighted sum of sub-exponential random variables (cf. Corollary 4.2 in <cit.>) that when C_1 is large enough and C_2 is small enough, ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n)) ≤ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n) , 𝒟_1 ∩𝒟_2 ) + ℙ (𝒟_1^c) + ℙ (𝒟_2^c) ≤ 2 p^2 exp{ - 3 log p } + o(1) → 0. Regarding the second probability in (<ref>), since max_1 ≤ j, l ≤ p | (η_i, jη_i, l) | ≤max_1 ≤ j ≤ p(η_i, j^2) ≤max_1 ≤ j ≤ p (^-1)_j, j≤ C_u, an application of the Markov inequality and (<ref>) yields that ℙ(max_1 ≤ j, l ≤ p|n^-1∑_i = 1^n (η_i, jη_i, l) ( ν/Q_i - ( ν/Q_i ) )| ≥ν^-1/2) ≤ℙ( |n^-1∑_i ( ν/Q_i - ( ν/Q_i ) )| ≥ C_u^-1ν^-1/2) ≤ C_u^-2ν n^-1 (ν/Q_i) = O(n^-1) → 0. By plugging (<ref>) and (<ref>) into (<ref>), we can show that with probability 1 - o(1), max_δ: δ_0 ≤ρ_n |δ^⊤ ( n^-1^⊤ - ^-1 ) δ |/δ_2^2 ≤ C ρ_n ( √(log p/n) + ν^-1/2) , which along with the fact ^-1_2 = ν/ν - 2^-1_2 ≤ν/ν - 2 C_u entails that as ρ_n = o(√(n / (log p))) and ρ_n = o(√(ν)), max_δ: δ_0 ≤ρ_nδ^⊤^⊤δ/ n δ_2^2 ≤C for some constant C > 0. Using (<ref>) and the sparsity assumption that max_1 ≤ j ≤ p_j_0 + _n_0 ≤ρ_n, an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1_j _2^2 = n^-1_j^⊤ ^⊤_j ≤ C max_1 ≤ j ≤ p_j ^2_2 = C - _2^2 ≤ C( ρ_n^2 log p/n + ν^-2). We now proceed with examining the third term on the right-hand side of (<ref>) above. Observe that _j d∼ N( 0, _j _2^2 I_n) and max_1 ≤ j ≤ p_j _2 ≤_2 ≤ 2r. Hence, it holds for some large constant C_3 > 0 that ℙ( max_1 ≤ j ≤ p n^-1( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) _j _2^2 ≥ C_3 ν^-1) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 _j ^2 Z_i^2 ≥ C_3 ν^-1) ≤ℙ( n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 Z_i^2 ≥ C_3 ν^-1 / 4r^2 ) , where {Z_i }_i = 1^n are i.i.d. standard normal random variables that are independent of and {Q_i }_i = 1^n. 
Similar to the calculations in (<ref>) and (<ref>), we can deduce that [ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] = (Z_i^2) [ (1 - 1/√(Q_i / ν))^2 ] = 1 - ( 2/√(Q_i / ν)) + ( 1/ Q_i / ν) = 1 - √(2 ν)Γ(ν - 1/2)/Γ(ν/2) + ν/ν - 2 and [ (1 - 1/√(Q_i / ν))^4 Z_i^4 ] = 3 ( 1 - 2 √(2)νΓ(ν - 1/2)/Γ(ν/2) + 6 (ν - 2)/ν - √(2)ν^3/2Γ(ν - 3/2)/Γ(ν/2) + ν^2/(ν - 2) (ν - 4)) By applying the asymptotic series of the gamma function Γ(x + 1/2 )/Γ(x) = √(x)(1 - 1/8 x + O(x^-2) ), we can obtain through some direct calculations that [ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] = O(ν^-1) and [ (1 - 1/√(Q_i / ν))^4 Z_i^4 ] = O(ν^-2). Combining (<ref>) and (<ref>) and applying the Markov inequality, we have that for some large enough constant C_3 > 0, ℙ( max_1 ≤ j ≤ p n^-1( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) _j _2^2 ≥ C_3 ν^-1) ≤ℙ( n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 Z_i^2 -[ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] ≥ C_3 (ν^-1 ) / 4r^2 - O(ν^-1) ) ≤ C ν^-2 n^-1( (1 - 1/√(Q_i / ν))^2 Z_i^2 ) ≤ C ν^-2 n^-1( ( (1 - 1/√(Q_i / ν))^4 Z_i^4 ) ) = O (n^-1) → 0. Therefore, a combination of (<ref>), (<ref>), (<ref>), and (<ref>) yields the desired conclusion in (<ref>). This concludes the proof of Proposition <ref>. §.§ Proof of Proposition <ref> It follows from (<ref>) and (<ref>) that - = r + , where = - and =(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2. By the Gaussianity of X, we see that X_j X_l is a sub-exponential random variable and thus for 0< u < C, ℙ ( | n^-1_j^⊤_l - (X_j X_l) | ≥ u ) ≤ 2 exp{ - C n u^2 }. Then we can obtain that ℙ( max_1 ≤ j ≤ p, 1 ≤ l ≤ p | n^-1_j_l - (X_j X_l) | ≥ C √(log p/n)) = o(1). Consequently, with probability 1 - o(1) it holds that max_δ: δ_0 ≤ρ_n |δ^⊤ ( n^-1^⊤ - ^-1 ) δ |/δ_2^2 ≤ C ρ_n √(log p/n), which combined with the assumption that ^-1_2 ≤ C_u leads to max_δ: δ_0 ≤ρ_nδ^⊤^⊤δ/ n δ_2^2 ≤ C_u + C ρ_n √(log p/n)≤C for some constant C > 0. Since _j _0 = ( - )_j_0 ≤ C ρ_n because of the sparsity of and , it follows from (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j_2^2 = max_1 ≤ j ≤ p n^-1_j_2^2 ≤max_1 ≤ j ≤ pC_j _2^2 = max_1 ≤ j ≤ pC (- )_j _2^2 ≤max_1 ≤ j ≤ pC- _2^2 ≤Cρ_n^2 log p/n, where we have used the accuracy assumption in (<ref>). Next we proceed with analyzing term . Observe that given , has i.i.d. standard normal components and is independent of , and hence _j|_j d∼ N( 0, _j_2^2 I_n). It holds that _j|_j d= (Z_1 _j_2, …, Z_n _j _2) with { Z_i }_i = 1^n i.i.d. standard normal random variables. Then we can deduce that ℙ ( max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≥ 3_2^2 /2 | ) = ℙ ( max_1 ≤ j ≤ n^-1_j _2^2 ≥ 3_2^2 /2 |) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n Z_i^2 _j _2^2 ≥ 3 _2^2 /2 |) ≤ℙ( n^-1∑_i = 1^n Z_i^2 _2^2 ≥ 2 _2^2 |) = ℙ( n^-1∑_i = 1^n Z_i^2 ≥ 3/2 ) ≤ e^- n / 32→ 0 as n→∞, where we have used the fact that max_1 ≤ j ≤ p_j _2 ≤_2 and the concentration inequality for chi-square random variables that for 0 < t < 1, ℙ( | n^-1∑_i = 1^n Z_i^2 - 1 | ≥ t ) ≤ 2 e^- n t^2 / 8. Now we aim to bound _2. For two square matrices A and B, it holds that A^1/2 - B^1/2_2 = A^1/2 (B - A) B^-1 + (A^3/2 - B^3/2) B^-1_2 ≤A^1/2 (B - A) B^-1_2 +3 max{ A _2^1/2, B _2^1/2}A - B _2B^-1_2. Applying the above inequality to leads to _2 ≤ 2 r I_p - r^2 _2^1/2· r^2 - _2 · 2 r I_p - r^2 ^-1 + 3 max{ 2 r I_p - r^2 _2^1/2, 2 r I_p - r^2 _2^1/2}· r^2 - _2· 2 r I_p - r^2 ^-1 ≤ C - _2. Thus, from (<ref>) and assumption (<ref>), we can obtain that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≤ 3 _2^2 /2 ≤ C - _2^2 ≤ C ρ_n^2 log p/n. Note that _j - _j _2 ≤ r _j _2 + _j _2. 
Therefore, in view of (<ref>) and (<ref>) we can show that for some constant C > 0, ℙ(n^-1/2_j - _j _2 ≤ C ρ_n √(log p/n)) → 1. This completes the proof of Proposition <ref>. §.§ Proof of Proposition <ref> In light of the definitions of and , we can obtain through the triangle inequality that n^-1/2max_1 ≤ j ≤ p_j - _j _2 ≤max_1 ≤ j ≤ p n^-1/2( ∑_i = 1^n [F̂_j^-1(Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 )^1/2 + max_1 ≤ j ≤ p n^-1/2( ∑_i = 1^n [F̂_j^-1(Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 )^1/2. We claim that ℙ(max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 ≥C( ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) → 0, ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 ≥2 M p (log n)^2 /n) → 0, which together with (<ref>) yields the desired conclusion of Proposition <ref>. It remains to establish (<ref>) and (<ref>). We will begin with the proof of (<ref>). Proof of (<ref>). From assumption (<ref>) and the observation that log n/n^2≪p ρ_n (log n)^3/n, it holds that for some large constant C > 0, ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 ≥ C (ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) ≤ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [ | Φ(_i, j) - Φ(_i, j)|^2 + (log n)^2 n^-2 + n^-1 (log n )|Φ(_i, j) - Φ(_i, j)| ] ≥ C (ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) + ℙ( max_1 ≤ j ≤ psup_ x, y ∈ (0, 1) | F̂_j^-1 (x) - F̂_j^-1 (y) | / |x - y| + ( n^-1 (log n) |x - y| )^1/2 + n^-1log n ≥ M ) ≤ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [ | Φ(_i, j) - Φ(_i, j)|^2 + n^-1 (log n) |Φ(_i, j) - Φ(_i, j)| ] ≥C(ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) + o(1) : = P_1 + o(1). We next bound term P_1 above. Using the fact that |Φ(x) - Φ(y)| ≤1/√(2 π) | x - y | and the basic inequality ∑_i = 1^n |a_n| ≤√(n) (∑_i = 1^n a_n^2)^1/2, we have that P_1 ≤ℙ( max_1 ≤ j ≤ p( n^-1_j - _ j_2^2 + (log n) n^-3/2_j - _ j_2 ) ≥C(ρ_n^2 log p/n + p ρ_n (log n)^3/n) ). It suffices to consider the bound of max_1 ≤ j ≤ p n^-1_j - _ j_2^2. With the aid of the triangle inequality and the definitions of and , it follows that max_1 ≤ j ≤ p n^-1_j - _ j_2^2 ≤ 3 max_1 ≤ j ≤ p n^-1 ( - ) (I_p - r )_j _2^2 + 3 r^2 max_1 ≤ j ≤ p n^-1 ( _j - _j ) _2^2 + 3 max_1 ≤ j ≤ p n^-1 [(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2 ] _2^2. We will investigate the three terms in the upper bound above separately. Regarding the third term above, under the assumption in (<ref>) it has been shown in (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 [(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2 ] _2^2 ≤ C ρ_n^2 log p/n . As for the second term in the upper bound in (<ref>), noting that the rows of are i.i.d. and follow the Gaussian distribution N( 0, ^-1), an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ( _j - _j ) _2^2 ≤ Cρ_n^2 log p/n . For the first term in the upper bound in (<ref>) above, noting that I_p - r)_j ≤ρ_n + 1 by the sparsity assumption that _j≤ρ_n, we have that max_1 ≤ j ≤ p n^-1 ( - ) (I_p - r )_j _2^2 ≤max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ×max_1 ≤ j ≤ p (I_p - r )_j _2^2. For the second term in the bound above, from the triangle inequality and inequality _j_2 ≤_2 for each matrix , it is easy to see that max_1 ≤ j ≤ p (I_p - r )_j _2 ≤ I_p - r _2 ≤ I_p - r _2 + r - _2. Thus it follows from assumption (<ref>) that for a constant C > 0, with probability 1 - o(1) we have max_1 ≤ j ≤ p (I_p - r )_j _2 ≤ C. 
Regarding the first term on the right-hand side of (<ref>) above, using the definitions of and , and inequality _2 ≤ d _max for each square matrix ∈ℝ^d × d, we can deduce that max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ≤ (ρ_n + 1) n^-1 ( - )^⊤ ( - ) _max ≤ (ρ_n + 1) max_1 ≤ j ≤ p n^-1∑_i = 1^n |_i, j - _i, j|^2 = (ρ_n + 1) max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 . Denote by H_j, n =[F_j^-1 (2M n^-1 log n), F_j^-1 (1 - 2M n^-1log n )] with constant M as given in assumption (<ref>). We can write that max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 = max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 1 (_i, j∈ H_j, n) + max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) := E_1 + E_2. Let us first consider term E_2 above. Observe that E_2 ≤max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) |^2 1 (_i, j∉ H_j, n) + max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n). For the first term in the bound above, notice that |Φ^-1 (F̂_j (_i, j)) |= O(√(log n )) due to the assumption that 1/2n≤ F_j(x) ≤ 1 - 1/2n for each x∈(X_j). Then it follows from the union bound, the Markov inequality, and the definition of H_j, n that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤∑_j = 1^p ℙ( n^-1log n ∑_i = 1^n 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤ n /p (log n)^2∑_j = 1^p ℙ (_i, j∉ H_j, n) = p n / p (log n )^2 · 4 M n^-1log n = 4 M (log n )^-1→ 0. As for the second term in the upper bound in (<ref>) above, an application of the Markov inequality and the fact that F_j(_i, j ) follows the standard uniform distribution gives that ℙ(max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤n/p (log n)^3∑_j = 1^p (| Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ) = 2 n/ (log n)^3∫_-∞^Φ^-1 (2Mlog n/n ) 1/√(2π) u^2 e^-u^2/2 du ≤2n/(log n)^3 |Φ^-1 (2Mlog n/n )| ∫_-∞^Φ^-1 (2Mlog n/n ) 1/√(2π) |u|^3 e^-u^2/2 du ≤ C n/(log n)^3 |Φ^-1 (2Mlog n/n ) | ·| Φ^-1 (2Mlog n/n)|^3 ·Φ(Φ^-1 (2Mlog n/n) ) ≤ C (log n)^-1→ 0, where in the last step above, we have used the facts that |Φ^-1 (M log n/n) | ≤ C √(log n), ∫ u^3 e^-u^2 / 2 du = - (u^2 + 2) e^-u^2/2, and e^-x^2/2/Φ(x) = O(|x|) for x < -2. Combining (<ref>), (<ref>), and (<ref>) yields that with probability 1 - o(1), E_2≤p (log n )^3/n. Next we proceed with studying term E_1. First, note that when |Φ^-1 (y )| > 2, it holds that [ Φ^-1 (y) ]' = 1/Φ'( Φ^-1 (y) )≤ C 1/(y (1 - y)) |Φ^-1 (y)| due to the fact that Φ'(x) /(1 - Φ(x) ) ≥ C x for x > 2 and Φ'(x) / Φ(x) ≥ C |x| for x < -2. When |Φ^-1(y)| ≤ 2, it is easy to see that [ Φ^-1 (y) ]' = 1/Φ'( Φ^-1 (y) )≤ C. Thus, combining the previous two results shows that for y ∈ℝ, [ Φ^-1 (y) ]' ≤C/(y (1 - y)) |Φ^-1 (y)| ≤C/(y (1 - y)) . Let us define an interval δ_j(x) = [F_j(x) - √(M [F_j(x) (1 - F_j(x)) ] log n/n), F_j(x) + √(M [F_j(x) (1 - F_j(x)) ]log n/n)]. Observe that under assumption (<ref>), we have that ℙ ( E_1≥ x) ≤ℙ( max_1 ≤ j ≤ p n^-1 (M log n/n) ∑_i = 1^n (sup_y ∈δ_j(_i, j ) [Φ^-1 (y)]' )^2 F_j (_i, j ) (1 - F_j (_i, j ) ·1 (_i, j∈ H_j, n) ≥ x) + o(1). When _i, j∈ H_j, n, it holds that F_j(_i, j ) ∈ [2 M n^-1log n, 1 - 2 M n^-1log n] and hence sup_y ∈δ(_i, j ) | y/F (_i, j) - 1 | ≤√(M log n/n F_j(_i, j ))≤ 1/√(2). Similarly, we have that sup_y ∈δ(_i, j ) | 1 - y/1 - F (_i, j) - 1 | ≤ 1/√(2). 
The above two bounds combined with (<ref>) yields that for _i, j∈ H_j, n, sup_y ∈δ_j(_i, j ) [Φ^-1 (y)]' ≤sup_y ∈δ_j(_i, j )C / y (1 - y)≤C /F_j(_i, j ) (1 - F_j(_i, j )). In view of the above bound, (<ref>), and the fact that F_j(_i, j ) follows the standard uniform distribution, we can deduce that ℙ ( E_1≥p (log n)^3/n) ≤ℙ(max_1 ≤ j ≤ p n^-1 (M log n/n) ∑_i = 1^n C / F_j(_i, j ) (1 - F_j(_i, j )) 1 (_i, j∈ H_j, n) ≥p (log n)^3/n) + o(1) ≤C M /p (log n)^2 ∑_j = 1^p ( 1 / F_j(_i, j ) (1 - F_j(_i, j )) 1 (_i, j∈ H_j, n) ) = C M / (log n)^2 ∫_2 M n^-1log n^1 - 2 M n^-1log n1/u(1 - u) du ≤C M / (log n)^2 · C log n ≤C M /log n→ 0. A combination of (<ref>), (<ref>), (<ref>), and (<ref>) shows that with probability 1 - o(1), max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ≤C p ρ_n (log n )^3 /n, which together with (<ref>)–(<ref>) entails that with probability 1 - o(1), n^-1max_1 ≤ j ≤ p_j - _ j_2^2 ≤ C ( ρ_n^2 log p/n + p ρ_n (log n )^3 /n) and (log n) n^-3/2max_1 ≤ j ≤ p_j - _ j_2 ≤ C (log n) n^-1( ρ_n log p/n + √( p ρ_n (log n )^3 /n)). Plugging (<ref>) into (<ref>), it follows that P_1 → 0. Therefore, substituting (<ref>) into (<ref>) derives the desired result (<ref>). It remains to establish (<ref>). Proof of (<ref>). Let us define I_n = [2M n^-1log n, 1 - 2M n^-1log n]. It holds that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 ≥2 M p (log n)^2 /n) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∈ I_ n) ≥ M p (log n)^2 /n) + ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∉ I_ n) ≥ M p (log n)^2 /n). For the first term on the right-hand side of (<ref>) above, under assumption (<ref>) we have that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∈ I_ n) ≥ M p (log n)^2 /n) ≤ℙ( M log n /n≥ M p (log n)^2 /n) + o(1) = 0 + o(1) → 0. Regarding the second term on the right-hand side of (<ref>) above, observe that | F_j^-1 (Φ(_i, j)) | ≤ b and |F̂_j^-1 (Φ(_i, j)) | ≤ b by the assumption (X_j) ∈ [-b, b]. In addition, Φ(_i, j) follows the standard uniform distribution and thus ℙ (Φ(_i, j) ∉ I_n) = 4 M n^-1log n. Then we can deduce that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∉ I_1, n) ≥ M p (log n)^2 /n) ≤ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n 1 ( Φ(_i, j) ∉ I_ n) ≥ M p (log n)^2 /4 n b^2 ) ≤4 n b^2 / M p (log n)^2 · p ℙ ( Φ(_i, j∉ I_n) ) = 16 b^2 /log n→ 0. Finally, combining (<ref>)–(<ref>) leads to the desired result (<ref>). This concludes the proof of Proposition <ref>. § PROOFS OF SOME KEY LEMMAS §.§ Proof of Lemma <ref> Let g_j (· | _-j) be the conditional density function of X_j | X_-j = _-j for X = (X_1, …, X_p )^⊤d∼ t_ν ( 0, I_p) and h_j(· | _ -j) the conditional density function of X̂_j | X̂_-j = _-j for X̂ = (X̂_1, …, X̂_p)^⊤d∼ N( 0, I_p). Following the definition in <cit.>, we define K̂L̂_j : = ∑_i = 1^n log( g_j (_i, j | _i, -j) · h_j(_i, j | _i, j ) / h_j (_i, j | _i, - j) g_j (_i, j | _i, -j) ), where = (_i, j ) ∈ℝ^n × p consists of i.i.d. rows sampled from t_ν ( 0, I_p) and = (_i, j)∈ℝ^n × p consists of i.i.d. rows sampled from N( 0, I_p). Note that Theorem 1 in <cit.> states that ≤min_≥ 0{q e^ + ℙ(max_j ∈ℋ_0K̂L̂_j > ) }. We claim that if n p /ν (ν + p)≥ C for some constant C> 0, there exists some positive constant α such that ℙ( K̂L̂_j≥ C/4 ) ≥α. 
Then it holds that for 0 < < C/4, ℙ(max_1 ≤ j ≤ pK̂L̂_j≥) ≥α, and thus we cannot obtain the desired asymptotic FDR control lim sup_(n, p)≤ q via applying Theorem 1 in <cit.>. By contradiction, to allow ℙ(max_1 ≤ j ≤ pK̂L̂_j≥) → 0, we must have that np/ν (ν + p)→ 0 , which is equivalent to ν^2 ≫ n min (n, p). Hence, Lemma <ref> is proved. Now it remains to establish (<ref>). Proof of (<ref>). Note that <cit.> showed that the conditional density g_j(_j | _-j) of the multivariate t-distribution satisfies that g_j(_i, j | _i, -j) ∝( 1 + _i, j ^2/ν + _i, -j_2^2 )^- (ν + p) / 2. It is easy to see that the conditional density h_j (_i, j | _i, -j) of the standard normal distribution satisfies that h_j(_i, j | _i, -j) ∝exp{ - _i, j ^2 / 2 }. Plugging the two expressions above into (<ref>) yields that K̂L̂_j = ∑_i = 1^n [ _i, j ^2/2 - ν + p/2log(1 + _i, j ^2/ν + _i, -j_2^2) - (_i, j^2/2 - ν + p/2log(1 + _i, j^2/ν + _i, -j_2^2) )]. Applying the basic inequality that |log (1 + x) - (x - x^2/2)| ≤ x^3 for each x > 0, we can obtain that K̂L̂_j = R_1, j + R_2, j + R_3, j, where R_1, j = ∑_i = 1^n [ _i, j ^2 (ν + p)/2 (ν + _i, -j_2^2 )( ν + _i, -j_2^2/ν + p - 1 ) - _i, j^2 /2 ( 1 - ν + p/ν + _i, -j_2^2) ], R_2, j = ∑_i = 1^n ν + p/4(_i, j^4/(ν + _i, -j_2^2)^2 - _i, j ^4/(ν + _i, -j_2^2)^2), R_3, j = ∑_i = 1^n ν + p/2(_i, j^6/(ν + _i, -j_2^2)^3 + _i, j ^6/(ν + _i, -j_2^2)^3). We now calculate the mean and variance of K̂L̂_j separately. Observe that _i, jd∼ N(0, 1), (p-1)^-1_i, -j_2^2 d∼ F_p-1, ν, _i, -j√(ν + p/ν + _i, -j_2^2)_i, j, and √(ν + p-1/ν + _i, -j_2^2)_i, jd∼ t_ν + p- 1 as shown in <cit.>. Using the properties of the multivariate t-distribution and F-distribution, some straightforward calculations show that (R_1, j) = n/2[ ν + p/ν + p - 3( ν (ν + p - 3)/(ν - 2)(ν + p) - 1 ) - ( 1 - (ν + 2) (ν + p)/ν (ν + p - 1)) ] = n ( p/ν (ν + p) + O(ν^-2) ), (R_2, j) = 3 n (ν + p)/4 [ 1/(ν + p - 3) (ν + p - 5) - ν + 2/ν (ν + p - 1)(ν + p + 1)] = O (n /ν (ν + p)), and (R_3, j) ≤ C n (ν + p)^-2. Combining (<ref>)–(<ref>) yields that when ν and p are large, (K̂L̂_j) = n p/ν (ν + p) + O(n ν^-2)≥n p/2 ν (ν + p). Next we analyze the variance of K̂L̂_j. Notice that (K̂L̂_j) = ( ( K̂L̂_j - K̂L̂_j )^2 ) ≤ C ∑_i = 1^n {[ _i, j ^2 (ν + p)/2 (ν + _i, -j_2^2 )( ν + _i, -j_2^2/ν + p - 1 ) - _i, j^2 /2 ( 1 - ν + p/ν + _i, -j_2^2) ]^2 } + C ∑_i = 1^n [ (ν + p)^2/16(_i, j^4/(ν + _i, -j_2^2)^2 - _i, j ^4/(ν + _i, -j_2^2)^2)^2 ] ≤Cn p/ν (ν + p), where in the last step above, we have used the facts that ( _i, j ^4 (ν + p)/ (ν + _i, -j_2^2 )^2) ≤ C, [ ( ν + _i, -j_2^2/ν + p - 1 )^2 ] = 2 p /ν (ν + p) + O(ν^-2), [ ( 1 - ν + p /ν + _i, -j_2^2)^2 ] = 2 p /ν (ν + p) + O(ν^-2). In view of the results on the mean and variance of K̂L̂_j shown in (<ref>) and (<ref>) above, we see that if np/ν (ν + p)≥ C for some constant C > 0, (K̂L̂_j ) ≥np/2ν (ν + p)≥ C /2 . Therefore, we can obtain through the one-sided Markov inequality that for a small constant α > 0 (noting that (K̂L̂_j) > 2 α√( (K̂L̂_j)) if α is small), ℙ (K̂L̂_j ≥ C/4) ≥ℙ (K̂L̂_j ≥ (K̂L̂_j )/2 ) ≥ℙ( K̂L̂_j≥ (K̂L̂_j) - α√((K̂L̂_j))) ≥ 1 - (K̂L̂_j )/(K̂L̂_j ) + α^2 (K̂L̂_j ) = α^2/1 + α^2, which establishes (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Recall that G(t) = p_0^-1∑_j ∈ℋ_0ℙ (W_j ≥ t) and G(t) is a decreasing, continuous function. 
The main idea of the proof is to divide the continuous interval (0, G^-1 (c_1 q a_n/p)] into a diverging number of smaller intervals with end points { t_i }_i = 0^l_n such that t_0≥ t_1≥⋯≥ t_l_n and |G(t_i)/ G(t_i+1) - 1 | → 0 uniformly for 0 ≤ i ≤ l_n as l_n→∞. Then the supreme over the continuous interval (0, G^-1 (c_1 q a_n/p)] can be reduced to the supreme over the set of discrete points {t_i}_i= 0^l_n and hence, we can apply the union bound to establish the desired result. Similar arguments have also been used in <cit.>, <cit.>, and <cit.>. We detail only the proof of (<ref>) here since (<ref>) can be shown in a similar fashion. We start with defining a sequence 0 ≤ z_0 < z_1 < ⋯ < z_l_n = 1 and t_i = G^-1 (z_i), where z_0 = c_1 q a_n/p, z_i = c_1 q a_n/p + h_n e^i ^γ/p, and l_n = [log ((p - c_1 q a_n)/h_n)]^1/γ with 0 < γ < 1 and sequence h_n →∞ satisfying that h_n /a_n → 0. As long as m_n /a_n = o(1), we can choose h_n = a_n/(a_n / m_n)^η for some η∈ (0, 1). Then an application of similar technical analysis as in <cit.> shows that as a_n →∞, sup_0 ≤ i ≤ l_n|G(t_i)/G(t_i+1) - 1 | → 0. For t ∈ (0, G( c_1 q a_n/p)], there exists some 0 ≤ i ≤ l_n - 1 such that t ∈ [t_i+1, t_i]. It follows from the monotonicity of ℙ (W_j ≥ t) and 1 ( Ŵ_j ≥ t) that | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t) / p_0 G(t) - 1 | ≤max{| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i) - 1 |, | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i)/p_0 G(t_i+1) - 1 | } . The two terms within the brackets on the right-hand side of the expression above can be bounded similarly and we will provide only the details on how to bound the first term for simplicity. With the aid of the fact that | x y - 1 | ≤ | x -1| |y - 1| + |x - 1| + |y -1| for all x, y ∈ℝ, we can deduce that | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i) - 1 | ≤| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 | ·sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 | + | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 | + sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 | ≤| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 |·(1+o(1)) + sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 |, where the last step above is because of (<ref>) and the o(1) term is uniformly over all i. Combining the above two results and applying (<ref>) again lead to | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t) / p_0 G(t) - 1 | ≤max{| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 |, | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i)/p_0 G(t_i) - 1 | } ×(1 + o(1) ) + o(1). Thus, to prove the desired result, it is sufficient to show that D_n := sup_0 ≤ i ≤ l_n| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i) / p_0 G(t_i) - 1 |=o_p(1). We now proceed with establishing (<ref>). Let us define an event ℬ_3 = {max_1 ≤ j ≤ p | Ŵ_j - W_j | ≤ b_n }. From Condition <ref>, it holds that ℙ (ℬ_3^c) → 0. Note that for any two events A and B, we have that ℙ(A) ≤ℙ(A∩ B) + P(B^c). Repeatedly using such inequality, the union bound, and the property that ℙ (ℬ_3^c) → 0, we can deduce that for each ϵ > 0, ℙ ( D_n ≥ϵ ) ≤∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 {1 (Ŵ_j ≥ t_i) - ℙ ( W_i ≥ t_i ) }/ p_0 G(t_i) | ≥ϵ, ℬ_3 ) + ℙ (ℬ_3^c) ≤∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 {1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) }/ p_0 G(t_i) | ≥ϵ /2 ) + ∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 [ 1 (Ŵ_j ≥ t_i) - 1 ( W_i ≥ t_i ) ] / p_0 G(t_i) | ≥ϵ /2, ℬ_3) + o(1) ≤∑_i = 0^l_n 4 [ {∑_j ∈ℋ_0 [ 1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) ] }^2 ] /ϵ^2 p_0^2 G^2 (t_i) + ∑_i = 0^l_n 2 ∑_j ∈ℋ_0ℙ( t_i - b_n ≤W_j ≤ t_i + b_n ) /ϵ p_0 G(t_i) + o(1), where the last step above is due to the Markov inequality and the fact that |1 (Ŵ_j ≥ t_i) - 1 ( W_i ≥ t_i )|≤1 (t_i-b_n≤W_j ≤ t_i+b_n) on event ℬ_3. We next bound the first two terms on the very right-hand side of (<ref>) above. 
For the first term, under Condition <ref> for the weak dependence between {W_j}, we have that ∑_i = 0^l_n 4 [ {∑_j ∈ℋ_0 [ 1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) ] }^2 ] /ϵ^2 p_0^2 G^2 (t_i) ≤ C ∑_i = 0^l_n m_n p_0 G(t_i) + o( (log p)^-1/γ [p_0 G(t_i)]^2 ) /ϵ^2 p_0^2 G^2 (t_i) = C ϵ^-2 m_n ∑_i = 0^l_n1/p_0 G(t_i) + C ϵ^-2 o (l_n (log p)^- 1/γ). Moreover, it holds that ∑_i = 0^l_n 1 / p_0 G (t_i) = p_0^-1∑_i = 0^l_n 1 / z_i = p/p_0∑_i = 0^l_n1/ c_1 q a_n + h_n e^i ^γ ≤ C h_n^-1, where the last inequality above is related to the proof of Theorem 3 in <cit.>. In light of the definition of h_n and the assumption of m_n / a_n → 0, we have that m_n / h_n = (m_n / a_n)^1 - η→ 0. Therefore, combining (<ref>)–(<ref>) and the fact that l_n = [log ((p - c_1 q a_n)/h_n)]^1/γ≤ (log p)^1/γ shows that the first term for the bound in (<ref>) tends to zero as n →∞. Moreover, since l_n ≤ (log p)^1/γ, the second term on the very right-hand side of (<ref>) above is bounded by 2/ϵ (log p)^1/γsup_t ∈ (0, G^-1 (c_1 q a_n/p) ] G(t - b_n ) - G(t + b_n) / G(t) , which converges to zero as n →∞ under Condition <ref>. Finally, we can obtain that for each ϵ > 0, ℙ ( D_n > ϵ ) → 0, which establishes the desired result in (<ref>). This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> We will show that with asymptotic probability one, it holds that for some 0 < c_1 < 1, 1 + ∑_j = 1^p 1( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤ q a_n ≤ q ∑_j = 1^p 1( Ŵ_j ≥ G^-1 ( c_1 q a_n/ p ) ). Then from the definition of T, we can obtain the desired result of the lemma. We aim to establish (<ref>). The main idea of the proof is to prove that the population counterpart of (<ref>) holds. Then with an application of Lemma <ref> to both left- and right-hand sides of (<ref>), we can connect it to the population counterpart and thus prove that (<ref>) holds with asymptotic probability one. First, it follows from the union bound and the fact that ℙ(A) ≤ℙ(A∩ B) + ℙ(B^c) for any two events A and B that under Conditions <ref>–<ref>, ℙ ( Ŵ_j < 3 δ_n   j ∈𝒜_n) ≤ℙ ( Ŵ_j < 3 δ_n   j ∈𝒜_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤ℙ ( W_j < 3 δ_n + b_n    j ∈𝒜_n) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤∑_j ∈𝒜_n ℙ ( W_j - w_j < 3 δ_n + b_n - w_j ) + o(1) ≤∑_j ∈𝒜_n ℙ ( | W_j - w_j | > δ_n )+ o(1) ≤∑_j = 1 ^p ℙ ( | W_j - w_j | > δ_n )+ o(1) → 0 . Then we have ℙ (∩_j ∈𝒜_n{Ŵ_j ≥ 3 δ_n}) → 1 and thus with asymptotic probability one, ∑_j = 1^p 1 ( Ŵ_j ≥ 3 δ_ n ) ≥ a_n , where a_n = |𝒜_n|. In addition, since w_j > - δ_n for 1 ≤ j ≤ p by assumption, we can deduce that ∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n) ≤∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n ) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤∑_j = 1^p ℙ ( W_j < - 3 δ_n + b_n ) + o(1) ≤∑_j = 1^p ℙ (W_j - w_j ≤ - 3 δ_n + b_n - w_j ) + o(1) ≤∑_j = 1^p ℙ ( | W_j - w_j | > δ_n ) + o(1) → 0 , which yields ∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n)→ 0. Using similar arguments as for (<ref>), it holds that ∑_j = 1^p ℙ (W_j ≤ - 3 δ_n ) → 0. Then we can obtain that G( 3 δ_n ) = p_0^-1∑_j ∈ℋ_0ℙ ( W_j ≤ -3 δ_n ) ≤ p_0^-1∑_j = 1^p ℙ ( W_j ≤ -3 δ_n ) = o(p_0^-1) . Since a_n →∞, p_0 / p → 1, and G(t) is a nonincreasing, continuous function, it follows that G(3 δ_n) ≤ c_1 q a_n / p and thus G^-1 ( c_1 q a_n/ p ) ≤ 3 δ_n for some constant 0 < c_1 < 1 when n is sufficiently large. This together with (<ref>) entails that with asymptotic probability one, ∑_j = 1^p 1 ( Ŵ_j ≥ G^-1 ( c_1 q a_n/ p ) ) ≥ a_n. This completes the proof of the second inequality in (<ref>). 
It remains to establish the first inequality in (<ref>). From the definition of G(t) and Lemma <ref>, it holds that c_1 q a_n / p = p_0^-1∑_j ∈ℋ_0ℙ ( W_j ≤ - G^-1 ( c_1 q a_n/ p ) ) = (1 + o_p(1)) · p_0^-1∑_j ∈ℋ_01( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ). Then for some constant c_2 satisfying 0 < c_1 < c_2 < 1, we can obtain that with asymptotic probability one, 1 + ∑_j ∈ℋ_01( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤c_1 q a_n p_0/p (1 + o_p(1)) ≤ c_2 q a_n , where we have used the assumption of p_0/p → 1. Further, under (<ref>) in Condition <ref>, an application of the union bound yields that ℙ( ∑_j ∈ℋ_11( Ŵ_j < - G^-1 (c_1 q a_n/p) ) ≥ (1 - c_2) q a_n ) ≤ℙ( ∑_j ∈ℋ_11( W_j < - G^-1 (c_1 q a_n/p ) + b_n ) ≥ (1 - c_2) q a_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n ) + o(1) ≤1/ (1 - c_2 ) q a_n∑_j ∈ℋ_1ℙ( W_j < - G^-1 (c_1 q a_n/p) + b_n ) + o(1) → 0, which together with (<ref>) implies that 1 + ∑_j = 1^p 1( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤ q a_n with asymptotic probability one. This proves the first inequality in (<ref>), which completes the proof of Lemma <ref>. New proof sketch for lower bound (need to revise to formalize). Now we have proved that with asymptotic probability 1, T∈ (0, G^-1 ( c_1 q a_n/ p )). Denote by this event ℬ_1. We will establish the lower bound now. By definition, we have T∈𝒮 with asymptotic probability one, where 𝒮 := {t∈ (0, G^-1 ( c_1 q a_n/ p )) : ∑_j = 1^p 1( Ŵ_j ≤ -t )/1⋁∑_j = 1^p 1( Ŵ_j ≥ t )≤ q}. Note that for any t∈𝒮, we have ∑_j ∈ℋ_0 1 ( Ŵ_j ≤ -t ) + ∑_j∈ℋ_1 1 ( Ŵ_j ≤ -t )≤ q+ q ∑_j ∈ℋ_0 1 ( Ŵ_j ≥ t ) + q∑_j∈ℋ_1 1 ( Ŵ_j ≥ t ). Denote by the event where the inequalities in Lemma 2 hold as ℬ_2ϵ. Then on ℬ_2ϵ, (1-ϵ)∑_j ∈ℋ_0ℙ( W_j ≤ -t ) ≤ q+qs + (1+ϵ)q∑_j ∈ℋ_0ℙ( W_j ≥ t ). That is, ∑_j ∈ℋ_0ℙ( W_j ≤ -t ) ≤q(1+s)/1-q-ϵ-qϵ, which yields t≥ G^-1(q(1+s)/(1-q-ϵ-qϵ)p). That is, on event ℬ_2ϵ∩ℬ, it holds that G^-1(q(1+s)/(1-q-ϵ-qϵ)p)≤ T≤ G^-1 ( c_1 q a_n/ p ). §.§ Proof of Lemma <ref> The proof of this lemma relies on the definitions of T_v and T_v, with the intuition that T_v resembles the vth order statistic of - W_j, while T_v resembles the vth order statistic of Ŵ_j. Intuitively, this means that if the distance between W_j and Ŵ_j is bounded by b_n, the distance between the corresponding order statistics should also be bounded by b_n. We will formalize such argument next. Let us define an event 𝒞 := {max_1 ≤ j ≤ p | Ŵ_j - W_j| ≤ b_n} . Condition <ref> assumes that ℙ (𝒞) → 1. Denote by Ŝ_v = { 1 ≤ j ≤ p : - Ŵ_j ≥T_v } and S_v = { 1 ≤ j ≤ p: - W_j ≥T_v } . Observe that | Ŝ_v | = v and | S_v | = v by the definitions of T_v and T_v. If j_0 ∈Ŝ_v, on event 𝒞 we have that - W_j_0 = - Ŵ_j_0 + ( Ŵ_j_0 - W_j_0 ) ≥T_v - b_n, which entails that ∑_j = 1^p 1 ( - W_j ≥T_n - b_n ) ≥ v. Moreover, since T_v satisfies ∑_j = 1^p 1( - W_j ≥T_v ) = v, it follows that T_v ≥T_v - b_n by the monotonicity of the indicator function. Similarly, we can also show that T_v ≥T_v - b_n on event 𝒞. Thus, (<ref>) is derived. This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Note that k is the number of failures before v successes in a binomial process with success probability 1/2. The major intuition of the desired result (<ref>) is that by the law of large numbers, the number of failures and successes should become asymptotically comparable as the number of trials tends to infinity. Let D_k + v - 1 be a binomial random variable with distribution B ( k + v - 1, 1/2 ) and L_v the negative binomial random variable with distribution NB(v, 1/2 ). Observe that (<ref>) is equivalent to ℙ ( L_v ≥ k ) ≤ q. 
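The equivalence just stated can be verified numerically, together with the binomial reformulation used in the next step; recall that L_v counts the failures before the v-th success in fair Bernoulli trials, so {L_v ≥ k} is exactly the event that the first k + v - 1 trials contain at most v - 1 successes. The values of k and q below are arbitrary.

```python
import numpy as np
from scipy.stats import binom, nbinom, norm

q, k = 0.3, 400                                  # illustrative values

def tail(v):                                     # P(L_v >= k) for L_v ~ NB(v, 1/2)
    return nbinom.sf(k - 1, v, 0.5)

# Negative binomial / binomial duality used in the proof.
for v in range(1, 500):
    assert abs(tail(v) - binom.cdf(v - 1, k + v - 1, 0.5)) < 1e-10

# The proof takes v to be the largest integer with P(L_v >= k) <= q ...
v_star = max(v for v in range(1, 1000) if tail(v) <= q)
# ... and concludes that (v - k)/sqrt(v + k) is approximately Phi^{-1}(q):
print(v_star, (v_star - k) / np.sqrt(v_star + k), norm.ppf(q))
```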
According to the relationship between the negative binomial distribution and binomial distribution, we have that ℙ ( L_v ≥ k ) = 1 - ℙ ( L_v ≤ k - 1 ) = 1 - ℙ ( D_k + v - 1 ≥ v ) = ℙ ( D_k + v - 1 ≤ v - 1 ). By the central limit theorem, it holds that when k + v →∞, ℙ ( D_k + v - 1 ≤ v - 1 ) = Φ( v - 1 - k /√( k + v - 1 )) + o(1). Therefore, (<ref>) implies that v - 1 - k /√( k + v - 1 )≤Φ ^-1 (q - o(1) ). In addition, since v is the largest integer such that (<ref>) holds, we have that ℙ ( L_v +1≥ k) > q . Using similar arguments as for (<ref>), it follows that as k + v →∞, ℙ ( L_v + 1 ≥ k ) = ℙ ( D_k + v≤ v ) = Φ( v - k /√( k + v )) + o(1) and hence v - k /√(k + v )≥Φ^-1 ( q - o(1) ), which along with (<ref>) leads to (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The proof of this lemma consists of two steps. We will first establish the tight bounds below for T_v. In the second step, noting that T_v + M_v + 1 < T_v - 2 b_n ≤T_v + M_v by the definition of M_v in (<ref>), we will show that M_v is bounded as long as b_n is sufficiently small. For 0< < 1/8, under Conditions <ref>, <ref>, and <ref> we have that ℙ( G^-1( v(1+)/p_0) < T_v < G^-1(v(1 - )/p_0) ) → 1. Under Condition <ref>, we have that 2 b_n < G^-1( v(1 + )/p_0) - G^-1( v (1 + 3 ) (1 - ) /p_0). Using similar arguments as in the proof of Lemma <ref> below, we can show that under Conditions <ref>, <ref>, and <ref>, ℙ( G^-1( v (1 + 3 )(1 + )/p_0) < T_v (1 + 3 ) < G^-1((v (1 + 3 ))(1 - )/p_0) ) → 1. Then it follows that T_v (1 + 3 ) < G^-1 ( v (1 + 3 ) (1 - ) /p_0) < G^-1 ( v (1 + )/p_0 ) < T_v . Additionally, applying Lemmas <ref> and <ref> together with the definition of T_v gives that with asymptotic probability one, T_v + M_v ≥T_v - 2 b_n ≥ G^-1 ( v(1 + )/p_0 ) - [ G^-1 ( v(1 + )/p_0 ) - G^-1 ( v (1 + 3 ) (1 - )/p_0 ) ] = G^-1 ( v (1 + 3 ) (1 - )/p_0 ) > T_v (1 + 3 ). Therefore, we can obtain that ℙ (M_v < 3 v ) → 1 since T_v is decreasing with respect to v. This will conclude the proof of Lemma <ref>. We will present the formal proofs of Lemmas <ref> and <ref> below. Proof of Lemma <ref>. The main idea of the proof is to establish the convergence of the empirical distribution of {W_j} that ∑_j ∈ℋ_01(W_j ≥ t) is close to ∑_j ∈ℋ_0ℙ(W_j ≥ t). Using similar arguments as in the proof of Lemma <ref> in Section <ref>, we can obtain that when m_n/k → 0 (which combined with Lemma <ref> implies that m_n / v → 0), sup_t ∈ (G^-1(3k/2p), G^-1(k/2p) ) | ∑_j ∈ℋ_01(W_j ≤ - t) /∑_j ∈ℋ_0ℙ(W_j ≤ - t) - 1 | = o_p(1). Since ∑_j ∈ℋ_0ℙ(W_j ≤ - G^-1(v (1 + ) /p_0) = v (1 + ), we see from (<ref>) that ∑_j=1^p 1(-W_j ≥ G^-1(v (1 + ) /p_0)) ≥∑_j ∈ℋ_01(W_j ≤ - G^-1(v (1 + ) /p_0)) = v (1 + ) (1 + o_p(1)) > v holds with asymptotic probability one. Hence, from the definition of T_v, we have that ℙ(T_v > G^-1 ( v (1 + )/p_0 ) ) → 1. We next prove the upper bound for T_v. Note that ∑_j =1^p 1 (W_j ≤ - T_v) = v. We will aim to show that with asymptotic probability one, ∑_j ∈ℋ_11 ( W_j ≤ - T_v ) < v / 2. Then with asymptotic probability one, it holds that ∑_j ∈ℋ_01 (W_j ≤ - T_v) ≥ v (1 - /2). On the other hand, applying (<ref>) and similar argument as for (<ref>), we can obtain that with asymptotic probability one, ∑_j ∈ℋ_01(W_j ≤ - G^-1(v (1 - ϵ_n) /p_0) < v (1 - /2) . Combining the above two results shows that with asymptotic probability one, T_v≤ G^-1(v (1 - ) /p_0), which completes the proof for the upper bound. It remains to establish (<ref>). Since p_0/ p → 1 and v/k → 1 (cf. 
Lemma <ref>), we have that G^-1 (3 k /2 p) < G^-1 ( v (1 + )/p_0 ) when n and p are sufficiently large and 0 < < 1/8. Then from (<ref>), it holds that G^-1 (3 k /2 p) ≤T_v and hence with asymptotic probability one, ∑_j ∈ℋ_11 (W_j ≤ - T_v) ≤∑_j ∈ℋ_11 (W_j < - G^-1 (3 k /2 p)) . Moreover, an application of the Markov inequality, Lemma <ref>, and (<ref>) in Condition <ref> yields that as n →∞, ℙ(∑_j ∈ℋ_11 (W_j < - G^-1 (3 k /2 p)) > v /2 ) ≤2/ v ∑_j ∈ℋ_1ℙ(W_j < - G^-1 (3 k /2 p) ) → 0. Therefore, (<ref>) is derived in view of (<ref>). This completes the proof of Lemma <ref>. Proof of Lemma <ref>. Let us observe that v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0 = v/p_0 ( - 3 ^2). By the assumptions that p_0/p→ 1 and m_n/k→ 0, and applying Lemma <ref> and the observation above, it follows that when k and p are sufficiently large, v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0≥ k / 2 p . Note that assumption (<ref>) in Condition <ref> entails that sup_t ∈ ( G^-1 (3k/2p), G^-1(k/2p)) [ G(t - b_n ) - G(t + b_n) ] = o( k /p ). Combining the above two results and Lemma <ref>, we can obtain that v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0≫sup_t ∈ ( G^-1 (3k/2p), G^-1(k/2p)) [ G(t - b_n ) - G(t + b_n) ]. Notice that G^-1(v (1 + 3 ) (1 - )/p_0 ) ∈ ( G^-1 (3k/2p), G^-1(k/2p)) and G^-1 (v(1 + )/p_0 ) ∈ ( G^-1 (3k/2p), G^-1(k/2p)) when k and p are sufficiently large. Therefore, using proof by contradiction and the monotonicity of function G(·), we can establish the desired result of Lemma <ref>. This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Recall that the perfect and approximate knockoff statistics based on the marginal correlation are defined as W_j = (√(n)_2)^-1 ( | _j^⊤ | - |_j^⊤| ) and Ŵ_j = (√(n)_2)^-1 ( | _j^⊤ | - |_j^⊤| ), respectively. By the triangle inequality, it is easy to see that max_1 ≤ j ≤ p | Ŵ_ j - W_ j | ≤max_1 ≤ j ≤ p (√(n)_2)^-1 | (_j - _j)^⊤ |. Then an application of the Cauchy–Schwarz inequality gives that max_1 ≤ j ≤ p | Ŵ_ j - W_ j | ≤ (√(n))^-1max_1 ≤ j ≤ p _j - _j _2 . Thus, the conclusion of Lemma <ref> can be derived under Condition <ref>. This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> From the definitions of W_j and w_j and the triangle inequality, it holds that ℙ (| W_j - w_j | ≥δ_n ) ≤ℙ( ( n^-1^2_2 )^-1/2| n^-1 ( | _j^⊤ | - | _j^⊤ | ) - ( | (X_j Y)| - |(X_j Y)| ) | ≥δ_n / 2 ) + ℙ( | ( n^-1^2_2 )^-1/2 - ( Y^2)^-1/2| ·| | (X_j Y)| - |(X_j Y)| | ≥δ_n / 2 ) := P_1 + P_2. We will aim to show that for δ_n → 0, P_1 ≤ 4 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 } + exp{ - n ( Y^2)^2 /8 Y^4 } and P_2 ≤ 2 exp{ - n δ_n^2 ( Y^2 )^2 / 64 | w_j |^2 Y _ψ_2^4 } + exp{ - n ( Y^2 )^2 /8 Y^4}. Then setting δ_n = √(log p/n)max_1 ≤ j ≤ p{ 16 √(2) X_j _ψ_2 Y _ψ_2/ ( Y^2)^1/2 8√(2) |w_j| Y _ψ_2^2 / Y^2 }, a combination of the above results leads to the desired conclusion of this lemma. We proceed with proving (<ref>). Since _2^2 = ∑_i = 1^n y_i^2 is the sum of i.i.d. random variables, an application of Bernstein’s inequality yields that ℙ ( n^-1_2^2 ≤[Y^2] /2 ) ≤exp{ - n ( Y^2)^2 /8 Y^4 }. It follows from the triangle inequality and (<ref>) that P_1 ≤ℙ( | n^-1 ( | _j^⊤ | - | _j^⊤ | ) - ( | (X_j Y)| - |(X_j Y)| ) | ≥δ_n ( Y^2)^1/2/2 √(2)) + ℙ ( n^1/2(_2)^-1≥√(2) ([Y^2])^-1/2 ) ≤ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) + ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) + exp{ - n ( Y^2)^2 /8 Y^4 }. We next bound the first two terms on the right-hand side of the expression above. 
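Before bounding these two terms, it may be worth illustrating the two marginal-correlation lemmas of this subsection end to end in a short simulation. Everything below is illustrative: the design has i.i.d. N(0,1) columns (for which an independent copy is itself a valid knockoff matrix), only the original columns are perturbed so that the Lipschitz bound of the preceding lemma applies directly, and no constants are tuned.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, s = 2000, 500, 10
X = rng.standard_normal((n, p))
Xk = rng.standard_normal((n, p))                 # independent copy: valid knockoffs for this design
beta = np.zeros(p); beta[:s] = 0.5
y = X @ beta + rng.standard_normal(n)

def marg_stats(A, Ak, y):
    scale = np.sqrt(n) * np.linalg.norm(y)
    return (np.abs(A.T @ y) - np.abs(Ak.T @ y)) / scale

W = marg_stats(X, Xk, y)

# Stability: perturbing only the original columns, max_j |What_j - W_j| <= max_j ||xhat_j - x_j||_2 / sqrt(n).
E = 0.01 * rng.standard_normal((n, p))
What = marg_stats(X + E, Xk, y)
print("max |What - W| =", np.max(np.abs(What - W)),
      "  bound =", np.max(np.linalg.norm(E, axis=0)) / np.sqrt(n))

# Concentration around the population counterpart w_j
# (here E[Xk_j Y] = 0, E[X_j Y] = beta_j, E[Y^2] = ||beta||^2 + 1), at the order sqrt(log p / n).
w = np.abs(beta) / np.sqrt(beta @ beta + 1.0)
print("max |W - w| =", np.max(np.abs(W - w)), "  sqrt(log p / n) =", np.sqrt(np.log(p) / n))
```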
Under Condition <ref>, we see that _i, j y_i and _i, j y_i are both sub-exponential random variables, with sub-exponential norms X_j _ψ_2 Y _ψ_2 and X_j _ψ_2 Y _ψ_2, respectively. Then we can obtain through applying Bernstein's inequality for sub-exponential random variables (see, e.g., Corollary 2.8.3 in <cit.>) that when δ_n = o(1), ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) ≤ 2 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 } and ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) ≤ 2 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 }. Thus, combining the above three inequalities establishes (<ref>). As for term P_2, noting that w_j = ( Y^2)^-1/2 ( | (X_j Y)| - |(X_j Y)| ) and | ( n^-1^2_2 )^-1/2 - ( Y^2)^-1/2| = | n^-1_2^2 - Y^2 | / n^-1/2_2 ( Y^2)^1/2 (( Y^2)^1/2 + n^-1/2_2 ) , we can deduce that P_2 = ℙ( |w_j| | n^-1_2^2 - Y^2 | / n^-1/2_2 (( Y^2)^1/2 + n^-1/2_2 ) ≥δ_n / 2 ) ≤ℙ( |w_j| | n^-1_2^2 - Y^2 |/ n^-1/2_2 ( Y^2)^1/2≥δ_n / 2 ) = ℙ( | n^-1_2^2 - Y^2 | ≥δ_n Y^2 /2 √(2) |w_j| ) + ℙ ( n^-1_2^2 ≤ Y^2 /2 ) . The very last term above can be bounded by applying (<ref>). Again we can see that under Condition <ref>, y_i^2 is a sub-exponential random variable with sub-exponential norm Y _ψ_2^2. With the aid of Bernstein's inequality for sub-exponential random variables (Corollary 2.8.3 in <cit.>), we can obtain that for δ_n = o(1), ℙ( 1/n| ∑_i = 1^n [ y_i^2 - ( Y^2 )] | ≥δ_n Y^2 /2 √(2) |w_j| ) ≤ 2 exp{ - n δ_n^2 ( Y^2 )^2 / 64 | w_j |^2 Y _ψ_2^4 }. Therefore, the bound for term P_2 in (<ref>) can be shown. This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The main idea of the proof is to apply the law of total variance and decompose the total into two terms by conditioning on (_ℋ_1, ), where _ℋ_1= (_j)_j ∈ℋ_1 and = (ε_1, …, ε_n)^⊤. Specifically, it holds that ( ∑_j ∈ℋ_01 (W_j ≥ t) ) = {[ ( ∑_j ∈ℋ_01 (W_j ≥ t) - ∑_j ∈ℋ_0ℙ( W_j ≥ t | _ℋ_1, ) )^2 | _ℋ_1, ] } + {(∑_j ∈ℋ_0ℙ( W_j ≥ t | _ℋ_1, ) - ∑_j ∈ℋ_0ℙ( W_j ≥ t) )^2 } := V_1 + V_2. We will bound terms V_1 and V_2 above separately. Let us begin with the first term V_1. We can expand the square and obtain that V_1 = ∑_j ∈ℋ_0∑_ℓ∈ℋ_0{[ ( 1 (W_j ≥ t) - ℙ( W_j ≥ t | _ℋ_1, ) ) ×( 1 (W_ℓ≥ t) - ℙ( W_ℓ≥ t | _ℋ_1, ) ) | _ℋ_1, ] }. Observe that conditional on (_ℋ_1, ), it follows from model (<ref>) that is deterministic. In addition, W_j depends only on _j and _j besides . Thus, we need only to consider the conditional distribution of (_j, _j, _k, _k) | (_ℋ_1, ). We will aim to show that each W_j depends on at most m_n random variables in {W_k: k∈ℋ_0}. Indeed, it suffices to show that conditional on (_ℋ_1, ), the number of (_k, _k)'s that are dependent on (_j, _j) is at most m_n. Since the rows of (, ) are i.i.d. and are independent of , we need only to consider the distribution of a single row; that is, (X_j, X_j, X_k, X_k) | (X_ℋ_1, ε) d= (X_j, X_j, X_k, X_k) | X_ℋ_1. In view of the multinormal distribution in (<ref>), it follows that the conditional distribution (X_j, X_j, X_k, X_k) | X_ℋ_1 is still normal. We can obtain from the conditional distribution that {( [ X_j; X_j; ], [ X_k; X_k; ]) | X_ℋ_1} = [ _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k; _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k; ]. In particular, (X_j, X_j) and (X_k, X_k) are independent conditional on X_ℋ_1 if and only if _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k=0. 
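The conditional-covariance formula behind this characterization is the standard Gaussian Schur complement, and it can be confirmed numerically before it is used to count dependent pairs; the dimension, covariance matrix, and index sets below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)              # generic positive-definite covariance
j, k, H1 = 0, 1, [4, 5, 6, 7]                # H1 plays the role of the signal index set

# Population value: Cov(X_j, X_k | X_H1) = Sigma_jk - Sigma_{j,H1} Sigma_{H1,H1}^{-1} Sigma_{H1,k}.
schur = Sigma[j, k] - Sigma[j, H1] @ np.linalg.solve(Sigma[np.ix_(H1, H1)], Sigma[H1, k])

# Monte Carlo check via residuals of the linear projection onto X_H1.
X = rng.multivariate_normal(np.zeros(d), Sigma, size=200_000)
B = np.linalg.solve(Sigma[np.ix_(H1, H1)], Sigma[np.ix_(H1, [j, k])])
resid = X[:, [j, k]] - X[:, H1] @ B
print("Schur complement:", schur, "  empirical conditional covariance:", np.cov(resid.T)[0, 1])
# The two agree, and they vanish exactly when X_j and X_k are conditionally independent given X_H1.
```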
Thus, to count the number of dependent pairs of (X_j, X_j) and (X_k, X_k) for j, k ∈ℋ_0, we need only to count the number of nonzero (_j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k)'s. Without loss of generality, let us assume that X = (X_ℋ_1, X_ℋ_0) and = [ _ℋ_1, ℋ_1 _ℋ_1, ℋ_0; _ℋ_0, ℋ_1 _ℋ_0, ℋ_0; ]. Using the formula for block matrix inverse, it holds that ^-1 = [ (^-1)_11 (^-1)_12; (^-1)_21 _ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0; ], where (^-1)_11 = _ℋ_1, ℋ_1^-1 + _ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 (_ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0)^-1_ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1 , (^-1)_12 =- _ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 (_ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0)^-1 , and (^-1)_21 = (^-1)_12^⊤. In addition, Condition <ref> assumes that max_1 ≤ j ≤ p (^-1)_j_0 ≤ m_n, which indicates that max_j ∈ℋ_0 ( _ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 )_j _0≤ m_n since it is a submatrix of ^-1. Hence, we can obtain that for a given j ∈ℋ_0, ∑_k ∈ℋ_01( _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k = 0 ) ≤ m_n. Consequently, we see that conditional on (_ℋ_1, ), the number of k ∈ℋ_0 such that (_k, _k) is dependent on (_j, _j) is at most m_n. For j ∈ℋ_0, let us define N(j) := { k∈ℋ_0: W_k W_j | (_ℋ_1 , ) }. Then it holds that | N(j) | ≤ m_n. From (<ref>) and the fact that the indicator function takes values between 0 and 1, we can deduce that V_1 = ∑_j ∈ℋ_0∑_ℓ∈ N(j){[ 1 (W_j ≥ t) ·1 (W_ℓ≥ t) | _ℋ_1, ] } - ∑_j ∈ℋ_0∑_ℓ∈ N(j){[ ℙ( W_j ≥ t | _ℋ_1, ) ℙ( W_ℓ≥ t | _ℋ_1, ) ] } ≤∑_j ∈ℋ_0∑_ℓ∈ N(j){[ 1 (W_j ≥ t) ·1 (W_ℓ≥ t) | _ℋ_1, ] } ≤ m_n ∑_j ∈ℋ_0{[ 1 (W_j ≥ t) | _ℋ_1, ] } = m_n ∑_j ∈ℋ_0ℙ (W_j ≥ t ) = m_n p_0 G(t). We next proceed with showing the bound for term V_2. We can expand V_2 as V_2 = ∑_j ∈ℋ_0∑_ℓ∈ℋ_0{( ℙ ( W_j ≥ t | _ℋ_1, ) - ℙ ( W_j ≥ t ) ) ×( ℙ ( W_ℓ≥ t | _ℋ_1, ) - ℙ ( W_ℓ≥ t ) ) }. The key idea of the proof is to examine the conditional distribution ℙ (W_j ≥ t | _ℋ_1, ) and show that given j ∈ℋ_0, the number of dependent ℙ ( W_ℓ≥ t | _ℋ_1, ) is at most m_n. Since (X, X ) is multinormal, it holds that (X_j, X_j) | ( X_ℋ_1, )  d∼ N ( [ _j, ℋ_1_ℋ_1, ℋ_1 ^-1 X_ℋ_1; _j, ℋ_1_ℋ_1, ℋ_1 ^-1 X_ℋ_1; ], _cond), where _cond = [ _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j _j, j - r - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j; _j, j - r - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j; ]. Since the rows of the augmented data matrix (, ) are i.i.d. and is deterministic given ( _ℋ_1, ), we can obtain that ( _j^⊤/√(n)_2, _j^⊤/√(n)_2) | ( _ℋ_1, ) d∼ N ((√(n)_2)^-1[ _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1^⊤; _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1 ^⊤; ], n^-1_cond). Note that when _ℋ_1, j = 0, the conditional distribution above does not depend on (_ℋ_1, ) and hence any term involving such j ∈ℋ_0 in the expansion of V_2 will disappear. Denote by N_dep = {j ∈ℋ_0: _ℋ_1, j≠ 0}. It follows from Condition <ref> that | N_dep | ≤ m_n. Then we have that V_2 = ∑_j ∈ℋ_0∑_ℓ∈ N_dep{( ℙ ( W_j ≥ t | _ℋ_1, ) - ℙ ( W_j ≥ t ) ) ×( ℙ ( W_ℓ≥ t | _ℋ_1, ) - ℙ ( W_ℓ≥ t ) ) } ≤∑_j ∈ℋ_0∑_ℓ∈ N_dep{ℙ ( W_j ≥ t | _ℋ_1, ) ℙ ( W_ℓ≥ t | _ℋ_1, ) } ≤∑_j ∈ℋ_0∑_ℓ∈ N_dep{ℙ ( W_j ≥ t | _ℋ_1, ) }≤ m_n p_0 G(t). Therefore, substituting (<ref>) and (<ref>) into (<ref>) yields (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Proof of (<ref>). In the proof of Lemma <ref> in Section <ref> (cf. (<ref>)), we have shown that ( _j^⊤/_2, _j^⊤/_2) | ( _ℋ_1, )  d∼ N ( [ μ_j; μ_j; ], σ_j^2 [ 1 ρ_j; ρ_j 1; ]), where μ_j = _2^-1_j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1^⊤, σ_j^2 = _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j, ρ_j = 1 - r / σ_j^2, and r is as given in (<ref>). 
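Much of what follows rests on the distribution of |Z_1| - |Z_2| for a centered bivariate normal pair with equal variances, whose right tail has the closed form 2[1 - Φ(t/c_1)][1 - Φ(t/c_2)], with c_1 and c_2 the standard deviations of Z_1 + Z_2 and Z_1 - Z_2; this is the identity that appears below. A quick Monte Carlo check, with arbitrary parameter values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
var, cov = 1.0, 0.6                            # equal variances; covariance arbitrary
c1, c2 = np.sqrt(2 * var + 2 * cov), np.sqrt(2 * var - 2 * cov)

Z = rng.multivariate_normal([0.0, 0.0], [[var, cov], [cov, var]], size=1_000_000)
D = np.abs(Z[:, 0]) - np.abs(Z[:, 1])

for t in [0.2, 0.5, 1.0, 1.5]:
    print(f"t = {t}:  Monte Carlo {np.mean(D >= t):.4f}   "
          f"closed form {2 * norm.sf(t / c1) * norm.sf(t / c2):.4f}")
# Differentiating the closed-form tail in t gives the two-term density displayed below.
```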
Recall the definition N_dep = {j ∈ℋ_0: _ℋ_1, j≠ 0} in the proof of Lemma <ref>. It holds that |N_dep| ≤ m_n in view of Condition <ref>. Furthermore, note that G(t) ≥ c_1 q a_n / p for t ∈ (0, G^-1 ( c_1 q a_n / p ) ]. Let us define R_n := sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 ∩ N_dep^cℙ (t - Δ_n ≤W_j < t + Δ_n) /∑_j ∈ℋ_0 ∩ N_dep^cℙ (W_j ≥ t ) . Then we can write sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - Δ_n) - G(t + Δ_n) / G(t) = sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 ∩ N_depℙ (t - Δ_n ≤W_j < t + Δ_n) / p_0 G(t) + R_n ≤m_n p / c_1 q a_n p_0 + R_n. From the assumptions that (log p)^1/γ m_n / a_n → 0 and p_0 / p → 1, we have that (log p )^1/γm_n p / c_1 q a_n p_0 → 0. It remains to establish (log p)^1/γ R_n → 0. A key observation is that when j ∈ℋ_0 ∩ N_dep^c, it follows that the conditional distribution ( _j^⊤/_2, _j^⊤/_2) | ( _ℋ_1, )  d∼ N ( [ 0; 0; ], [ _j, j^2 _j, j^2 - r; _j, j^2 - r _j, j^2; ]), which does not depend on ( _ℋ_1, ). Then we see that the distribution of W_j does not depend on ( _ℋ_1, ) and satisfies that ℙ ( √(n)W_j ≥ t ) = ℙ ( |Z_1| - |Z_2| ≥ t ), where (Z_1, Z_2)^⊤ is a two-dimensional multinormal random variable with mean (0, 0)^⊤ and covariance matrix [ _j, j^2 _j, j^2 - r; _j, j^2 - r _j, j^2; ] . For j ∈ℋ_0 ∩ N_dep^c and t > 0, the density function of √(n)W_j is given by f_√(n)W_j(t) = √(2)/√(π) c_2, j[ 1 - Φ( t /c_1, j) ] exp{- t^2 /2 c_2, j^2 } + √(2)/√(π) c_1, j[ 1 - Φ( t /c_2, j) ] exp{- t ^2 /2 c_1, j^2 } , where c_1, j = √(4 _j,j^2 - 2 r ) and c_2, j = √(2r). Based on the density function of √(n)W_t above and the basic inequality that 1 - Φ(x) ≤ e^-x^2/2 for x ≥ 0, it is easy to see that ℙ ( W_j ≥ t ) = ℙ (√(n)W_j ≥√(n) t ) ≤∫_√(n) t^∞√(2)/√(π) c_2, jexp{- x^2 /2 c_2, j^2 } dx +∫_√(n) t^∞√(2)/√(π) c_1, jΦ( -x /c_2, j) dx ≤(2 + 2 c_2, j/ c_1, j) [1 - Φ(√(n) t/c_2, j) ]. Then we can obtain that G(t ) ≤max_j ∈ℋ_0(2 + 2 c_2, j/ c_1, j) [1 - Φ(√(n) t/ c_2, j) ] . Setting t = G^-1 ( c_1 q a_n/ p ) in the inequality above yields that G^-1 ( c_1 q a_n/ p ) = O( √(log p/n) ) when C_1 < r < _j, j^2 < C_2 with some absolute constants C_1> 0 and C_2> 0 for each j ∈ℋ_0. We will bound the ratio in R_n by considering two ranges of t∈ (0, 4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j) and t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)] separately. When t falls into the first range, in view of (<ref>) the denominator G(t) in the ratio in R_n is of a constant order, while the numerator is uniformly bounded from above by O(√(n)Δ_n) over all t in this range because the density f_√(n)W_j(t) is bounded from above by a constant. We now consider the ratio in R_n in the second range of t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)]. We will bound the numerator and denominator in (<ref>) separately in this range. It follows from (<ref>) and the mean value theorem that there exists some ξ∈ (√(n) t - √(n)Δ_n, √(n) t + √(n)Δ_n) such that ℙ (√(n) t - √(n)Δ_n ≤√(n)W_j ≤√(n) t + √(n)Δ_n) = 2 √(n)Δ_n {√(2)/√(π) c_2, j[ 1 - Φ( ξ/c_1, j) ] exp{ - ξ^2 / 2 c_2, j^2 } + √(2)/√(π) c_1, jexp{- ξ^2 /2 c_1, j^2}[1 - Φ( ξ/c_2, j) ] }. Moreover, since √(n) t ≤√(n) G^-1 (c_1 q a_n/p) = O( √(log p) ) and Δ_n √(n log p)→ 0, we can obtain through some direct calculations that | 1 - Φ( ξ/c_1, j) / 1 - Φ( √(n) t /c_1, j) - 1 | ≤ C √(n) t ·√(n)Δ_n = O ( Δ_n √(n log p)). Similarly, it holds that | exp{- ξ^2 /2 c_1, j^2}/exp{- (√(n) t)^2 /2 c_1, j^2} - 1 | ≤ C √(n) t ·√(n)Δ_n = O ( Δ_n √(n log p)). 
Combining the above three inequalities yields that when Δ_n √(n log p)→ 0, ℙ (t - Δ_n ≤W_j < t + Δ_n) = ℙ ( √(n) t - √(n)Δ_n ≤√(n)W_j ≤√(n) t + √(n)Δ_n) ≤ C √(n)Δ_n [1 + O (√(n)Δ_n log p)] {√(2)/√(π) c_2, j[ 1 - Φ( √(n) t /c_1, j) ] exp{- (√(n) t )^2 /2 c_2, j^2 } + √(2)/√(π) c_1, j[ 1 - Φ( √(n) t /c_2, j) ] exp{- (√(n) t ) ^2 /2 c_1, j^2 }}. Next we need to deal with the denominator ℙ (√(n)W_j ≥ t). Via integration by parts, we can deduce that for t ∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)], ℙ (√(n)W_j ≥√(n) t) = 2 [ 1 - Φ( √(n) t / c_1, j) ] [ 1 - Φ( √(n) t/c_2, j) ] ≥C{ (√(n) t)^-1[ 1 - Φ( √(n) t / c_1, j) ] exp{ - (√(n) t )^2 /2 c_2, j^2 } + (√(n) t)^-1[ 1 - Φ( √(n) t / c_2, j) ] exp{ - (√(n) t )^2 /2 c_1, j^2 }} ≥C (√(n) t)^-1 f_√(n)W_j (√(n) t) , where we have used the definition of the density in (<ref>) and the fact that 1 - Φ(x) ≥ 0.75 x^-1 e^- x^2/ 2 for x ≥ 4, and C is some constant depending on c_1, j and c_2, j. Combining (<ref>) and (<ref>) and using some direct calculations, we can obtain the bound for the ratio in R_n in the second range sup_t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p))∑_j ∈ℋ_0 ∩ N_dep^cℙ (t - Δ_n ≤W_j < t + Δ_n) /∑_j ∈ℋ_0 ∩ N_dep^cℙ (W_j ≥ t ) ≤C√(n)Δ_n·√(n) G^-1 ( c_1 q a_n / p ) = O (√(n)Δ_n √(log p)). This together with the result for the first range proved previously leads to R_n = O (√(n)Δ_n √(log p)). Finally, plugging (<ref>) into (<ref>) yields (<ref>) because (log p)^1/γ m_n / a_n → 0 and √(n)Δ_n (log p)^1/2 + 1/γ→ 0. Proof of (<ref>). Recall from Condition <ref> that p_1^-1∑_j ∈ℋ_1ℙ ( W_j < - t ) ≤ G(t) for t ∈ (0, C √(n^-1log p)) with C some large constant. Also, note that Δ_n = o(G^-1 (c_1 q a_n /p)) since √(n)Δ_n → 0 by assumption and G^-1 (c_1 q a_n/p) = O(√(n^-1log p)) as shown in the proof of (<ref>). It follows from some direct calculations that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + Δ_n ) ≤ a_n^-1 (p - p_0) G( G^-1 ( c_1 q a_n / p ) - Δ_n ) = c_1 q (p - p_0) / p + a_n^-1(p - p_0) | G' ( ξ ) |Δ_n, where ξ is some number lying between G^-1 ( c_1 q a_n / p ) and G^-1 ( c_1 q a_n / p ) - Δ_n. From (<ref>) and f_√(n)W_j (√(n)ξ )≤ C with C>0 some constant, we can deduce that | G' (ξ)| = ∑_j ∈ℋ_0 p_0^-1√(n) f_√(n)W_j (√(n)ξ ) ≤C √(n) m_n /p_0 + p_0^-1∑_j ∈ℋ_0 ∩ N_dep^c √(n) f_√(n)W_j (√(n)ξ ) ≤C √(n) m_n /p_0 + C p_0^-1√(n)·√(n) G( c_1 q a_n /p) ∑_H_0 ∩ N_dep^cℙ( W_j ≥ G( c_1 q a_n /p) ) ≤C √(n) m_n /p_0 + C p_0^-1√(n log p ) p_0 c_1 q a_n /p, where the second last step above is due to (<ref>). Therefore, substituting the bound above into (<ref>) gives that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p + Δ_n ) ≤c_1 q (p - p_0)/p + C Δ_n √(n) m_n (p - p_0 )/a_n p_0 + C Δ_n √(n log p ) q (p - p_0)/p → 0, where we have used the assumption that p_0 / p → 1, Δ_n √(n log p )→ 0, and m_n / a_n → 0. This derives (<ref>), which concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The main intuition of the proof is that when the approximate augmented data matrix ^ is close to its perfect counterpart ^, the corresponding Lasso estimators would be close as well. From the definitions of _j in (<ref>) and _j in (<ref>), it holds that max_1 ≤ j ≤ 2p | β_j - β̂_j | ≤max_1 ≤ j ≤ 2p | β_j^ - β̂_j^ | + max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j|. We will aim to prove that for some large enough constant C, ℙ( ^ - ^_2 ≤ C Δ_n s √(log p/n)) → 1, ℙ( max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j|≤ C Δ_n s√(log p/n))→ 1. 
Then combining the two results above can establish the desired conclusion of Lemma <ref>. We next proceed with proving (<ref>) and (<ref>). Proof of (<ref>). It follows from the Karush–Kuhn–Tucker (KKT) condition that n^-1 [^]^⊤^ (^ - ^ ) = n^-1 [^]^⊤ - λζ, n^-1 [^]^⊤^ (^ - ^ ) = n^-1 [^]^⊤ - λζ̂, where ζ = (ζ_1, …, ζ_2p) and ζ̂ = (ζ̂_1, …, ζ̂_2p) with ζ_j = {[ (β^_j) β_j^≠ 0,; ∈ [-1, 1] β_j^ = 0, ]. ζ̂_j = {[ (β̂_j^) β̂_j^≠ 0,; ∈ [-1, 1] β̂_j^ = 0. ]. Taking the difference between (<ref>) and (<ref>) above leads to n^-1 [^]^⊤^ (^ - ^ ) + n^-1([^]^⊤^ - [^]^⊤^) (^ - ^) = - n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) + n^-1( ^ - ^)^⊤ - λ (ζ - ζ̂). Furthermore, multiplying both sides of the equation above by (^ - ^ )^⊤ yields that n^-1^ (^ - ^ )_2^2 = n^-1( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^) - n^-1 ( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) + n^-1 ( ^ - ^)^⊤( ^ - ^)^⊤ - λ ( ^ - ^)^⊤ (ζ - ζ̂). We claim that the last term on the right-hand side of the expression above satisfies that ( ^ - ^)^⊤ (ζ - ζ̂) ≥ 0 . To understand this, observe that when both β_j^ and β̂_j^ are nonzero or zero, it is easy to see that (β_j^ - β̂_j^) (ζ_j - ζ̂_j ) ≥ 0. When either of β_j^ and β̂_j^ is zero, without loss of generality let us assume that β_j^ = 0 and β̂_j^≠ 0. When β_j^ = 0 and β̂_j^ > 0, it follows that ζ_j ≤ 1 = ζ̂_j and hence (β_j^ - β̂_j^) (ζ_j - ζ̂_j) = - β̂_j^ ((ζ_j - ζ̂_j)) ≥ 0. Similarly, we can show that (β_j^ - β̂_j^) (ζ_j - ζ̂_j) ≥ 0 when β_j^=0 and β̂_j^ < 0. Thus, the last term on the right-hand side of (<ref>) above satisfies that -( ^ - ^)^⊤ (ζ - ζ̂) ≤ 0 . We next examine the three terms on the right-hand side of the earlier expression above separately. First, let us observe that n^-1 [^]^⊤^ - [^]^⊤^_max ≤ n^-1 [^]^⊤(^ - ^) _max+ n^-1 ( ^ - ^)]^⊤^_max ≤max_jn^-1/2_j^_2max_jn^-1/2(_j^ - _j^)_2 + max_jn^-1/2_j^_2max_jn^-1/2(_j^ - _j^)_2. Under Condition <ref> and the sub-Gaussian assumption for , it can be shown that ℙ( n^-1 [^]^⊤^ - [^]^⊤^_max≥ C Δ_n ) → 0 for some constant C > 0. From the sparsity of and in Condition <ref>, we have that with probability 1 - o(1), the first term on the right-hand side of (<ref>) can be bounded as n^-1|( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^)| ≤ C Δ_n s ^ - ^_2^2. By the Cauchy–Schwarz inequality, we can bound the second term on the right-hand side of (<ref>) as |n^-1 ( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^ )| ≤^ - ^_2 n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) _2. Finally, with the aid of Condition <ref> on sparsity and Condition <ref> on the restrictive eigenvalues, the left-hand side of (<ref>) can be lower bounded by c_1^ - ^_2^2. Combining all the results above and applying the Cauchy–Schwarz inequality to the second and third terms on the right-hand side of (<ref>), we can deduce that as Δ_n s → 0, the representation in (<ref>) entails that with probability 1 - o(1), ^ - ^_2 ≲ n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^) _2 + max_J: |J| ≤ C s n^-1( ^_J - ^_J )^⊤_2 := I_1 + I_2. We will bound the two terms I_1 and I_2 above separately. It follows from (<ref>), the sparsity of and ^, and Lemma <ref> that with probability 1 - o(1), I_1 ≤ C Δ_n s^1/2^ - ^_2 ≤ C Δ_n s √(log p/n). As for term I_2, conditional on (^, ^ ) we have that for each 1 ≤ j ≤ 2p, n^-1/2( ^_j - ^_j )^⊤ d∼ N (0, n^-1^_j - ^_j _2^2 ). Thus, it holds that ℙ( I_2≥ C σΔ_n √(s log n /n )) ≤ℙ( s max_1 ≤ j ≤ 2p ( n^-1/2( ^_j - ^_j )^⊤)^2 ≥ C^2 σ^2 Δ_n^2 s log n ) = ℙ( max_1 ≤ j ≤ 2p n^-1/2^_j - ^_j _2 |Z| ≥ C σΔ_n √(log n)), where Z d∼ N(0, σ^2) is independent of ^ and ^. 
Moreover, Condition <ref> implies that max_1 ≤ j ≤ 2p^_j - ^_j _2 ≤Δ_n with probability 1 - o(1). Then using the union bound, we can obtain that for some constant C > √(2), ℙ( I_2 ≥ C σΔ_n √(s log n /n )) ≤ℙ( |Z| > C σ√(log n)) + ℙ(max_1 ≤ j ≤ 2p^_j - ^_j _2 ≥Δ_n) → 0. Consequently, substituting (<ref>) and (<ref>) into (<ref>) leads to (<ref>). Further, applying (<ref>) again with the bounds in (<ref>), (<ref>), (<ref>), and (<ref>) yields that ℙ(n^-1/2^ (^ - ^ ) _2 ≤ C Δ_n s √(log p/n)) → 1. Proof of (<ref>). Let us first state three results (<ref>), (<ref>), and (<ref>) below that will be used repeatedly in our proof. With similar arguments as for (<ref>) and (<ref>) and the union bound, we can deduce that under Conditions <ref>–<ref>, ℙ( max_1 ≤ j ≤ 2p _j - _j _2 ≤ C ( m_n^1/2Δ_n + Δ_n m_n √(log p/n)) ≤ C m_n^1/2Δ_n )→ 1, ℙ(n^-1/2max_j _-j^ ( _j - _j )_2 ≤ C Δ_n m_n √(log p/n)) → 1, where we have used √(m_n log p/n)→ 0 for showing (<ref>). Observe that for 1 ≤ j ≤ 2p, n^-1/2 ( _j - _j ) _2 ≤ n^-1/2 ( _j^ - _j^) _2 + n^-1/2_-j^ ( _j - _j ) _2 + n^-1/2 (_-j^ - _-j^) _j _2 + n^-1/2 (_-j^ - _-j^) (_j - _j) _2. Then it follows from the sparsity of 𝒮_j = (_j) ∪(_j) ∪(_j), the sub-Gaussianity of X_j, and the bound in (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ 2p n^-1/2 ( _j - _j ) _2 ≤ C (Δ_n + Δ_n m_n √(log p/n) + Δ_n m_n^1/2max_1 ≤ j ≤ 2p_j_2 + m_n Δ_n √(log p/n)) ≤ C Δ_n m_n^1/2max_1 ≤ j ≤ 2p_j_2 ≤ C Δ_n m_n^1/2. We are now ready to establish (<ref>). In particular, we have the decomposition for the main term in (<ref>) max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j| ≤max_1 ≤ j ≤ 2p | (_j - _j )^⊤( - ^^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤(^^ - ^^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤( - ^^) ( 1/_j^⊤^_j - 1/_j^⊤^_j )| := P_1 + P_2 + P_3. We will investigate the three terms P_1, P_2, and P_3 above separately. Let us first deal with term P_1. Note that P_1 ≤max_1 ≤ j ≤ 2p | (_j - _j )^⊤^( ^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤/_j^⊤^_j|. Since d∼ N( 0, I_n) and is independent of design matrix , it holds that conditional on design matrix , (_j - _j )^⊤/_j^⊤^_jd∼ N ( 0, _j - _j _2^2 / [_j^⊤^_j ]^2 ). This together with the bounds in (<ref>) and (<ref>) leads to ℙ( max_1 ≤ j ≤ 2p | (_j - _j )^⊤/_j^⊤^_j| > C m_n^1/2Δ_n √(log p/n)) = ∑_j = 1^2pℙ( _j - _j _2 / |_j^⊤^_j | · | Z | > C m_n^1/2Δ_n √(log p/n)) ≤∑_j = 1^2pℙ( _j - _j _2 / n · | Z | > C m_n^1/2Δ_n √(log p/n)) + o(1) ≤∑_j = 1^2pℙ (|Z| > C √(log p)) + o(1) = o(1), where Z d∼ N(0, σ^2) is independent of ^ and ^, and C is some large constant that may take different value at each appearance. In addition, from (<ref>), the Cauchy–Schwarz inequality, Lemma <ref>, and (<ref>), we can deduce that with probability 1 - o(1), max_1 ≤ j ≤ 2p | (_j - _j )^⊤^( ^ - ^) /_j^⊤^_j| ≤max_1 ≤ j ≤ 2p _j - _j _2 ^ (^ - ^) _2 / | _j^⊤^_j | ≤ C Δ_n m_n^1/2√(s log p/n). Substituting (<ref>) and (<ref>) into (<ref>) yields that with probability 1 - o(1), P_1 ≤ C Δ_n m_n^1/2√(s log p/n). We next turn to the bound for term P_2. It is easy to see that P_2 ≤max_1 ≤ j ≤ 2p | _j^⊤^(^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) ^/_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤^(^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤ ( ^ - ^ ) ^/_j^⊤^_j| := P_21 + P_22 + P_23 + P_24. 
Regarding term P_21, in view of (<ref>) and the definition of _j, we have that with probability 1 - o(1), P_21 ≤max_1 ≤ j ≤ 2p | β^_j - β̂_j^ | + max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C Δ_n s √(log p/n) + max_1 ≤ j ≤ 2p | ( _j + ^_-j ( _j - _j ) ) ^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C Δ_n s √(log p/n) + max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j|. We will bound the last two terms on the very right-hand side of the expression above separately. Since for ℓ≠ j, n^-1 [ _j^⊤^_ℓ] = 0 due to zero correlation between _j and _-j^, and _j and _ℓ^ both have i.i.d. sub-Gaussian entries, we can show that for ℓ≠ j, ℙ( max_1 ≤ j ≤ 2p max_ℓ≠ j n^-1 | _j^⊤_ℓ^ | ≥ C √(log p/n)) ≤ C p^-1→ 0. This combined with (<ref>), the sparsity assumption that |J| = |() ∪() ∪()| ≲ s, and the result in (<ref>) yields that with probability 1 - o(1), the second term on the very right-hand side of (<ref>) above can be bounded as max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p max_J': | J' | ≲ s _j^⊤_ J'∖{j}_2 ·^_J'∖{j} - ^_J'∖{j}_2 ≤ C √( s log p/n)·Δ_n s √(log p/n)≤ C Δ_n s √(log p/n), where the last inequality above holds due to the assumption that √( s log p/n)→ 0. By the Cauchy–Schwarz inequality, we can deduce that with probability 1 - o(1), the third term on the very right-hand side of (<ref>) above can be bounded as max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p ^ ( _j - _j ) _2 ·^_-j(^_-j - ^_-j)_2. An application of Lemma <ref>, (<ref>), and the sub-Gaussian assumption of X_j gives that with probability 1 - o(1), the second term on the right-hand side above can be bounded as max_1 ≤ j ≤ 2p n^ -1/2^_-j(^_-j - ^_-j)_2 ≤ n^ -1/2^(^ - ^)_2 + max_1 ≤ j ≤ 2p n^ -1/2^_j _2 | β_j - β̂_j| ≤ C Δ_n s √(log p/n). Then plugging (<ref>) into (<ref>) yields that max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C √(m_n log p/n)· C Δ_n s √(log p/n)≤ C Δ_n s √(log p/n), where the last inequality above is due to the assumption that √(slog p/n)→ 0 and m_n ≲ s. Hence, it follows from substituting (<ref>) and (<ref>) into (<ref>) that with probability 1 - o(1), P_21≤ C Δ_n s √(log p/n). We next proceed with considering term P_22 introduced in (<ref>). Observe that ^ - ^ = [0, - ] and ^ = (^⊤ , 0^⊤)^⊤. Then it holds that ( ^ - ^ ) ^ = 0. From (<ref>) and the Cauchy–Schwarz inequality, we can deduce that P_22 ≤max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) ^/_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) (^ - ^ ) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p_j_2 · ( ^ - ^ ) (^ - ^ ) _2. Moreover, we have _j = _j + _-j^ (_j - _j). Since the components of _j are i.i.d. sub-Gaussian random variables, it is easy to see that ℙ (max_1 ≤ j ≤ 2pn^-1/2_j _2 ≥ C) → 0 for some large enough constant C > 0. Further, it follows from the sub-Gaussianity of X_j and the sparsity of _j and _j that max_1 ≤ j ≤ 2p n^-1/2_-j^ (_j - _j) _2 ≤ C m_n^1/2√(m_n log p/n) ≤ C m_n √(log p/n)→ 0. Thus, when m_n √(log p/n)→ 0 we have ℙ ( n^-1/2max_1 ≤ j ≤ 2p _j _2 ≥ C) → 0 . Similarly, based on Lemma <ref> and the sparsity of ^ and ^, it holds that with probability 1-o(1), n^-1/2 ( ^ - ^ ) (^ - ^ ) _2 ≤max_J': |J'| ≲ s( ∑_j ∈ J' n^-1^_j - ^_j _2^2 )^1/2·^_J' - _J'^_2 ≤ C s^1/2Δ_n · ( √(s log p/n) + Δ_n s √(log p /n) ) ≤ C Δ_n s √(log p/n), where the last inequality above holds due to Δ_n s^1/2→ 0. Consequently, combining the above three inequalities shows that with probability 1 - o(1), P_22≤ C Δ_n s√(log p/n). 
We now deal with term P_23 in (<ref>). In view of the Cauchy–Schwarz inequality, (<ref>), (<ref>), and (<ref>), we can obtain that with probability 1 - o(1), P_23 ≤max_1 ≤ j ≤ 2p _j - _j _2 /_j^⊤^_j·^(^ - ^ ) _2 ≤ C √(m_n log p/n)·Δ_n s √(log p/n) ≤ C Δ_n s √(log p/n). As for term P_24, since ( ^ - ^ ) = 0 it follows that with probability 1 - o(1), P_24 = max_1 ≤ j ≤ 2p | (_j - _j )^⊤ ( ^ - ^ ) (^ - ^) /_j^⊤^_j | ≤max_1 ≤ j ≤ 2p _j - _j _2 /_j^⊤^_j· ( ^ - ^ ) (^ - ^ ) _2 ≤ C √(m_n log p/n)·Δ_n s √(log p/n) ≤ C Δ_n s √(log p/n), where we have applied the bounds in (<ref>), (<ref>), and (<ref>). Consequently, plugging (<ref>), (<ref>), (<ref>), and (<ref>) into (<ref>) yields that with probability 1 - o(1), P_2≤ C Δ_n s √(log p/n). Now we proceed with dealing with term P_3. Note that P_3 ≤max_1 ≤ j ≤ 2p | _j ^⊤ ( - ^^ ) | ·| _j^⊤^_j - _j^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | . From (<ref>) and (<ref>), we can see that with probability 1 - o(1), max_1 ≤ j ≤ 2p n^-1/2_j _2 ≤max_1 ≤ j ≤ 2p n^-1/2_j _2 + max_1 ≤ j ≤ 2p n^-1/2_j - _j _2 ≤ C + C m_n Δ_n ≤ C. It follows from (<ref>), Condition <ref>, and the sub-Gaussian distribution of _j^ that with probability 1 - o(1), n^-1 | (_j - _j)^⊤^_j | ≤ C Δ_n m_n^1/2, n^-1 | _j^⊤ (^_j - _j^)| ≤ C Δ_n. Then with the aid of (<ref>), we can show that with probability 1 - o(1), min_1 ≤ j ≤ 2p n^-1 | _j^⊤^_j | ≥min_1 ≤ j ≤ 2p n^-1| _j^⊤^_j | - max_1 ≤ j ≤ 2p( n^-1 | (_j - _j)^⊤^_j | - n^-1 | _j^⊤ (^_j - _j^)| ) ≥ C - Cm_n Δ_n - C Δ_n ≥ C as m_n Δ_n → 0. As for the second component on the right-hand side of (<ref>) above, combining the results in (<ref>), (<ref>), and (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ 2p| _j^⊤^_j - _j^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | ≤max_1 ≤ j ≤ 2p| ( _j - _j )^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | + max_1 ≤ j ≤ 2p| _j^⊤ (_j^ - _j^) | /| _j^⊤^_j | ·| _j^⊤^_j | ≤ C n^-1 ( m_n^1/2Δ_n + Δ_n ). Regarding the first component on the right-hand side in (<ref>), from (^ - ^) = 0 we can deduce that max_1 ≤ j ≤ 2p n^-1| _j ^⊤ ( - ^^ ) | ≤max_1 ≤ j ≤ 2p n^-1| _j ^⊤ | + max_1 ≤ j ≤ 2p n^-1| _j ^⊤^ ( ^ - ^ ) | + max_1 ≤ j ≤ 2p n^-1| _j ^⊤ (^ - ^) ^| . Since d∼ N( 0, σ^2I_n ), it is easy to see that for the standard normal random variable Z, ℙ(max_1 ≤ j ≤ 2p n^-1| _j ^⊤ | > C √(log p/n)) = ℙ(max_1 ≤ j ≤ 2p n^-1_j_2 · |Z| > C √(log p/n)) ≤ℙ (|Z| > C √(log p)) → 0. Further, by Lemma <ref>, the sub-Gaussianity of X_j, and the sparsity of ^ and ^, we can obtain that with probability 1 - o(1), n^-1 | _j^⊤^ ( ^ - ^ ) | ≤ n^-1 | _j^⊤^ ( ^ - ^ ) | + n^-1 | _j^⊤^ ( ^ - ^ ) | ≤ C( √( slog p /n) + Δ_n s √(log p /n)) ≤ C √( slog p /n). Similarly, since (^ - ^) = 0, it holds that with probability 1 - o(1), n^-1 | _j^⊤ (^ - ^) ^ | = n^-1 | _j^⊤ (^ - ^) ( ^ -^ ) | ≤ C Δ_n s^1/2·√( s log p/n) ≤ C √( s log p/n). Consequently, by m_n≲ s in Condition <ref> we have that with probability 1 - o(1), P_3 ≤ C m_n^1/2Δ_n ·√(slog p/n)≤ C Δ_n s √(log p/n). Finally, a combination of (<ref>), (<ref>), (<ref>), and (<ref>) establishes (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Using the definitions of W_j and w_j and the triangle inequality, we see that ∑_j = 1^p ℙ (| W_j - w_j | ≥ C √(n^-1log p )) ≤∑_j = 1^p ℙ( √(n)| |β_j - β_j | - |β_j + p - β_j + p| | ≥ C √(log p )) ≤∑_j = 1^p [ ℙ( √(n) |β_j - β_j | ≥ C √(log p ) / 2 ) + ℙ( √(n) |β_j + p - β_j + p| ≥ C √(log p ) /2 ) ]. The main idea of the proof is to exploit the decomposition in (<ref>) and the observation that the main term therein follows the normal distribution. 
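Before bounding the error term, the Lasso-based construction at the heart of the preceding two lemmas can be made concrete: fit the Lasso on an augmented design, check the KKT stationarity conditions that drive the perturbation bounds, and form coefficient-difference statistics of the type W_j = |β̂_j| - |β̂_{j+p}|. The snippet is only a sketch: the knockoff columns are an independent copy (valid here because the design columns are i.i.d. N(0,1)), the penalty level is arbitrary, and scikit-learn's Lasso objective ‖y - Xb‖²/(2n) + λ‖b‖₁ is consistent with the 1/n normalization of the KKT conditions used above.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p, s, lam = 600, 200, 10, 0.08
X = rng.standard_normal((n, p))
Xk = rng.standard_normal((n, p))                         # stand-in knockoff columns
beta0 = np.zeros(p); beta0[:s] = 0.8
y = X @ beta0 + rng.standard_normal(n)

Xaug = np.hstack([X, Xk])
b = Lasso(alpha=lam, fit_intercept=False, max_iter=100_000, tol=1e-10).fit(Xaug, y).coef_

# KKT check: n^{-1} Xaug^T (y - Xaug b) = lam * zeta, |zeta_j| <= 1, with zeta_j = sign(b_j) on the support.
g = Xaug.T @ (y - Xaug @ b) / n
active = np.abs(b) > 1e-8
print("inactive coordinates: max |g_j| =", np.max(np.abs(g[~active])), " (should be <= lam =", lam, ")")
print("active coordinates:   max |g_j - lam*sign(b_j)| =", np.max(np.abs(g[active] - lam * np.sign(b[active]))))

# Coefficient-difference statistics: large positive values point to signals,
# while the null statistics are approximately sign-symmetric.
W = np.abs(b[:p]) - np.abs(b[p:])
print("signals with W > 0:", int(np.sum(W[:s] > 0)),
      "  nulls with W > 0:", int(np.sum(W[s:] > 0)),
      "  nulls with W < 0:", int(np.sum(W[s:] < 0)))
```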
Let us start with bounding the error term in (<ref>). We claim that with probability 1 - O (p^-3), max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k^ - β_k^)/_j^⊤^_j | ≤C m_n^1/2 s log p /√(n). From the fact that β^_j + p = 0 for 1 ≤ j ≤ p and the bound in (<ref>), since m_n^1/2 s log p /√(n)≪√(log p) we can deduce through the union bound that ∑_j = 1^p ℙ( √(n) |β_j - β_j | ≥ C √(log p ) / 2 ) ≤∑_j = 1^p ℙ( |_j^⊤ | /_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) + ∑_j = 1^p ℙ( max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | > C m_n^1/2 s log p /√(n)) ≤∑_j = 1^p ℙ( |_j^⊤ | /_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) + O(p^-1). Recall the result (<ref>) in Lemma <ref> and that _j^⊤/_j _2 ∼ N(0, σ^2). As m_n log p/n = o(1), it holds that for some large constant C> 0, ∑_j = 1^p ℙ( _j^⊤/_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) ≤∑_j = 1^p ℙ( _j^⊤/_j _2 ≥C√(log p )) = p exp{ - C^2 log p / 2 }→ 0. Similarly, we can show that ∑_j = 1^p ℙ( _j + p^⊤/_j+p_2 ·√(n)τ_j ≥ C √(log p )) → 0. Plugging the two inequalities above into (<ref>) leads to the desired result in Lemma <ref>. It remains to establish (<ref>). Proof of (<ref>). Observe that for k ≠ j, n^-1_j^⊤^_k = n^-1(^_j - ^_-j_j) ^⊤^_k = n^-1_j ^⊤^_k - n^-1 (_j - _j )^⊤ (^_-j )^⊤^_k. Since _j and _k^ are uncorrelated, it follows from the sub-Gaussian assumption in Condition <ref> that for some constant C > 0, ℙ( n^-1 | _j ^⊤^_k | ≥ C √(log p /n)) ≤ 2 p^-3. In light of lemma <ref> and the sub-Gaussian assumption on _j, we can deduce that with probability 1 - O(p^-3), | n^-1 (_j - _j )^⊤ (^_-j )^⊤^_k | ≤ n^-1/2^_-j (_j - _j) _2 n^-1/2_k _2 ≤ C √(m_n log p/n). Plugging the above two results into (<ref>), when m_nlog p = o(n) an application of the union bound shows that with probability 1 - O( p^-1), max_ 1 ≤ j ≤ p max_k ≠ j n^-1 | _j^⊤^_k | ≤ C √(log p /n) + C √(m_nlog p /n) ≤ C √( m_n log p /n). Similarly, when √(log p /n) = o(1), we can show that there exists some constant C>0 such that with probability 1 - O(p^-1), min_1 ≤ j ≤ p n^-1_j^⊤^_j ≥ C. Consequently, plugging (<ref>), (<ref>), and (<ref>) into Lemma <ref> yields that with probability 1 - O (p^-3), max_1 ≤ j ≤ p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | ≤√(n)max_1 ≤ j ≤ pmax_k ≠ j | _j^⊤^_k | /min_1 ≤ j ≤ p | _j^⊤^_j | ·^ - ^_1 ≤ C √( m_n log p )· s √(log p/n) = C m_n^1/2 s log p /√(n), which establishes (<ref>). This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The intuition of the proof is that the sparsity of ^A implies the weak dependence among the components of the knockoff statistic vector = (W_1, …, W_p), which entails the weak dependence among the indicator functions 1 (W_j > t)'s. For 1 ≤ j ≤ p, let us define N_j = { l ∈ℋ_0: _j, l^A ≠ 0 }. From the sparsity assumption on ^A in Condition <ref>, we see that |N_j| ≤ m_n for any 1 ≤ j≤ p. Then we can obtain through expanding the variance that ( ∑_j ∈ℋ_01 (W_j > t) ) = ∑_j ∈ℋ_0∑_l ∈ N_j^c ∩ℋ_0 l ≠ j( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) + ∑_j ∈ℋ_0∑_l ∈ N_j ∪{ j }( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) := V_1(t) + V_2(t). We will deal with terms V_1(t) and V_2(t) above separately. Regarding the second term V_2(t), it follows from |N_j ∪{ j}| ≤ m_n + 1 that sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_2 (t) /p_0 G(t) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0∑_l ∈ N_j ∪{ j }ℙ (W_j ≥ t) /∑_j ∈ℋ_0ℙ (W_j ≥ t) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 (m_n + 1) ℙ (W_j ≥ t) /∑_j ∈ℋ_0ℙ (W_j ≥ t) ≤ m_n + 1. 
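The factor m_n + 1 above is exactly what local dependence buys: if each null statistic is dependent on at most m others, the variance of the exceedance count is at most (m + 1) times its mean, whereas globally dependent statistics with the same marginals can have a far larger count variance. A small simulation with moving-average null statistics (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
p, win, t, R = 1000, 3, 2.0, 4000
m = 2 * win                                   # each W_j depends on at most m = 2*win others

# m-dependent nulls: length-(win+1) moving averages of i.i.d. Gaussians.
Z = rng.standard_normal((R, p + win))
W = np.stack([Z[:, i:i + win + 1].sum(axis=1) for i in range(p)], axis=1) / np.sqrt(win + 1)
counts = (W >= t).sum(axis=1)
print("m-dependent:    mean", counts.mean(), " variance", counts.var(),
      " bound (m+1)*mean =", (m + 1) * counts.mean())

# Equicorrelated nulls (one global factor): same marginals, much larger count variance.
rho = 0.5
Weq = np.sqrt(rho) * rng.standard_normal((R, 1)) + np.sqrt(1 - rho) * rng.standard_normal((R, p))
counts_eq = (Weq >= t).sum(axis=1)
print("equicorrelated: mean", counts_eq.mean(), " variance", counts_eq.var())
```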
We claim that as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_1 (t) / [p_0 G (t)]^2 → 0. Therefore, combining (<ref>), (<ref>), and (<ref>) leads to the desired result of Lemma <ref>. It remains to establish (<ref>). Proof of (<ref>). Let {η_j}_j = 1^p be a sequence of independent random variables with η_j having density function given by h_j (t) = √(2)/√(π) a_j[1 - Φ( b_j^-1 t ) ] exp{ - t^2 / (2 v_ j^2) } + √(2)/√(π) b_j[1 - Φ( v_ j^-1 t ) ] exp{ - t^2 / (2 b_ j^2) }, where v_j = √( 2 ( e_j^2)^-1 (1 - (e_j, e_j+p)) ) and b_j =√( 2 ( e_j^2)^-1 (1 + (e_j, e_j+p)) ). The essential step in the proof is to show that for l ∈ N_j^c∩ℋ_0, ( |ξ_j| - |ξ_j+p|, |ξ_l| - |ξ_l+p|) d→ (η_j, η_l). We proceed with proving such result. Define δ_n = C m_n^1/2 s log p/√(n). We claim that for l ≠ j and l ∈ N_j^c∩ℋ_0, ℙ ( W_j ≥ t, W_l ≥ t ) ≤ℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t, W_l ≥ t ) ≥ℙ ( η_j ≥√(n) t + δ_n ) ℙ ( η_l ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t) ≥ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t) ≤ℙ ( η_j ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). The proofs for (<ref>)–(<ref>) above are analogous. Without loss of generality, we will present only the proof of (<ref>) and postpone it to the end of the proof for Lemma <ref>. In view of (<ref>)–(<ref>) above and the definition of V_1 (t) in (<ref>), we can deduce that V_1 (t) = ∑_j ∈ℋ_0∑_l ∈ N_j^c ∩ℋ_0 l ≠ j( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) ≤∑_j ∈ℋ_0∑_ l ≠ j{ℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n)) - ℙ ( η_j ≥√(n) t + δ_n ) ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) } + O(p^-1) = ∑_j ∈ℋ_0∑_ l ≠ jℙ (√(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ℙ ( η_l ≥√(n) t - δ_n ) + ∑_j ∈ℋ_0∑_ l ≠ jℙ ( η_j ≥√(n) t - δ_n ) ℙ (√(n) t - δ_n ≤η_l ≤√(n) t + δ_n) + ∑_j ∈ℋ_0∑_ l ≠ jℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) · O( √(m_n (log p)^3 /n)) + O(p^-1) := V_11(t) + V_12(t) + V_13(t) + O(p^-1). Recall that p_0 G(t) = ∑_j ∈ℋ_0ℙ (W_j ≥ t). Then it follows from the definition of V_11(t) and (<ref>) that V_11(t)/ [p_0 G(t)]^2 ≤∑_j ∈ℋ_0∑_ l ≠ jℙ (√(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ℙ ( η_l ≥√(n) t - δ_n ) /[ ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n) )) + O(p^-2) ]^2. We will consider two ranges t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)) and t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j), G^-1 (c_1 q a_n/p) ] separately. For the first range t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), we can see that √(n) t is upper bounded by a constant. Since δ_n = o(1) by the assumption that m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, it follows that √(n) t + δ_n and √(n) t - δ_n are both of a constant order. Hence, by the definition of the density function h_j(·) of η_j shown in (<ref>), max_1 ≤ j ≤ p h_j(u) is bounded by a constant for u ∈ [√(n) t - δ_n, √(n) t + δ_n], and C_1 ≤min_1≤ j ≤ pℙ (η_j ≥√(n) t + δ_n) ≤max_1 ≤ j ≤ pℙ (η_j ≥√(n) t - δ_n) ≤ C_2 for some positive constants C_1 < C_2. Thus, it is easy to see that sup_t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j))V_11(t)/ [p_0 G(t)]^2 ≤ C p_0^2 δ_n max_1≤ j ≤ psup_u ∈ [√(n) t - δ_n, √(n) t + δ_n] h_j (u) max_1 ≤ j ≤ pℙ (η_j ≥√(n) t - δ_n) / p_0^2 [min_1 ≤ j ≤ pℙ (η_j ≥√(n) t + δ_n) ]^2 ≤ C δ_n = C m_n^1/2 s log p/√(n). We proceed with considering the second range t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j), G^-1 (c_1 q a_n/p) ). 
An application of similar arguments as for (<ref>) shows that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ≤ C √(n) G^-1 (c_1 q a_n/p) ·δ_n. Moreover, it follows from plugging t = G^-1(c_1 q a_n/p) into (<ref>) and taking summation over j ∈ℋ_0 that c_1 q a_n p_0/p ≤∑_j ∈ℋ_0ℙ (η_j ≥√(n) G^-1(c_1 q a_n/p) - δ_n) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). Then from the density function h_j(t) for η_j, we can obtain through some direct calculations that ℙ ( η_j ≥ t) = 2 [1 - Φ(v_j^-1 t)] [1 - Φ(b_j^-1 t)]. Further, combining (<ref>) and (<ref>) yields that G^-1(c_1 q a_n/p) = O(√(log p/n) ). Substituting this bound into (<ref>) implies that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ≤ C m_n^1/2 s (log p)^3/2/√(n), where in the last inequality above we have utilized the definition of δ_n. Thus as m_n^1/2 s (log p)^3/2/√(n)→ 0, it holds that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]| ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - δ_n ) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) - 1 | ≤ C m_n^1/2 s (log p)^3/2/√(n)→ 0. Since p_0 G(t) ≥ c_1 q a_n p_0 / p →∞ for 0≤ t ≤ G^-1 (c_1 q a_n/p), it follows from taking summation over j ∈ℋ_0 on both sides of (<ref>) that as m_n^1/2 (log p)^3/2 / √(n)→ 0, ∑_j ∈ℋ_0ℙ (η_j ≥√(n) t - δ_n) ≥ C ( c_1 q a_n p_0/ p + O(p^-2 ) ) →∞, which along with (<ref>) implies that ∑_j ∈ℋ_0ℙ (η_j ≥√(n) t + δ_n) ≥ C ( c_1 q a_n p_0/ p + O(p^-2 ) ) →∞. Combining this with (<ref>), we can further bound the ratio in (<ref>) in the second range of t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)) as sup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]V_11(t)/ [p_0 G(t)]^2 ≤{[∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ]^2 /[ ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ]^2 + ∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) } ×(1 + O ( √(m_n (log p)^3/n) + p^-2)) ≤ C m_n^1/2 s (log p)^3/2/√(n). Hence, we see from the above result and (<ref>) that sup_t ∈ (0, G^-1(c_1 q a_n/p))V_11(t)/ [p_0 G(t)]^2 ≤ C m_n^1/2 s (log p)^3/2/√(n). In a similar manner, we can deduce that sup_t ∈ (0, G^-1(c_1 q a_n/p))V_12(t)/ [p_0 G(t)]^2 ≤ C m_n^1/2 s (log p)^3/2/√(n) and sup_t ∈ (0, G^-1(c_1 q a_n/p))V_13(t)/ [p_0 G(t)]^2 ≤ C √(m_n (log p)^3/n). Combining (<ref>) and (<ref>)–(<ref>) yields (<ref>) as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0. This completes the proof of (<ref>). It remains to establish (<ref>). Proof of (<ref>). Note that for j ∈ℋ_0, it holds that β_j^ = β_j + p^ = 0 under the setting of the linear model. Then it follows that W_j = | β_j | - |β_j+p| = | β_j - β_j^ | - | β_j + p - β_j+p^ |. For 1 ≤ j ≤ 2p, let us define ξ_j = √(n)τ_j ·_j^⊤/_j _2. In view of the expression in (<ref>) and the bound of the remainder term established in (<ref>), an application of the total probability inequality gives that ℙ( W_j ≥ t, W_l ≥ t ) ≤ℙ( | ξ_j | - | ξ_j + p| ≥√(n) t - δ_n, | ξ_l | - | ξ_l + p| ≥√(n) t - δ_n ) + ℙ( max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | > δ_n ) = ℙ( | ξ_j | - | ξ_j + p| ≥√(n) t - δ_n, | ξ_l | - | ξ_l + p| ≥√(n) t - δ_n ) + O (p^-3). It suffices to consider probability ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n ) for t ∈ (0, √(n) G^-1 (c_1 q a_n/p) ]. 
A useful observation is that ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n ) ≤ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n, max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| }≤ C √(log p )) + ℙ( max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| } > C √(log p)) := P_1 + P_2. We will consider terms P_1 and P_2 above separately. Let us first deal with term P_2. From the definition of ξ_j, (<ref>) in Lemma <ref>, and the fact that _j^⊤/_j _2 d∼ N(0, 1), we can obtain through the union bound that as m_n log p /n→ 0 and for some large constant C > 4 ( e_j^2)^-1/2, ℙ ( |ξ_j| ≥ C √(log p) ) ≤ℙ( | _j^⊤/_j _2 | ≥ 2 C ( e_j^2)^1/2√(log p) / 3 ) + ℙ (√(n)τ_j ≥ 3 ( e_j^2)^-1/2 /2 ) = O (p^-3). Hence, the inequality above implies that P_2 = O( p^-3). We next proceed with analyzing term P_1. Given ^, denote by f_ξ, ξ_j+p (x, y) the density of (ξ_i, ξ_j+p) and f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) the conditional density of (ξ_l, ξ_l+p ) | (ξ_j, ξ_j+p). Then probability P_2 can be written as ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n, max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| ≤ C √(log p )) = _^[∫_|x| - |y| ≥ t - δ_n |x| ≤ C √(log p) |y| ≤ C √(log p) f_ξ, ξ_j+p (x, y) ·∫_|u| - |w| ≥ t - δ_n |u| ≤ C √(log p) |w| ≤ C √(log p) f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) du dv dx dy ]. Since d∼ N( 0, I_n) and is independent of ^, it is easy to see that for j ≠ l, conditional on ^ we have (ξ_j, ξ_j + p, ξ_l, ξ_l + p)^⊤| ^d∼ N ( 0, ), where the covariance matrix is given by = [ _11_12; _21_22 ] with _11 = [ n τ_j^2 n _j^⊤_j + p/ |_j^⊤_j^| |_j + p^⊤_j + p^|; n _j^⊤_j + p/ |_j^⊤_j^| |_j + p^⊤_j + p^| n τ_j + p^2 ], _12 = _21^⊤ = [ n _j^⊤_l/ |_j^⊤_j^| |_l^⊤_l^| n _j^⊤_l + p/ |_j^⊤_j^| |_l + p^⊤_l + p^|; n _l^⊤_j + p/ |_l^⊤_l^| |_j + p^⊤_j + p^| n _l +p^⊤_j + p/ |_l + p^⊤_l +p^| |_j + p^⊤_j + p^| ], _22 = [ n τ_l^2 n _l^⊤_l + p/ |_l^⊤_l^| |_l + p^⊤_l + p^|; n _l^⊤_l + p/ |_l^⊤_l^| |_l + p^⊤_l + p^| n τ_l + p^2 ]. It follows from the conditional distribution of the multivariate normal distribution that given ^, f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, v | x, y) = 1 /2π | _22 - _21_11^-1_12 |^1/2× exp{ - 1/2[ ( u v ) - _21_11^-1( x y ) ]^⊤ (_22 - _21_11^-1_12)^-1 ·[ ( u v ) - _21_11^-1( x y ) ] }. For l ≠ j and l ∈ N_j^c, it holds that (e_j, e_l) = _j, l^A/_j, j^A _l, l^A = 0. Since _j, l^A = _j, l+p^A = _j + p, l^A = _j+p, l+p^A due to the symmetric structure of , we also have (e_j, e_l+p) = (e_j+p, e_l) = (e_j+p, e_l+p ) = 0 for l ≠ j and l ∈ N_j^c. Then it follows from (<ref>) in Lemma <ref> that for l ≠ j and l ∈ N_j^c, with probability 1- O(p^-3) n^-1_j^⊤_l ≤ C √(m_n log p/n), n^-1_j^⊤_l+p≤ C √(m_n log p/n), n^-1_j+p^⊤_l≤ C √(m_n log p/n), n^-1_j+p^⊤_l+p≤ C √(m_n log p/n). Similarly, for 1 ≤ j ≤ 2p we can show that with probability 1 - O(p^-3), n^-1_j^⊤^_j ≥ C. _j^⊤_l = (_j^ - _-j^_j)^⊤ (_l^ - _-l^_l) = ( _j + ^_-j (_j - _j) )^⊤ ( _l + _-l^ (_l - _l) ) = _j^⊤_l + _j^⊤_-l^ (_l - _l) + _l^⊤_-j^ (_j - _j) + [ _-j^ (_j - _j) ]^⊤_-l^ (_l - _l). For l ≠ j and l ∈ N_j^c, we know that [e_j e_l] = (e_j, e_l) = _j, l^A/_j, j^A _l, l^A = 0. In addition, since e_j and e_l are sub-Gaussian random variables, we can obtain by applying Bernstein’s inequality for sub-exponential random variables that ℙ (n^-1 | _j^⊤_l | ≥ C √(log p/n)) = O(p^-3). 
Moreover, in view of Lemma <ref>, sparsity of _j and _j and the fact that (e_j, X_l) = 0 for j ≠ l, we have with probability 1 - O(p^-3) n^-1| _j^⊤_-l^ (_l - _l) | ≤ n^-1| _j^⊤_j^ (_l, j - _l) | + n^-1| ∑_k ≠ j, l_j^⊤_k^ (_l,k - _l, k) | ≤ n^-1| _j^⊤_j^ (_l, j - _l) | + n^-1[max_J': |J'| ≲ m_n∑_k ≠ j, l k ∈ J' (_j^⊤_k^ )^2 ∑_k ≠ j, l k ∈ J' (_l,k - _l, k)^2 ] ^2 ≤ C√(m_n log p/n) +C √(m_n log p/n)·√(m_n log p/n)≤ C √(m_n log p/n). Similarly, it holds that with probability 1 - O(p^-3), n^-1| _l^⊤_-j^ (_j - _j) | ≤ C √(m_n log p/n). Additionally, we have that with probability 1 - O(p^-3), | [ _-j^ (_j - _j) ]^⊤_-l^ (_l - _l) | ≤ C √(m_n log p/n)·√(m_n log p/n)≤ C √(m_n log p/n). Therefore, for l ≠ j and l ∈ N_j^c, it follows that with probability 1 - O(p^-3), n^-1_j^⊤_l ≤ C √(m_n log p/n). Then from (<ref>), (<ref>), and the definition of _12, we can obtain that with probability 1 - O(p^-3), _12_max≤ C √(m_n log p/n). We have shown in (<ref>) that _12_max≤ C √(m_n log p/n) with probability 1 - O(p^-3). Similarly, when e_j^2 e_j+p^2 - ([e_j e_j+p] )^2 > C for some constant C > 0, it can be shown that | V_11 | ≥ C and |V_22| ≥ C with probability 1 - O(p^-3). Let us define an event 𝒞 = {^: _12_max≤ C_1 √(m_n log p/n), |_22| ≥ C_2, |_11| ≥ C_2, _11_max≤ C_3, _22_max≤ C_3}. We have shown that ℙ (𝒞) ≥ 1- O(p^-3). Then it is straightforward to see that conditional on event 𝒞, we have 1 /2π | _22 - _21_11^-1_12 |^1/2 = 1/2 π | V_22 |^-1/2(1 + O ( m_n log p/n) ) and _22^-1 - (_22 - _21_11^-1_12)^-1_max≤ C m_n log p/n. In addition, given event 𝒞 and the range that |x| ≤ C √(log p) and |y| ≤ C √(log p), it holds that _21_11^-1( x y ) _2 ≤ C √(m_n/n)log p. Further, given event 𝒞 and that max{|u|, |w|, |x|, |y|}≤ C √(log p), it follows from (<ref>)–(<ref>) that as m_n (log p)^3/n = o(1), | [ ( u w ) - _21_11^-1( x y ) ]^⊤ (_22 - _21_11^-1_12)^-1[ ( u w ) - _21_11^-1( x y ) ] - ( u w )^⊤_22^-1( u w ) | ≤ C √(m_n (log p)^3/n). Hence, substituting the bounds in (<ref>) and (<ref>) into (<ref>) yields that as m_n (log p)^3/n = o(1), f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) = 1 /2π | _22 |^1/2exp{ - 1/2( u w )^⊤_22^-1( u w ) }·(1 + O ( √(m_n (log p)^3/n))) = f_ξ_l, ξ_l+p (u, w) (1 + O ( √(m_n (log p)^3/n))), which entails that (ξ_l, ξ_l+p) is asymptotically independent of (ξ_j, ξ_j+p) for l ≠ j and l ∈ N_j^c. By plugging (<ref>) into (<ref>), we can deduce that P_1 ≤𝔼{1 (𝒞) ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ×ℙ( |ξ_l| - |ξ_l+p| ≥ t - δ_n, max{ |ξ_l|, |ξ_l+p|}≤ C √(log p) | ^)} ×(1 + O ( √(m_n (log p)^3/n))) + ℙ (𝒞^c ), where ℙ (𝒞^c ) = O(p^-3). We next show that given ^, |ξ_j| - |ξ_j+p| converges in distribution to η_j. Given ^, we see that (ξ_j, ξ_j+p) d∼ N( 0, _11). Without ambiguity, let us denote by _11 = [ σ_1, n^2 ρ_n σ_1, nσ_2, n; ρ_n σ_1, nσ_2, n σ_2, n^2 ] for simpler notation, where σ_1, n^2 = n τ_j^2, σ_2, n^2 = n τ_j + p^2, and ρ_n = _j^⊤_j+p / (_j_2 _j+p_2 ). We define an event ℰ = {|σ_1, n^2 - ( e_j^2)^-1 | ≤ C √(m_n log p/n), |σ_2, n^2 - ( e_j+p^2)^-1 | ≤ C √(m_n log p/n), | ρ_n - (e_j, e_j+p) | ≤ C √(m_n log p/n)}. It follows from Lemma <ref> that ℙ (ℰ) ≥ 1 - O(p^-3). 
Some straightforward calculations show that for t > 0, given ^ the density of |ξ_j| - |ξ_j+p| can be written as f_|ξ_j| - |ξ_j+p| (t) = √(2)/√(π)a_1, n[1 - Φ( a_2, n^-1 t ) ] exp{ - t^2 / (2 a_1, n^2) } + √(2)/√(π)a_3, n[1 - Φ( a_4,n^-1 t ) ] exp{ - t^2 / (2 a_3, n^2) }, where a_1, n = √(σ_1, n^2 + σ_2, n^2 - 2 ρ_n σ_1, nσ_2, n), a_2, n = σ_1, nσ_2, n a_1, n√( (1 - ρ_n^2))/σ_2, n^2 - ρ_n σ_1, nσ_2, n, a_3, n = √(σ_1, n^2 + σ_2, n^2 + 2 ρ_n σ_1, nσ_2, n), a_4, n = σ_1, nσ_2, n a_3, n√( (1 - ρ_n^2))/σ_2, n^2 + ρ_n σ_1, nσ_2, n . Recall the notation that v_j = √( 2 ( e_j^2)^-1 (1 - (e_j, e_j+p)) ) and b_j =√( 2 ( e_j^2)^-1 (1 + (e_j, e_j+p)) ). It holds that (e_j^2 ) = (_j, j^A)^-1 = (^A_j+p, j+p)^-1 = (e_j+p^2) due to the symmetry of ^A. On event ℰ, we have that | a_1, n / v_j - 1 | ≤ C √(m_n log p/n), | a_2, n / b_j - 1 | ≤ C √(m_n log p/n), |a_3, n / b_j - 1 | ≤ C √(m_n log p/n), | a_4, n / v_j - 1 | ≤ C √(m_n log p/n). Thus, in view of the definition of h_j(t) in (<ref>) and (<ref>), it follows that as |t| ≤ C√(log p), f_|ξ_j| - |ξ_j+p| (t) = h_j (t) (1 + O ( √(m_n (log p)^3/n))). With the aid of the above result, we can deduce that on event ℰ, ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ≤ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, |ξ_j| - |ξ_j+p| ≤ C √(log p) | ^) ≤(∫_ t - δ_n ^ C √(log p) h_j (u) du ) (1 + O ( √(m_n (log p)^3/n))) = ℙ ( t - δ_n ≤η_j ≤ C √(log p) ) (1 + O ( √(m_n (log p)^3/n))) = [ℙ ( η_j ≥ t - δ_n ) - ℙ (η_j > C √(log p) ) ] (1 + O ( √(m_n (log p)^3/n))). Moreover, in light of (<ref>) it is easy to see that ℙ (η_j > C √(log p) ) = O(p^-3) for some large constant C, which together with (<ref>) leads to ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ≤ℙ ( η_j ≥ t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O (p^-3). Plugging (<ref>) into (<ref>) shows that P_1 ≤ℙ ( η_j ≥ t - δ_n ) ℙ ( η_l ≥ t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). Finally, combining (<ref>), (<ref>), (<ref>), and (<ref>) yields (<ref>). Similarly, we can also establish (<ref>)–(<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Let us first prove (<ref>). In the proof of Lemma <ref> in Section <ref>, we have established the lower bound and upper bound for ℙ (W_j ≥ t) in (<ref>) and (<ref>), respectively. Recall the definitions that δ_n = C m_n^1/2 s log p/√(n) and b_n = C Δ_n s √(log p/n). For the numerator and denominator in (<ref>), we can write that p_0( G(t - b_n) - G(t + b_n) ) = ∑_j ∈ℋ_0[ ℙ (W_j ≥ t - b_n ) - ℙ (W_j ≥ t + b_n ) ] ≤∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - √(n) b_n -δ_n ) (1 + O ( √(m_n (log p)^3/n))) - ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + √(n) b_n + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-2) ≤∑_j ∈ℋ_0ℙ ( √(n) t - √(n) b_n - δ_n ≤η_j ≤√(n) t + √(n) b_n + δ_n ) + ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - √(n) b_n -δ_n ) · O ( √(m_n (log p)^3/n)) + O(p^-2) and p_0 G(t) ≥∑_j ∈ℋ_0ℙ (η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-2), respectively. It follows from (<ref>)–(<ref>), similar arguments as for (<ref>), and G^-1 (c_1 q a_n/p) = O(√(log p/n)) in the proof of Lemma <ref> that as √(n) G^-1 (c_1 q a_n/p) (√(n) b_n+ δ_n ) → 0, sup_t ∈ (0, G^-1 (c_1 q a_n/p)] G(t - b_n ) - G(t + b_n) / G(t) ≤ C √(log p) (√(n) b_n + δ_n) + C √(m_n (log p)^3/n) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ). Thus, we see that when m_n^1/2 s (log p)^3/2 + 1/γ/√(n) + Δ_n s (log p)^1 + 1 /γ→ 0, the desired result (<ref>) holds. We next proceed with establishing (<ref>). In view of Condition <ref>, it holds that p_1^-1∑_j ∈ℋ_1ℙ ( W_j < - t ) ≤ G(t) for t = O(√(n^-1log p)). 
Moreover, we have b_n = C Δ_n s √(log p/n) = o(G^-1(c_1 q a_n /p)) due to the assumption Δ_n s → 0 and G^-1 (c_1 q a_n /p) = O (√(log p/n) ). Then it follows that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + b_n ) ≤ a_n^-1 (p - p_0) G( G^-1 ( c_1 q a_n / p ) - b_n ) = c_1 q (p - p_0) / p + a_n^-1(p - p_0) [ G ( G^-1 ( c_1 q a_n / p ) - b_n ) - G ( G^-1 ( c_1 q a_n / p ) ) ]. For notational simplicity, let us define t_n = G^-1 ( c_1 q a_n / p ). With the aid of the upper and lower bounds for ℙ (W_j ≥ t) given in (<ref>) and (<ref>), we can deduce that G ( t_n - b_n ) - G ( t_n ) ≤ p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) (1 + O( √(m_n (log p)^3 /n)) ) - p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n + δ_n ) (1 + O( √(m_n (log p)^3 /n)) ) + O(p^-2) = p_0^-1∑_j ∈ℋ_0ℙ (√(n) t_n - √(n) b_n - δ_n ≤η_j ≤√(n) t_n + δ_n ) + p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) · O( √(m_n (log p)^3 /n)) + O(p^-2). An application of similar arguments as for (<ref>) leads to ℙ (√(n) t_n - √(n) b_n - δ_n ≤η_j ≤√(n) t_n + δ_n ) /ℙ (η_j ≥√(n) t_n + δ_n ) ≤ C √(t_n) ( √(n) + δ_n ) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) and | ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) /ℙ (η_j ≥√(n) t_n + δ_n ) - 1 | ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ). It follows from the lower bound in (<ref>) and G(t_n) = G(G^-1(c_1 q a_n/p)) = c_1 q a_n/p that as m_n (log p)^3/n→ 0, p_0^-1∑_j ∈ℋ_0ℙ (η_j ≥√(n) t_n + δ_n ) ≤ C (c_1 q a_n/p + O(p^-3) ) ≤ C c_1 q a_n/p. Therefore, combining (<ref>)–(<ref>) shows that G ( t_n - b_n ) - G ( t_n ) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q a_n /p + C √(m_n (log p)^3 /n)·c_1 q a_n /p + O(p^-2) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q a_n /p + O(p^-2). Finally, substituting the above bound into (<ref>) yields that as m_n^1/2 s (log p)^3/2 /√(n) + Δ_n s (log p) → 0, a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p + Δ_n ) ≤c_1 q (p - p_0)/p + C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q (p - p_0)/p + O( p - p_0 / a_n p^2) → 0, where we have used the assumption that p_0/p → 1. This establishes (<ref>), which concludes the proof of Lemma <ref>.
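As a quick numerical sanity check, separate from the formal argument above, the closed-form density of |ξ_j| - |ξ_j+p| stated in the proof can be compared against a Monte Carlo estimate for a centered bivariate normal vector. The following Python sketch does this for illustrative, assumed values of σ_1,n, σ_2,n, and ρ_n; all variable names are hypothetical and the snippet is only meant to make the formula concrete.

```python
# Illustrative check (not part of the proof): compare the closed-form density
# of |X| - |Y| at t > 0, for (X, Y) centered bivariate normal, with a Monte
# Carlo histogram estimate. sigma1, sigma2, rho below are assumed values.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma1, sigma2, rho = 1.3, 0.8, 0.4      # assumed parameters

def density(t, s1, s2, r):
    """Closed-form density of |X| - |Y| at t > 0, as given in the proof above."""
    a1 = np.sqrt(s1**2 + s2**2 - 2 * r * s1 * s2)
    a3 = np.sqrt(s1**2 + s2**2 + 2 * r * s1 * s2)
    a2 = s1 * s2 * a1 * np.sqrt(1 - r**2) / (s2**2 - r * s1 * s2)
    a4 = s1 * s2 * a3 * np.sqrt(1 - r**2) / (s2**2 + r * s1 * s2)
    return (np.sqrt(2 / np.pi) / a1 * (1 - norm.cdf(t / a2)) * np.exp(-t**2 / (2 * a1**2))
            + np.sqrt(2 / np.pi) / a3 * (1 - norm.cdf(t / a4)) * np.exp(-t**2 / (2 * a3**2)))

# Draw samples of (X, Y) and form |X| - |Y|.
cov = np.array([[sigma1**2, rho * sigma1 * sigma2],
                [rho * sigma1 * sigma2, sigma2**2]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000)
diff = np.abs(xy[:, 0]) - np.abs(xy[:, 1])

# Compare the analytic density with a histogram estimate on a grid of t > 0.
width = 0.05
for t in np.linspace(0.1, 3.0, 10):
    mc = np.mean((diff >= t - width / 2) & (diff < t + width / 2)) / width
    print(f"t = {t:4.2f}   analytic = {density(t, sigma1, sigma2, rho):.4f}   MC = {mc:.4f}")
```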
http://arxiv.org/abs/2307.05686v1
20230711180012
Unconventional Dicke model: Multistabilities and nonequilibrium dynamics
[ "Farokh Mivehvar" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas", "nlin.AO" ]
Corresponding author: [email protected]. Institut für Theoretische Physik, Universität Innsbruck, A-6020 Innsbruck, Austria. The Dicke model describes the collective behavior of a sub-wavelength–size ensemble of two-level atoms (i.e., spin-1/2) interacting identically with a single quantized radiation field of a cavity. Across a critical coupling strength it exhibits a zero-temperature phase transition from the normal state to the superradiant phase, where the field is populated and the collective spin acquires a nonzero x-component, which can be imagined as ferromagnetic ordering of the atomic spins along x. Here we introduce a variant of this model where two sub-wavelength–size ensembles of spins interact with a single quantized radiation field with opposite strengths. In addition to the common x-ferromagnetic superradiance, we analytically find an exotic superradiant state with x-ferrimagnetic ordering, coexisting with the x-ferromagnetically ordered superradiant state in large parameter regimes. The stability and dynamics of the system in the thermodynamic limit are then examined using a semiclassical approach, which predicts non-stationary behaviors due to the multistabilities. At the end, we also perform small-scale full quantum-mechanical calculations, with results consistent with the semiclassical ones. Unconventional Dicke model: Multistabilities and nonequilibrium dynamics Farokh Mivehvar Indian Institute of Technology Kharagpur ======================================================================== Introduction.—The nonequilibrium Dicke model is one of the most celebrated models in quantum optics <cit.>. It describes the collective behavior of N two-level atoms (i.e., spin-1/2) cooperatively interacting with a single quantized radiation field. The model has a fairly simple Hamiltonian (we set ħ=1 throughout the paper) <cit.>, Ĥ_ D=ω_câ^†â+ω_aŜ_z+λ(â^†+â)Ŝ_x, where ω_c is the frequency of the cavity mode, ω_a the transition frequency of one of the atoms, and λ the single-atom–field coupling strength. Here, â is the bosonic annihilation operator of the cavity radiation field and 𝐒̂=(Ŝ_x,Ŝ_y,Ŝ_z) is the collective atomic spin operator of length N/2 with components Ŝ_α=∑_j=1^N ŝ_j,α, where α={x,y,z}, defined in terms of the common single-atom spin operators ŝ_j,α. Despite its simple form, the Dicke Hamiltonian (<ref>) is expected to exhibit a variety of interesting phenomena, most notably the zero-temperature phase transition from the normal (N) state with the field being in the vacuum state to the superradiant (SR) phase where the field acquires a nonzero photon number ⟨â^†â⟩≠ 0 across the critical coupling strength √(N)λ_c=√(ω_cω_a) <cit.>. Correspondingly, the collective atomic spin, completely in the spin-down state ⟨Ŝ_z ⟩ = -N/2 initially—i.e., ferromagnetically (Fo) ordered along the negative z direction, referred to as “-zFo-N” in this work—obtains a nonzero x-component ⟨Ŝ_x ⟩≠ 0. In the deep superradiant phase λ→∞, the collective atomic spin orients almost completely in the positive or negative x direction, ⟨Ŝ_x ⟩→± N/2, depending on the ℤ_2 parity symmetry of the Dicke Hamiltonian broken at the onset of the superradiant phase transition. Therefore, we refer to this superradiant phase with its asymptotic ferromagnetic ordering along the positive or negative x direction as “+xFo-SR” and “-xFo-SR”, respectively, designated collectively by “±xFo-SR” (the “+” and “-” signs specify the direction of the total spin along a desired axis).
The Dicke model has been successfully implemented using cavity-assisted two-photon Raman transitions between atomic momentum states for both bosonic <cit.> and fermionic <cit.> atoms as well as using cavity-assisted two-photon Raman transitions between atomic hyperfine ground states <cit.>, bypassing the no-go theorems <cit.>. Generalized implementations of it including both momentum and hyperfine internal states have also been proposed <cit.> and realized experimentally <cit.> in quantum-gas–cavity platforms. There is also great interest in realizing Dicke-type models and superradiance in waveguide-QED setups <cit.> and cavity quantum materials <cit.>. Motivated by the recent progress in quantum-gas–cavity QED <cit.>, in this Letter we introduce a variant of the Dicke model, for which we coin the name “non-standard Dicke model”, where two independent ensembles of spin-1/2 atoms are coupled to a single mode of a cavity, but with opposite coupling-strength signs. The system exhibits a wealth of intriguing phenomena. In particular, the semi-classical approach reveals the existence of multistable steady-state phases as shown in Figs. <ref> and <ref>, most notably a bistable superradiant regime. This bistable region contains ±xFo-SR states where the total spins in both ensembles orient in the same x direction (i.e., both in either the positive or the negative x direction) in the strong coupling limit. However, it also includes more exotic superradiant phases where the total spins in the two ensembles point in opposite x directions (i.e., one in the positive and the other in the negative x direction) in the strong coupling limit, thus forming ferrimagnetic (Fi) order “±xFi-SR”. This order evolves into antiferromagnetic (aF) order, “±xaF-SR”, when the two ensembles have an equal number of spins. The linear-stability analysis as well as the nonequilibrium dynamics of the system confirm the stability of these steady states. Furthermore, we find initial states in the multistable regimes where the ensuing nonequilibrium dynamics do not lead to any steady state of the system, but rather give rise to non-stationary oscillating trajectories due to competing fixed points in these regimes; see Fig. <ref>. Finally, the full quantum-mechanical calculations also confirm the coexistence of ±xFo-SR and ±xFi-SR as shown in Fig. <ref>. Model and Hamiltonian.—Consider two independent ensembles of N_1,2 spins-1/2 located at negative and positive maxima of a cavity field and coupled to it with strengths ±λ, respectively. The system is described by a non-standard Dicke Hamiltonian, Ĥ_ nsD=ω_câ^†â+ω_aŜ_z+λ(â^†+â)ΔŜ_x, where Ŝ_α≡Ŝ_1,α+Ŝ_2,α=∑_j=1^N_1ŝ_j,α^(1)+∑_j=1^N_2ŝ_j,α^(2) is the total collective spin of the two ensembles and ΔŜ_α≡Ŝ_1,α-Ŝ_2,α is a staggered collective spin. Unlike the Dicke model (<ref>), the total collective spin here is not a constant of motion as 𝐒̂^2=Ŝ_x^2+Ŝ_y^2+Ŝ_z^2 does not commute with Ĥ_ nsD. This means that the non-standard Dicke Hamiltonian Ĥ_ nsD mixes manifolds with different total spins <cit.>. That said, the total spin for each ensemble is conserved. As we show in the supplemental material (SM) <cit.>, the non-standard Dicke model remarkably also describes the nonequilibrium dynamics of the combined density-spin self-ordering of transversely-driven two-component quantum gases inside single-mode cavities deep in the superradiant phase <cit.>. Due to the symmetry, in the following we restrict ourselves solely to N_2/N_1≤1.
Mean-field approach: steady states, stability, and dynamics.—In order to obtain insight into the system, we start with the mean-field approach that omits quantum fluctuations and replaces quantum operators with classical variables, namely, â→ a=⟨â⟩ and 𝐒̂_l→𝐒_l=⟨𝐒̂_l⟩ with l=1,2, justified for large ensembles of spins <cit.>. The system is then described by a set of seven coupled differential equations obtained from Heisenberg equations of motion, ȧ =-(iω_c+κ)a-iλ(S_1,x-S_2,x), Ṡ_l,x = -ω_a S_l,y, Ṡ_l,y = ω_a S_l,x + (-1)^l λ (a^*+a)S_l,z, Ṡ_l,z = (-1)^l+1λ (a^*+a)S_l,y, and endowed with two spin-conservation constraints S_l=|𝐒_l|=√(S_l,x^2+S_l,y^2+S_l,z^2)=N_l/2. Here, κ is the cavity-field decay rate. The fixed-point solutions are obtained by setting ȧ=0 and Ṡ_l,α=0 in Eq. (<ref>). This immediately implies that in a steady state, S_l,y^ (ss)=0 for any ω_a≠0. After some algebraic manipulation <cit.>, one obtains two distinct classes of nontrivial solutions: S_1,x^ (ss)=N_1/N_2S_2,x^ (ss)= ±N_1/2√(1-(λ_c^( xFo)/λ)^4), with corresponding S_l,z^ (ss)=(-1)^l√((N_l/2)^2-[S_l,x^ (ss)]^2) determined through the normalization constraints, and S_1,x^ (ss)=-N_1/N_2S_2,x^ (ss)= ±N_1/2√(1-(λ_c^( xFi)/λ)^4), with corresponding S_l,z^ (ss)=-√((N_l/2)^2-[S_l,x^ (ss)]^2). Here, we have introduced the critical couplings as λ_c^( xFo/xFi)=√(ω_a(ω_c^2+κ^2)/(N_1∓ N_2)ω_c). The choice of the sign for S_l,x^ (ss), i.e., the “±” signs outside of the square roots in the right-hand sides of Eq. (<ref>), indicates two possible ℤ_2 solutions originating from the invariance of the Hamiltonian (<ref>) under the parity transformation â→-â and Ŝ_l,x→ -Ŝ_l,x. The two nontrivial classes of the solutions, Eqs. (<ref>) and (<ref>), have fundamentally different properties. The first class corresponds to superradiant states with x-ferromagnetic ordering where both S_1,x^ (ss) and S_2,x^ (ss) point in the same direction, while the second class corresponds to exotic superradiant states with x-ferrimagnetic ordering where S_1,x^ (ss) and S_2,x^ (ss) point in the opposite directions. We refer to these superradiant states, respectively, as “±xFo-SR” and “±xFi-SR”, corresponding to their asymptotic spin-ordering behaviors at λ→∞; see the table in the right side of Fig. <ref> for the schematic representation of these states at finite λ. To re-express it plainly, in the +xFo-SR (-xFo-SR) state both S_1,x^ (ss) and S_2,x^ (ss) have the same plus (minus) ℤ_2 parity-symmetry sign, while in the +xFi-SR (-xFi-SR) state S_1,x^ (ss) has the plus (minus) sign and S_2,x^ (ss) the minus (plus) sign. The ±xFi-SR states cross over into x-antiferromagnetic superradiant states, “±xaF-SR”, for N_1=N_2. In addition to the four superradiant fixed points discussed above, there are four trivial fixed points corresponding to a_ ss=S_l,x^ (ss)=0 and either S_z^ (ss)=± (N_1+N_2)/2 referred to as “±zFo-N” or S_z^ (ss)=± (N_1- N_2)/2 designated by “±zFi-N”; see the table in the right side of Fig. <ref>. Once again, the ±zFi-N states cross over into z-antiferromagnetic normal states, “±zaF-N”, for N_1=N_2. The superradiant thresholds are obtained by setting S_l,x^ (ss)=0 in Eq. (<ref>), yielding λ=λ_c^( xFo/xFi). The fixed points ±xFo-SR (±xFi-SR) only emerge beyond the threshold λ_c^( xFo) [λ_c^( xFi)]. Note, however, that the xFo-SR threshold diverges for N_2→ N_1 as α_ ss∝ΔS_x^ (ss)→0. We now turn our attention to the linear stability of the fixed points of the system obtained above.
To this end, we write a=a_ ss+δ a and S_l,α=S_l,α^ (ss)+δ S_l,α in the mean-field equations of motion (<ref>) and subsequently linearize them to obtain ∂_tδ𝐗=𝐉δ𝐗, where δ𝐗 is a vector of fluctuations δ a and δ S_l,α, and 𝐉 the Jacobian matrix given explicitly in SM <cit.>. A fixed point is stable provided all eigenvalues of the Jacobian matrix for that given fixed point have negative real parts <cit.>. We analyze eigenvalues of the Jacobian matrix for all the fixed points. In particular, we find that all ±xFo-SR and ±xFi-SR states are stable in their entire corresponding parameter regimes, i.e., beyond the λ_c^( xFo/xFi) thresholds, respectively. On the other hand, the trivial fixed points -zFo-N and -zFi-N are solely stable below the λ_c^( xFi) and λ_c^( xFo) thresholds, respectively, and lose their stability beyond these thresholds. The other two trivial fixed points +zFo-N and +zFi-N are always unstable. This means the system undergoes a supercritical pitchfork bifurcation from -zFo-N into ±xFi-SR at the threshold λ_c^( xFi), and from -zFi-N into ±xFo-SR at the threshold λ_c^( xFo). The steady-state phase diagram of the system in the parameter plane of {N_2/N_1,λ/κ} is shown in Fig. <ref>. The two white curves are the analytical boundaries for the xFo-SR and xFi-SR transitions, obtained from Eq. (<ref>). The total number of fixed points in each parameter regime is indicated in the phase diagram. However, only the stable fixed points in each parameter regime are indicated explicitly. The phase diagram features regions of multistability <cit.>, in particular, a region with multiple coexistent superradiant phases. The steady-state behavior of the {x,z}-components of the total spin S_x,z^( ss), the x-component of the staggered spin ΔS_x^( ss), and the energy of the system E_0=⟨Ĥ_ nsD⟩ as a function of the atom-field coupling λ at a fixed N_2/N_1=0.3 are depicted in Fig. <ref> for the fixed points -zFo-N, -zFi-N, ±xFo-SR, and ±xFi-SR. As expected, ΔS_x^( ss) acquires nonzero values across the λ_c^( xFi/xFo) thresholds in the ±xFi-SR (blue curves) and ±xFo-SR (red curves) states, respectively; see Fig. <ref>(c). The two branches in each case correspond to the two possible ℤ_2 solutions. Accordingly, the field amplitude α_ ss∝ΔS_x^( ss) also grows from zero and the system enters superradiant phases, implying that ΔS_x^( ss) can be identified as the order parameter of the system. Note that ΔS_x^( ss) is zero in both the -zFo-N and -zFi-N states [green and orange lines, respectively; somewhat obscured by each other in Fig. <ref>(c)]. The x component of the total spin, S_x^( ss), exhibits a similar behavior, though it grows much faster in the ±xFo-SR states compared to the ±xFi-SR states; see Fig. <ref>(a). From Fig. <ref>(b) and (d), one sees that the z-component S_z^( ss) of the total spin and the energy E_0 change continuously from the -zFo-N to the ±xFi-SR states and from the -zFi-N into the ±xFo-SR states, signaling second-order phase transitions. This is consistent with Fig. <ref>(c) and the linear-stability analysis predicting supercritical pitchfork bifurcations. The behavior of S_x,z^( ss), ΔS_x^( ss), and E_0 in the full parameter space of {N_2/N_1,λ/κ} is given in the SM <cit.>. From the energy point of view, the fixed points -zFo-N and ±xFi-SR in their corresponding stability regimes are the lowest-energy states, implying that the other equilibrium points are metastable states. However, in open dynamical systems, dynamical stability is more important than the minimal energy condition <cit.>.
Therefore, there is an essential distinction between the dynamics of an open system with κ≠0 and its equilibrium counterpart with κ=0. We now examine the semiclassical dynamics of the system by numerically integrating the mean-field equations of motion (<ref>). As expected, the stable steady states are the attractors of the long-time dynamics, while a small perturbation in the unstable steady states leads to phase-space trajectories being repelled from these points into one of the stable fixed points. We study these by adding a small perturbation δ a to the steady-state field amplitude a_ ss in each fixed point and looking at the ensuing dynamics. In particular, this reveals that both ±zFo-N (±zFi-N) are unstable towards one of the parity-symmetric pair ±xFi-SR (±xFo-SR) above the threshold λ_c^( xFi) [λ_c^( xFo)]; see SM for these types of phase-space dynamics. While below the threshold, +zFo-N (+zFi-N) evolves to the corresponding low-energy state -zFo-N (-zFi-N). Note that due to the multistability, the long-time dynamics depend crucially on the initial state and can exhibit intriguing features. In particular, in some parameter regimes we find initial states where the following phase-space dynamics from them do not lead to any of the stable fixed points, rather exhibit oscillatory behaviors owing to competing fixed points <cit.>. This is especially interesting in the parameter regime where all ±xFo-SR and ±xFi-SR are the stable fixed points of the system. Starting from the initial state 𝐒_1=(1/√(2),1/√(2),0)N_1/2 and 𝐒_2=(1/√(2),0,-1/√(2))N_2/2 with a small field a=δ a, the ensuing trajectories in the Bloch spheres of 𝐒_1 and 𝐒_2 encircle the two fixed points -xFo-SR and -xFi-SR and exhibit limit cycles as shown in Fig. <ref>(b) <cit.>. That said, the corresponding trajectory in the phase space of the cavity-field amplitude a as shown in Fig. <ref>(a) exhibits an inward spiral behavior toward an emergent focus lying between the two fixed points -xFo-SR and -xFi-SR. The dynamics of the system in the Bloch spheres of the total 𝐒 and staggered Δ𝐒 spins with radius (N_1+N_2)/2 are displayed in the insets of Fig. <ref>(a). Note that although the dynamics in the Bloch spheres 𝐒_1 and 𝐒_2 are strictly restricted to the Bloch surfaces, in the 𝐒 and Δ𝐒 phase spaces the dynamics move inside the Bloch spheres, confirming that the total spin 𝐒 is not conserved. Quantum description.—Finally, we address briefly the full quantum description of the system via the master equation for the density-matrix operator, ρ̂̇̂=i[ρ̂,Ĥ_ nsD]+ℒ̂[ρ̂]. In the Born-Markov approximation, the Liouvillian can be expressed in the Lindblad form as ℒ̂[ρ̂]=κ(2âρ̂â^†-â^†âρ̂-ρ̂â^†â), where we have ignored the decay of the atomic excited states. We perform small-scale, full-quantum calculations with N_1=4 and N_2=3 and find that indeed depending on the initial state, ±xFo-SR or ±xFi-SR can be the final state of the long-time quantum dynamics of the system in appropriate parameter regimes. Remarkably, we also find initial states and parameter regimes where the ensuing quantum dynamics from them lead to a state which is a superposition of all ±xFo-SR and ±xFi-SR. This can be best seen in the Q-representation of the cavity field <cit.>. The Q-function is presented in Fig. <ref> and comprised of four, partially disjoint lobes, an indication of the coexistence of multiple superradiant states, i.e., ±xFo-SR and ±xFi-SR [cf. Fig. <ref>(a)]. 
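For readers who wish to experiment with the small-scale quantum calculation described above, a minimal QuTiP sketch along the following lines can be used to set it up for N_1 = 4 and N_2 = 3. The numerical values of ω_c, ω_a, κ, and λ, the Fock-space cutoff, and the use of the unique Liouvillian steady state (rather than the long-time evolution from a particular initial state discussed in the text) are all simplifying assumptions, so the sketch is not expected to reproduce the Q-function figure quantitatively.

```python
# Minimal sketch of a small-scale open-system calculation for the non-standard
# Dicke model with N1 = 4 and N2 = 3. Parameter values and the Fock cutoff are
# assumed; the Liouvillian steady state is used instead of time evolution.
import numpy as np
from qutip import destroy, qeye, tensor, jmat, steadystate, expect, qfunc

N1, N2 = 4, 3                       # ensemble sizes used in the text
d1, d2 = N1 + 1, N2 + 1             # dimensions of the two collective-spin spaces
n_max = 15                          # cavity Fock-space cutoff (assumed)
wc = wa = kappa = 1.0               # assumed parameters, in units of kappa
lam = 0.8                           # assumed coupling strength

# Operators on the cavity (x) ensemble-1 (x) ensemble-2 Hilbert space.
a   = tensor(destroy(n_max), qeye(d1), qeye(d2))
S1x = tensor(qeye(n_max), jmat(N1 / 2, 'x'), qeye(d2))
S1z = tensor(qeye(n_max), jmat(N1 / 2, 'z'), qeye(d2))
S2x = tensor(qeye(n_max), qeye(d1), jmat(N2 / 2, 'x'))
S2z = tensor(qeye(n_max), qeye(d1), jmat(N2 / 2, 'z'))

# Non-standard Dicke Hamiltonian and the cavity-decay collapse operator,
# matching the Lindblad term kappa(2 a rho a^dag - a^dag a rho - rho a^dag a).
H = wc * a.dag() * a + wa * (S1z + S2z) + lam * (a.dag() + a) * (S1x - S2x)
c_ops = [np.sqrt(2 * kappa) * a]

rho_ss = steadystate(H, c_ops)

# Cavity Q-function of the reduced field state and a few expectation values.
xvec = np.linspace(-4, 4, 121)
Q = qfunc(rho_ss.ptrace(0), xvec, xvec)
print("mean photon number:", expect(a.dag() * a, rho_ss))
print("<S1x>, <S2x>      :", expect(S1x, rho_ss), expect(S2x, rho_ss))
```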
Conclusions.—We have introduced a variant of the Dicke model, which exhibits a wealth of steady-state and non-stationary phenomena. Our model can readily be implemented in state-of-the-art experiments <cit.> and opens a new avenue for studying various nonequilibrium magnetic ordering <cit.> and dynamical phenomena <cit.> in cavity-QED experimental setups. We acknowledge inspiring discussions with Natalia Masalaeva, Karol Gietka, Arkadiusz Kosior, and Helmut Ritsch. F. M. is supported financially by the Stand-alone project P 35891-N of the Austrian Science Fund (FWF), Tyrolean Science Promotion Fund (TWF), and the ESQ-Discovery Grant of the Austrian Academy of Sciences (ÖAW). § SUPPLEMENTAL MATERIAL In the supplemental material, we present the details of the connection of the non-standard Dicke model to the combined density-spin self-ordering in transversely-driven two-component quantum gases inside cavities, the analytical derivation of the steady states, their linear-stability analysis, and the dependence of the semi-classical dynamics of the system on the initial state. §.§ The combined density-spin self-ordering in quantum gases: The non-standard Dicke model An effective two-component Bose-Einstein condensate (BEC) coupled to a single mode of a standing-wave cavity and driven in a transverse direction by two pump lasers as depicted in Fig. <ref>(a) is described by the Hamiltonian (ħ is set to one) Ĥ=-Δ_câ^†â+∫Ψ̂^†(x)ℋ̂_0Ψ̂(x)dx, where Δ_c is the pump-cavity detuning, and â and Ψ̂=(ψ̂_↑,ψ̂_↓)^⊤ are the photonic and two-component atomic bosonic annihilation field operators, respectively; see also Refs. <cit.> for more details. The single-particle Hamiltonian density reads, ℋ̂_0= [p_x^2/2M+1/2Û_↑(x)+1/2Û_↓(x) ] 𝕀_2×2 +1/2[ δ+Û_↑(x)-Û_↓(x) 2Ω̂_ R(x); 2Ω̂^†_ R(x) -δ-Û_↑(x)+Û_↓(x) ], with the potential operators Û_↑(x)= U_0↑â^†âcos^2(k_cx) and Û_↓(x)=U_0↓â^†âcos^2(k_cx), and the two-photon Raman-Rabi coupling operator Ω̂_ R(x)=λ(â^†+â)cos(k_cx). Here, M is the atomic mass, p_x the atomic center-of-mass momentum operator, δ the effective detuning between the atomic ground states, and 𝕀_2×2 the identity matrix in the atomic ground-state space. The parameters U_0↑, U_0↓, and λ are proportional to atom-cavity and atom-pump coupling strengths as well as the pump-atom detuning <cit.>. By defining the local pseudospin operator 𝐬̂(x)=Ψ̂^†(x)σΨ̂(x) (σ is the vector of Pauli matrices), the Hamiltonian of the system can be recast as Ĥ=-Δ_câ^†â +∫Ψ̂^†(x) [p_x^2/2M+1/2Û_↑(x)+1/2Û_↓(x) ] 𝕀_2×2Ψ̂(x) dx +∫{1/2[δ+Û_↑(x)-Û_↓(x) ] ŝ_z(x) +Ω̂_ R(x)ŝ_x(x)}dx. The first term in Eq. (<ref>) is the bare cavity Hamiltonian, the second term describes the self-ordering of the total atomic density, and the last term is associated with the spin self-ordering. We numerically solve Eq. (<ref>) in the mean-field regime â→ a=⟨â⟩ and Ψ̂(x)→Ψ(x)=⟨Ψ̂(x)⟩ [and consequently 𝐬̂(x)→𝐬(x)=⟨𝐬̂(x)⟩]. For small λ the BEC is uniform and the cavity field is in the vacuum state, while for strong enough λ the system enters the superradiant phase exhibiting density and spin self-ordering; see Fig. <ref>(b). The self-ordered density and spin texture are localized at x_j=jπ/k_c with 2π/k_c being the wavelength of the cavity mode and j=0,±1,±2,⋯. Therefore, it is justified to discretize the system into a lattice with lattice sites x_j, specially deep in the self-ordered phase. One can see that s_j,z≡ s_z(x_j) always point down (for the chosen positive δ), while s_j,x≡ s_x(x_j) alternates its direction. 
Note also that s_j,y≡ s_y(x_j) is zero everywhere. In order to explain these results, we consider the spin part of the Hamiltonian (<ref>) (i.e., the last term) and re-write it in the discrete space, Ĥ_ spin= ∑_j=0,±1,±2,⋯(ĥ_j,zŝ_j,z+ĥ_j,xŝ_j,x) = ĥ_0,z∑_j=0,±1,±2,⋯ŝ_j,z + ĥ_0,x(∑_j=0,±2,⋯ŝ_j,x - ∑_j=±1,±3,⋯ŝ_j,x), where ĥ_j,z=[δ+(U_0↑-U_0↓)â^†âcos^2(k_cx_j)]/2 and ĥ_j,x=λ(â^†+â)cos(k_cx_j). In the last equality, we have made use of the fact that ĥ_j,z is site independent, while ĥ_j,x has opposite values at even and odd sites. Defining two sub-lattices consisting only of even and odd sites, respectively, and assuming that U_0↑=U_0↓, we obtain Ĥ_ spin= δ/2Ŝ_z + λ(â^†+â) (Ŝ_1,x - Ŝ_2,x), where Ŝ_z=∑_j=0,±1,±2,⋯ŝ_j,z is the z component of the total spin in both sub-lattices, and Ŝ_1,x=∑_j=0,±2,⋯ŝ_j,x and Ŝ_2,x=∑_j=±1,±3,⋯ŝ_j,x are the x components of the spin in each sub-lattice. Adding the bare cavity Hamiltonian into Ĥ_ spin (<ref>) and replacing δ/2→ω_a and -Δ_c→ω_c, we arrive at the non-standard Dicke Hamiltonian Ĥ_ nsD, given in Eq. (<ref>) in the main text. We note that in the deep self-ordered regime the atomic density is almost frozen and its Hamiltonian [i.e., the second term in Eq. (<ref>)] has therefore been omitted. The non-standard Dicke Hamiltonian hence describes the physics of the combined density-spin self-ordering of transversely-driven two-component Bose gases inside single-mode cavities deep in the superradiant phase. Note that here |𝐒_1|=|𝐒_2|. However, this restriction can be relaxed by adding further elements into the model. §.§ Steady-state solutions In this section we derive the steady-state solutions in more detail. By setting ȧ=0 and Ṡ_l,α=0 in Eq. (<ref>) in the main text, one obtains a_ ss =λ/-ω_c+iκ[S_1,x^ (ss)-S_2,x^ (ss)], S_l,x^ (ss) =(-1)^l+1λ/ω_a(a_ ss+a_ ss^*)S_l,z^ (ss) with l=1,2. Dividing the two equations in Eq. (<ref>) leads to a relation between S_1,x/z^ (ss) and S_2,x/z^ (ss), S_1,x^ (ss)/S_2,x^ (ss)=-S_1,z^ (ss)/S_2,z^ (ss). Here, S_1,z^ (ss) and S_2,z^ (ss) can have the same or opposite signs, which leads to two distinct classes of nontrivial solutions. In particular, when S_1,z^ (ss) and S_2,z^ (ss) have the same sign (opposite signs), S_1,x^ (ss) and S_2,x^ (ss) must have opposite signs (the same sign), in order for Eq. (<ref>) to be satisfied. These correspond, respectively, to the ±xFi-SR and ±xFo-SR ordering. By noting that in the steady states |S_l,z^ (ss)|=√((N_l/2)^2-[S_l,x^ (ss)]^2), Eq. (<ref>) can be expressed in terms of only S_l,x^ (ss) as, S_1,x^ (ss)/S_2,x^ (ss) =±√((N_1/2)^2-[S_1,x^ (ss)]^2)/√((N_2/2)^2-[S_2,x^ (ss)]^2). Equation (<ref>) can be simplified to yield the relation S_1,x^ (ss)=±(N_1/N_2)S_2,x^ (ss), where the upper plus (lower minus) sign corresponds to the xFo-SR (xFi-SR). Using this relation, the steady-state field amplitude a_ ss [Eq. (<ref>)] can be expressed in terms of one of the ensemble spins [say, S_1,x^ (ss)]. Substituting this steady-state field amplitude back in the corresponding equation for the spin S_1,x^ (ss) [Eq. (<ref>)], one obtains 1 =-2ω_cλ^2/ω_a(ω_c^2+κ^2)(1∓N_2/N_1) √((N_1/2)^2-[S_1,x^ (ss)]^2). This equation can readily be solved for S_1,x^ (ss) and subsequently S_2,x^ (ss), S_1,x^ (ss)= ±√((N_1/2)^2-[ω_a(ω_c^2+κ^2)/2(1∓N_2/N_1)ω_cλ^2]^2), S_2,x^ (ss)= ±√((N_2/2)^2-[ω_a(ω_c^2+κ^2)/2(N_1/N_2∓1)ω_cλ^2]^2), yielding, after defining the critical couplings λ_c^( xFo/xFi)=√(ω_a(ω_c^2+κ^2)/(N_1∓ N_2)ω_c), Eq. (<ref>) in the main text.
After obtaining S_l,x^ (ss), the steady-state S_l,z^ (ss) can be obtained from the normalization constraints, S_l,z^ (ss)=±√((N_l/2)^2-[S_l,x^ (ss)]^2). As discussed above, S_1,z^ (ss) and S_2,z^ (ss) have the same sign (opposite signs) in the ±xFi-SR (±xFo-SR). In order to determine the exact signs, however, we numerically solve the steady-state equations. We find that in the ±xFi-SR states the minus sign is always picked up for both S_l,z^ (ss), that is, S_l,z^ (ss)=-√((N_l/2)^2-[S_l,x^ (ss)]^2). On the other hand, for the ±xFo-SR states the minus sign is picked up for S_1,z^ (ss) and the plus sign for S_2,z^ (ss), i.e., S_l,z^ (ss)=(-1)^l√((N_l/2)^2-[S_l,x^ (ss)]^2). Indeed, the linear-stability analyses confirm that with these choices, all ±xFi-SR and ±xFo-SR are stable in their entire corresponding parameter regimes, while the other choices lead to instabilities in some parameter regimes. Furthermore, Figs. <ref>(b) and (d) also show that with these choices, the z-component S_z^ (ss) of the total spin and the energy E_0 of the system change continuously between the -zFo-N and ±xFi-SR states, and the -zFi-N and ±xFo-SR states across the phase transitions. In Fig. <ref>, we show the steady-state behavior of the x and z components of the total spin, S_x^(ss) and S_z^(ss), and the x component of the staggered spin, Δ S_x^(ss), in the parameter plane of N_2/N_1 vs. λ/κ in accordance with the phase diagram of Fig. <ref> in the main text. Note that +xFi-SR and -xFi-SR, and +xFo-SR and -xFo-SR are the parity-symmetric pairs, related to one another by the parity transformation a→-a and S_l,x→-S_l,x (and equivalently S_x→-S_x and Δ S_x→-Δ S_x). §.§ Linear-stability analysis We begin by writing the mean-field equations of motion (<ref>) in the form, ∂𝐗/∂ t=𝐅(𝐗), where 𝐗=(a,a^*,𝐒_1,𝐒_2)^⊤ is a vector of the mean-field variables and 𝐅(𝐗) the corresponding eight-component vector field obtained from the right-hand side of Eq. (<ref>). The fixed points 𝐗_ ss of the system, for which 𝐗̇=0, are obtained by solving the coupled algebraic equations 𝐅(𝐗_ ss)=0, as discussed in the previous section. Taking a Taylor expansion (up to the linear term) of the right-hand side of Eq. (<ref>) around the fixed point 𝐗_ ss and noting that ∂_t𝐗_ ss=𝐅(𝐗_ ss)=0 yields ∂/∂ tδ𝐗=𝐉δ𝐗, where δ𝐗=𝐗-𝐗_ ss=(δ a,δ a^*,δ𝐒_1,δ𝐒_2)^⊤ is a vector of fluctuations, and 𝐉 the Jacobian matrix 𝐉=∂𝐅(𝐗)/∂𝐗|_𝐗_ ss= [ -(iω_c+κ) 0 -iλ 0 0 iλ 0 0; 0 -(-iω_c+κ) iλ 0 0 -iλ 0 0; 0 0 0 -ω_a 0 0 0 0; -λ S_1,z^ (ss) -λ S_1,z^ (ss) ω_a 0 -λ (a_ ss+a_ ss^*) 0 0 0; λ S_1,y^ (ss) λ S_1,y^ (ss) 0 λ (a_ ss+a_ ss^*) 0 0 0 0; 0 0 0 0 0 0 -ω_a 0; λ S_2,z^ (ss) λ S_2,z^ (ss) 0 0 0 ω_a 0 λ (a_ ss+a_ ss^*); -λ S_2,y^ (ss) -λ S_2,y^ (ss) 0 0 0 0 -λ (a_ ss+a_ ss^*) 0 ]. Recall that S_l,y^ (ss)=0 in all steady states. §.§ Semi-classical dynamics Here in Fig. <ref> we present the semi-classical dynamics of the system prepared in the initial state (a,b) -zFo-N, i.e., 𝐒_1=(0,0,-1)N_1/2 and 𝐒_2=(0,0,-1)N_2/2, and (c,d) +zFi-N, i.e., 𝐒_1=(0,0,1)N_1/2 and 𝐒_2=(0,0,-1)N_2/2, with a small fluctuation seed for the cavity field a=δ a. The system is attracted to the -xFi-SR fixed point in the former case, and to +xFo-SR in the latter case. These indicate that the fixed points -zFo-N and +zFi-N are unstable in this parameter regime (recall, however, that +zFi-N is always unstable) and the attractor of the long-time dynamics depends crucially on the initial state due to the multistability and the coexistent fixed points.
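As a minimal numerical illustration of the semi-classical dynamics described in this section, the mean-field equations of motion given in the main text can be integrated directly and the long-time spin components compared with the analytic steady-state expressions. The following Python sketch does this for an initial state close to -zFo-N with a small cavity seed; all parameter values are illustrative assumptions rather than those used for the figures, and the comparison with the ±xFi-SR branch relies on the behavior discussed above.

```python
# Sketch: integrate the mean-field equations of motion for the non-standard
# Dicke model and compare the long-time S_{1,x} with the analytic xFi-SR value.
# All parameter values below (in units of kappa) are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

wc = wa = kappa = 1.0            # assumed: omega_c = omega_a = kappa
lam = 0.16                       # assumed coupling, between the xFi and xFo thresholds
N1, N2 = 100.0, 30.0             # assumed ensemble sizes (N2/N1 = 0.3)

def eom(t, y):
    # y = [Re a, Im a, S1x, S1y, S1z, S2x, S2y, S2z]
    a = y[0] + 1j * y[1]
    S1, S2 = y[2:5], y[5:8]
    adot = -(1j * wc + kappa) * a - 1j * lam * (S1[0] - S2[0])
    field = 2.0 * a.real                      # a* + a
    dS = []
    for l, S in ((1, S1), (2, S2)):
        sgn = (-1) ** l                       # -1 for ensemble 1, +1 for ensemble 2
        dS += [-wa * S[1],                                # dS_{l,x}/dt
               wa * S[0] + sgn * lam * field * S[2],      # dS_{l,y}/dt
               -sgn * lam * field * S[1]]                 # dS_{l,z}/dt
    return [adot.real, adot.imag] + dS

# Start close to -zFo-N with a small cavity seed; at these assumed parameters
# the trajectory is expected to approach one of the +-xFi-SR fixed points.
y0 = [1e-3, 0.0, 0.0, 0.0, -N1 / 2, 0.0, 0.0, -N2 / 2]
sol = solve_ivp(eom, (0.0, 500.0), y0, rtol=1e-8, atol=1e-10)

S1x_num, S2x_num = sol.y[2, -1], sol.y[5, -1]
lam_c_xFi = np.sqrt(wa * (wc**2 + kappa**2) / ((N1 + N2) * wc))
S1x_ana = 0.5 * N1 * np.sqrt(1.0 - (lam_c_xFi / lam) ** 4)
print("numerical |S1x| :", abs(S1x_num))
print("analytic  |S1x| :", S1x_ana)
print("S1x / S2x       :", S1x_num / S2x_num, " (expected -N1/N2 =", -N1 / N2, ")")
```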
http://arxiv.org/abs/2307.10195v1
20230710200730
ChatGPT for Digital Forensic Investigation: The Good, The Bad, and The Unknown
[ "Mark Scanlon", "Frank Breitinger", "Christopher Hargreaves", "Jan-Niclas Hilgert", "John Sheppard" ]
cs.CR
[ "cs.CR", "cs.AI", "cs.CL" ]
Mark Scanlon (corresponding author, [email protected]), Forensics and Security Research Group, School of Computer Science, University College Dublin, Ireland; Frank Breitinger ([email protected]), School of Criminal Justice, University of Lausanne, Lausanne, Switzerland; Christopher Hargreaves ([email protected]), Department of Computer Science, University of Oxford, United Kingdom; Jan-Niclas Hilgert ([email protected]), Fraunhofer FKIE, Bonn, Germany; John Sheppard ([email protected]), Department of Computing and Mathematics, South East Technological University, Waterford, Ireland. The disruptive application of ChatGPT (GPT-3.5, GPT-4) to a variety of domains has become a topic of much discussion in the scientific community and society at large. Large Language Models (LLMs), e.g., BERT, Bard, Generative Pre-trained Transformers (GPTs), LLaMA, etc., have the ability to take instructions, or prompts, from users and generate answers and solutions based on very large volumes of text-based training data. This paper assesses the current and potential impact of ChatGPT on the field of digital forensics, specifically looking at its latest pre-trained LLM, GPT-4. A series of experiments are conducted to assess its capability across several digital forensic use cases including artefact understanding, evidence searching, code generation, anomaly detection, incident response, and education. Across these topics, its strengths and risks are outlined and a number of general conclusions are drawn. Overall, this paper concludes that while there are some potential low-risk applications of ChatGPT within digital forensics, many are either unsuitable at present, since the evidence would need to be uploaded to the service, or they require sufficient knowledge of the topic being asked of the tool to identify incorrect assumptions, inaccuracies, and mistakes. However, to an appropriately knowledgeable user, it could act as a useful supporting tool in some circumstances. ChatGPT Digital Forensics Artificial Intelligence Generative Pre-trained Transformers (GPT) Large Language Models (LLM) § INTRODUCTION The emergence of Generative Artificial Intelligence (GAI) has sparked significant interest and scrutiny across various disciplines, including its potential impact on scientific research and writing <cit.>. In particular, Large Language Models (LLMs), such as ChatGPT – released in November 2022 (<openai.com/blog/ChatGPT>), have been identified as having numerous beneficial use cases and risks in various fields including digital forensic (DF) investigation <cit.>. These encompass automated script generation, gaining technical or procedural knowledge, multilingual analysis, automated sentiment analysis, etc. However, as LLMs are language models in the first instance, they are focused on generating an answer and do not always prioritise generating the correct answer. OpenAI state that ChatGPT's latest LLM from March 2023, GPT-4, “is not fully reliable (it hallucinates facts and makes reasoning errors)” and that “care should be taken when using the outputs of GPT-4, particularly in contexts where reliability is important” <cit.>. Consequently, despite its potential, the use of AI models involves various risks. For instance, some risks of using LLMs in digital forensics include: training data biases/errors, hallucinations, legal and ethical concerns, explainability/investigator over-reliance, and technical limitations.
At the time of submission, there are no original research publications focused on the application of LLMs to the domain of digital forensics. This paper aims to assess the impact that ChatGPT could have, positive and negative, specifically focusing on GPT-4. The contributions of this work can be summarised as follows: * GPT-4 is evaluated in various contexts, including learning about digital forensics topics, identifying relevant artefacts, assisting in searching for artefacts, generating code for forensic activities, detecting anomalies in log files, incident response, and creating storyboards for teaching scenarios. * For each of these areas, it showcases both the risks and occasional benefits of the technology in its current state. * Based on the results in these specific areas, the study draws general conclusions and proposes future directions for the utilisation of LLM-based AI in digital forensics. The remainder of the paper is structured as follows: Section <ref> provides the background to the technology and an overview of the related work. The methodology is discussed in Section <ref>, followed by Sections <ref> to <ref>, which provide a discussion of the focus areas for the included experimentation. A discussion of the good, bad, and unknown can be found in Section <ref>. Limitations of the work are highlighted in Section <ref>. The last section concludes the paper and points out future directions. § BACKGROUND AI applications in digital forensics have predominantly centred around data classification and identification tasks, including network forensics, malware investigation, child sexual exploitation material investigation, facial recognition and biometric trait estimation, device triage, timeline reconstruction, and device fingerprinting <cit.>. With the advancements of LLMs, new applications are possible. §.§ Large Language Models LLMs are built using neural networks with typically billions of parameters and corresponding weights, and are trained on very large quantities of unlabelled text. Generative pre-training is a long-established technique used in machine learning <cit.> whereby a Natural Language Processing (NLP) neural-network-based model is trained (unsupervised) to predict the next word in a sentence from a large corpus of text, leveraging the statistical properties of the language itself, and is subsequently fine-tuned for a specific task. In 2017, Google released the Transformer architecture <cit.>, which uses an attention mechanism to weigh the importance of different words in understanding a piece of text. The Transformer architecture has proven successful in NLP tasks and was foundational for the first iterations of LLMs, including BERT in 2018 <cit.> and XLNet in 2019 <cit.> (both non-generative pre-trained transformers). §.§ Generative Pre-trained Transformers GPTs are one family of LLMs created by OpenAI in 2019, and are used as a framework for creating GAI applications. ChatGPT is a chatbot application built on top of OpenAI's GPT-3.5 and GPT-4. At the time of launch, ChatGPT exclusively used GPT-3.5, and continues to do so for the freely-accessible tier. Paid subscribers, or Plus members, have access to the GPT-4 model. Figure <ref> summarises the different performance characteristics of the two versions of GPT according to OpenAI. In addition, GPT-4 also facilitates several additional plugins, including web browsing (live, up-to-date data retrieval), code optimisation, etc., made available through a limited alpha program.
OpenAI does not declare much detail about GPT-4's architecture, model size, hardware, training compute, dataset construction, or training methods for commercial competitiveness reasons <cit.>. § METHODOLOGY To assess the applicability of ChatGPT for digital forensic investigations, a selection of areas within this domain was identified. Although these domains do not provide full coverage of all possible uses of LLMs for digital forensic, they are representative. They provide a variety of possible uses and are derived by considering existing uses of ChatGPT that have been discussed, e.g., code generation and creative writing <cit.>, and applying this in the context of digital forensics. In total, six representative areas have been identified. For each topic area, a brief explanation is given, followed by a series of specific illustrative examples of conducted experiments. An experiment is defined as a conversation on a particular thread and consists of one or more prompts that were given to ChatGPT attempting to achieve a specific aim. All experiments were chosen to highlight the strengths, limitations, and dangers of the technology. Example subsets of the experiments performed as part of this work are provided in the text of this paper. Since ChatGPT responses are non-deterministic given identical prompts, a static repository of the prompts used and corresponding responses can be found in a GitHub repository associated with this paper <https://github.com/markscanlonucd/ChatGPT-for-Digital-Forensics>. This repository has a folder structure corresponding to each of the experimentation sections of this paper, i.e., Sec. <ref> to <ref>. The given answers were evaluated and validated to draw appropriate conclusions for each topic area. This was done based on fact-checking where possible, as well as the authors' experience in digital forensic processing, programming, and teaching. Each section concludes with a summary of these findings, from which general results are extrapolated and presented in Sec. <ref>. § CHATGPT FOR ARTEFACT IDENTIFICATION Operating system artefacts are vital for investigators, as they provide valuable insights into the activities of a device, including communication history, data origins, and overall device usage. These artefacts not only help investigators tell a comprehensive story, but also serve as corroborating evidence. §.§ File Downloads ChatGPT was prompted for assistance in determining if a file had been downloaded to a Windows 10 PC by a particular user. The generated text highlighted several possible places to examine such as the associated metadata, the browser history, the user's downloads folder, the Windows Event logs, network logs as well as using the third-party tools EnCase, FTK, or X-Ways Forensics. The response also included a warning at the end, stating “Keep in mind that it's essential to follow proper forensic procedures and maintain a chain of custody to ensure that the evidence you gather is admissible in court”. When the prompt was refined to state that the investigator suspected the file was downloaded through Skype rather than through a browser, ChatGPT refined its answer, specifying the location of the Skype conversation history database and the Skype downloaded file's location. 
It repeated the Windows Event logs, network logs and tools list but with more focus on Skype, such as “Use Event Viewer to check for any relevant events related to Skype or file transfers during the timeframe in question” and “Look for Skype-related traffic, e.g., the IP address and ports used by Skype, and file transfer events”. §.§ File Execution A query was submitted about how to determine if a file had been executed on a Windows 10 machine by a particular user. The response to this focused on the Windows Security Event logs and the event ID 4688 for process creation, prefetch files, UserAssist registry keys and the NTFS filesystem metadata. When asked “are there any other artefacts that I should consider”, the response supplied the names of other artefacts such as the Windows Task Scheduler, LNK files, Shellbags, Windows PowerShell History, Windows Search Index (WSI), System Resource Usage Monitor, Browser History and Cache, and logs created by the operating system, applications, or security software. When prompted again with “are there any other artefacts that I should consider”, it this time added Amcache, Shimcache, UserActivity cache, jumplists, network artefacts as well as looking at memory forensics, external devices and filesystem journaling. When it was again prompted with the same query, it presented more artefacts. Among these were links to tools that resolved in some instances, but in other cases produced 404 errors. Two examples of this included links to Eric Zimmerman tools called SuperFetch Parser and ShimDBExtractor, neither of which are tools available on Zimmerman's GitHub page. Tools created by Zimmerman that are available with similar names are the prefetch parser PECmd, the shim database parser named SDB, and the shim cache parser AppCompatCacheParser. §.§ Cloud Interaction Posing as a law enforcement agent looking for evidence on a Windows computer that had interacted with a cloud storage platform, items of evidence identified for examination included web browser history and cache, log files, prefetch, registry hives, cloud storage platform clients, WSI, email clients, RAM artefacts and deleted or encrypted files. When ChatGPT was prompted that the investigator suspected the cloud platform was Google Drive, the response had some overlap, such as looking at browser artefacts and email content, Windows registry and network traffic, as well as some more specific items such as the Google Drive desktop application, to look for Google account information and the Google Drive app on the associated mobile device if it is available. When pushed further on finding and interpreting the client's settings, local cache, and any synchronised files or folders for Google Drive for Desktop, it presented paths to the locations of configuration files and databases. This was also done for Dropbox and AWS S3 buckets. In some cases, the paths given resolved correctly, while in other cases there were similar names and some did not resolve. §.§ Summary While it can specify some interesting and important artefacts to look at, ChatGPT seems to focus heavily on the use of Windows Event Logs as its primary location for evidence. Though Windows Event Logs are extremely important and useful to an investigator, ChatGPT does not immediately highlight other important artefacts that should be examined. If an investigator was not aware of important artefacts already, these may be missed, meaning that the full story would not be told.
There is a variance in terms of the depth of response that is supplied regarding different artefacts. In some instances, it gives a brief description of the usefulness of a subset of a particular artefact, such as in the Windows Event logs or in the Registry, and does not comprehensively identify all aspects of that artefact that should be looked at. For some artefacts, it explains what data is within them based on fields, keys or values that are present. In other instances, it gives detailed and thorough step-by-step guidance on how to locate and extract evidence from the operating system. There are also links to tools which do not seem to exist but are based on tools with a similar name and function but not quite the same. For the examination of cloud-related artefacts, the results were mixed. The areas to look at for determining cloud account information were reasonable, however the paths to the default locations on the machine were not always consistent with what they should be. § CHATGPT FOR SELF-DIRECTED LEARNING OF DIGITAL FORENSICS This section assesses how suitable ChatGPT is for self-directed learning, i.e., can it educate users in a similar way as current real-world educational offerings can? While there are many different possibilities in the real-world, it is appropriate to differentiate between the following offerings: (O1) introductory level, e.g., one lesson/course within another class/degree <cit.>; (O2) specialised courses, e.g., to obtain unique skills or proposed by vendors to showcase tools; (O3) digital forensic degrees, i.e., a B.Sc. or M.Sc. degree consisting of many modules; and (O4) research conferences and workshops, i.e., experts informing themselves about latest trends and developments. To assess the performance of ChatGPT for these scenarios, objectives from real-world offerings were examined and ChatGPT was assessed to see if it can help learners reach these objectives. Note that objectives are frequently described using Bloom's taxonomy <cit.>. Bloom's taxonomy is a hierarchical framework that classifies educational learning objectives into six levels, ranging from lower-order thinking skills such as remembering and understanding to higher-order skills like analysing, evaluating, and creating. §.§ Introductory Level (O1) Frequently included objectives[For instance, see here <https://study.com/academy/course/computer-science-320-digital-forensics.html#information>.] are memorising basic principles and the forensic process, naming sub-disciplines, explaining the chain-of-custody, or describing computer crime. Consequently, a series of questions were formulated to learn more about these general aspects. Example questions are: What is digital forensics? Is there a common process model? Are there sub-disciplines, and if so, which ones? Generally, the answers were correct and provided a short but sufficient overview. ChatGPT described a five-phase model (identification, preservation, collection, analysis, and reporting), summarised well the goals of the chain-of-custody, and identified seven sub-disciplines. Answers also included aspects that are often taken for granted, such as ethical standards needing to be maintained, or that the field is rapidly evolving and it is essential to stay up-to-date. A downside was that it could not provide the name/author of the process model. It also provided incorrect authors and references to the literature when requested. 
Nevertheless, it can be employed as a starting point to learn about the domain, if a lot of detail is not needed or desired. §.§ Advanced/Expert Level (O2,O3) University degrees or expert commercial courses deliver in-depth knowledge. Most offerings include sophisticated hands-on activities to apply and practice gained knowledge. To assess ChatGPT's suitability for this level of training, it was asked questions related to gaining hands-on experience, such as “can you propose exercises/tools to become an expert”, or “can you provide step-by-step descriptions for scenarios”? ChatGPT agreed that it requires hands-on experience and started by proposing general exercises such as examining memory dumps using volatility or analysing disk images using autopsy. It also recommended participation in online challenges such as CTFs, National Collegiate Cyber Defense Competition (CCDC), or SANS NetWars, which require significant experience and are more suitable for experts. In contrast, the proposed scenarios (creation and solution) were basic and included only 3-4 steps. To obtain more intermediate exercises, additional details were requested about one of the scenarios (file recovery on FAT32) using several follow-up questions. While the responses were detailed, the explanations were difficult to understand and learners may not be able to follow them. There were also occasional errors in the answers. For example, there was an error in the Attribute byte () in one of the provided hexdumps: was provided, which according to <cit.> is invalid. §.§ Research and Workshops (O4) These venues provide the latest trends and developments. Workshops can vary from more general discussions over highly technical works requiring expert-level knowledge. As the model is not constantly updated, i.e., at the time of writing this paper, the knowledge cut-off of the model is September 2021, it will not be able to inform about these latest developments. §.§ Tool Explanation Given that digital forensics frequently involves utilizing tools, the potential of employing ChatGPT as an alternative to a traditional user manual was examined. In this assessment, Wireshark (GUI) and (CLI) were selected as representative tools, and ChatGPT was queried for specific commands, guidance on particular settings, and explanations regarding the interpretation of the output. The responses were useful, and exploring a tool in an interactive session was more engaging than reading a page. Especially for the CLI, it provided correct commands facilitating the filtering of certain elements and correctly explaining the output. With respect to the GUI, it was able to highlight the correct settings/locations to use the tool. §.§ Summary ChatGPT serves as an effective tool for acquiring a general understanding of a domain, particularly for individuals who already possess some existing knowledge. It acts as a valuable refresher, albeit one with a few limitations. Notably, it relies exclusively on textual and code listings for explanations, which may be less effective in certain contexts where diagrams or graphics could better convey the information. The process of acquiring in-depth knowledge, however, is hindered if the user lacks a prior understanding of the field. This limitation necessitates follow-up questions and manual validation to counter the instances of AI-generated `hallucinations.' 
Furthermore, the lack of accompanying exercises or practical tasks inhibits the application of acquired knowledge, a crucial step in learning and in the higher-level objectives of Bloom's taxonomy. It does not provide exercises or labs for practical application and also showed weaknesses when it came to helping create them. § CHATGPT-ASSISTED KEYWORD SEARCHING The concept of searching is fundamental to digital forensics, and much of that is based on keyword searching <cit.>. Given that ChatGPT is “a large multimodal model capable of processing image and text inputs and producing text outputs” <cit.>, there does seem to be potential for it to assist in keyword searching. The section below discusses some current and future applications within the search domain. §.§ Generating Regular Expressions Experiments were conducted to generate regular expressions for common entities. For example, a regular expression for credit card numbers was very detailed and included an explanation of its constituent parts, the specific start digits for cards from various providers, and included a disclaimer that it did not validate the checksum using the Luhn algorithm. However, the expression generated would not have taken into account any white space between number groups. Interestingly, when asked for examples that could be used for testing, the numbers provided did not match the generated regular expression since they contained whitespace, despite the claim that “These numbers should match the regular expression provided in the previous answer”. A regular expression for UK car registration plates was successfully generated, with an accurate description highlighting that it only covered the newer scheme in use since September 2001. A disclaimer was also provided describing that the specific letter combinations for the area code were not validated and there may be false positives. Email addresses were also tested, and the regular expression generated was described as a “simple regular expression for matching most email addresses”, with a caveat of “Please note that this regular expression does not cover all possible email address formats allowed by the RFC 5322 standard. It works for most common email addresses, but may produce false negatives or false positives in some cases.” Unfortunately, it fails simple tests, as it only specified the upper case character set for the top-level domain. Again, the examples it provided for testing did not match. It was however possible to request a regular expression matching a simple custom policy number format that was invented and provided to the tool. For example, the prompt supplied was “a policy number takes the format of AB, AF, or AZ, followed by between 3 and 5 numeric digits, a hyphen and then 3-5 digits. Can you generate a regex for that?”, which produced the correct regular expression. Three examples were also provided, which did match. §.§ Generating Keyword Lists Another interesting area is the potential for ChatGPT and similar tools to be used to generate keyword lists. This has been extensively discussed in the areas of Search Engine Optimisation (SEO), with Udemy courses already available on the topic, e.g., “ChatGPT for SEO”[<https://www.udemy.com/course/using-chat-gpt-for-seo-search-engine-optimization/>]. In the context of digital forensics, <cit.> discuss some of the challenges in keyword searching. For example, straight keyword searches fail to match variants of a word, missing typos or misspellings, or missing abbreviations.
It also describes the use of wildcards to attempt to mitigate some of this, with an example of a sexual harassment case and the use of the term `sex*' to catch sex, sexual, sexuality, sexist, sexism. This is however quite limited as an approach and would not match associated words. Testing within this area certainly provided long lists of keywords associated with a main term supplied. One example shows synonyms for cannabis generated, and with further prompting provided associated words rather than direct synonyms, and even emojis that might be related. This goes beyond simple synonym generation, which could be done using existing technology. In other examples, requesting common misspellings of a word was also possible, as were abbreviations. As an additional example, a scenario provided for a sexual harassment investigation is used, asking ChatGPT “If I was conducting a digital investigation into sexual harassment generate a list of keywords that could be used”, it first generated a list of words that could formally describe sexual harassment e.g. `sexual comments', `hostile work environment', `catcalling'. With further prompting, e.g. “What about terms that a victim might include in a message to someone else if they were describing that someone was sexually harassing them?” provided more terms such as `creepy behaviour', `felt humiliated', `powerless', `unwanted compliments'. Also, an alternative prompt of “what about terms to search for that might be in messages from someone that was conducting the sexual harassment” generated another set of keywords that could feed into an investigation, e.g., `sexy', `dirty', `fantasize', `undress', etc. This highlights the need for careful prompt engineering to refine the output. Regarding the quality of this output, no methods were found in the literature on evaluating the effectiveness of keyword lists in a digital forensic investigation, so evaluation of the lists generated is difficult. Further work could engage in studies with investigators to see if they believe terms would result in additional hits, or running these lists over historical cases to determine if additional artefacts could be located with different keyword lists. §.§ Other Searching Topics Within the GPT-4 Technical Report <cit.>, one of the main goals is described as being able to “understand and generate natural language text, particularly in more complex and nuanced scenarios”. This can facilitate some other potential uses of LLMs – specifically finding relevant material without the use of keywords and instead detecting specific types of content. This already exists in some commercial products, e.g., Magnet Axiom has an AI feature that attempts to identify grooming/luring chat content <cit.>. In the context of ChatGPT, given that it has summarising capabilities, there is the potential for a more generalised solution, although at present this is a theoretical exercise since this could not be used due to the need to upload evidence to the online service. 
However, there are many datasets that could be used to evaluate this. For example, a small sample from the Chat Sentiment Dataset[<https://www.kaggle.com/datasets/nursyahrina/chat-sentiment-dataset>] was supplied and ChatGPT was able to respond by describing whether each statement was positive, negative or neutral, although it differed in some places from the tagged value, e.g., the statement “The price is a bit high” is tagged as neutral in the dataset, but ChatGPT reported that it “has a slightly negative connotation, as it suggests that the speaker finds the price to be somewhat excessive or more than expected”. An extensive review of accuracy against such datasets is not within the scope of this paper, especially since the tools could not be used in any real case, but if a local model was available or there was interest in such an evaluation, regardless of current real-world application, then future work could make use of the ChatGPT API to evaluate the sentiment analysis capabilities quantitatively, including on other datasets such as the `Hate Speech and Offensive Language Dataset' [<https://www.kaggle.com/datasets/mrmorj/hate-speech-and-offensive-language-dataset>]. Aggressive content, grooming, manipulative language, or attempted fraud could all be pursued as types of content to identify and flag within a digital investigation. Also, models which can ingest images as well as text provide additional potential capability to digital forensic tools. For example, if an image can be described in text, then that text summary could be processed using traditional keyword searching, which allows for multi-modal searches for evidence to take place. Models such as these could also be used for machine translation, where either content from the data source is translated into the search language, or the keyword terms are translated into the target language. Machine translation was not specifically evaluated as part of this paper, but could be considered as future work. §.§ Summary There are some potential uses of ChatGPT already within the context of searching in digital forensics. Generating regular expressions and enhancing keyword search lists, either with additional terms, or suggesting abbreviations or misspellings, have all been found to be reasonably effective, although the former requires validation and testing of the generated regular expressions. There are also clearly some potential uses for the technology in future; the ability to summarise documents and answer questions about the nature of the content in a user-friendly manner has extensive potential for digital forensic applications. Unfortunately, the inability to upload evidence to such a service prevents this from being useful in its current form. § CHATGPT AND PROGRAMMING IN DIGITAL FORENSICS Digital forensic investigation often necessitates unique functionalities that may not be available in current software or must be rapidly deployed in resource-limited, live forensic scenarios. The capacity to swiftly create a script for a particular task is essential in various digital forensic cases. This section examines GPT-4's potential to assist digital forensic investigators by generating scripts for a set of common tasks. Although numerous interactions with ChatGPT were conducted, each subsequent subsection focuses on a representative example, showcasing GPT-4's code generation performance in that area.
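Before turning to the individual tasks, it is worth illustrating the kind of validation that the searching summary above recommends for generated regular expressions: exercising a candidate pattern against known-good and known-bad strings before relying on it in a case. The following Python sketch does this for the policy-number format described earlier; the pattern is reconstructed from that description as an assumption for illustration and is not ChatGPT's verbatim output.

    import re

    # Pattern reconstructed from the described format: AB, AF or AZ,
    # then 3-5 digits, a hyphen, then 3-5 digits (an assumption, not
    # the expression ChatGPT actually produced).
    policy_re = re.compile(r"\b(?:AB|AF|AZ)\d{3,5}-\d{3,5}\b")

    tests = {
        "AB123-4567": True,    # valid prefix and digit counts
        "AZ12345-999": True,   # five digits, hyphen, three digits
        "AC123-456": False,    # prefix not in the allowed set
        "AB12-345": False,     # too few digits before the hyphen
    }

    for candidate, expected in tests.items():
        matched = bool(policy_re.search(candidate))
        status = "ok" if matched == expected else "MISMATCH"
        print(f"{candidate}: matched={matched}, expected={expected} [{status}]")

A harness of this kind takes minutes to write and, as the email-address experiment above showed, it catches cases where the model's own test examples do not actually match its own expression.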
§.§ File Carving The initial experiment tested GPT-4's capability to generate a script to extract files from a captured disk image (either Expert Witness Format or raw images). The model was prompted to craft a Python script to retrieve PNG files. It produced a script employing three Python libraries: The Sleuth Kit's Python wrapper, a library for processing Expert Witness Format (EWF) files, and a library for dealing with image files. The generated script utilised a function from The Sleuth Kit wrapper to navigate the filesystem. Thus, it did not engage in file carving and relied purely on the file extension and filesystem metadata. The model then improved the proposed method's efficiency by adopting a more pragmatic file carving approach, leveraging the PDF header and the end-of-file byte signature. The revised script replaces the weighty forensic libraries with standard Python file handling, reading the raw disk image as a byte sequence – independent of the filesystem. The script scans the disk image for the PDF header and carves files until it finds an end-of-file marker, matching many file carving tools' performance. Partially overwritten PDF files, if found post-header, would lead to the extraction of large junk files. The script does not restart file carving if it encounters a second PDF header before an EOF, nor does it handle file fragmentation or potential false positives. §.§ RAID Disk Acquisition The next experiment simulated the acquisition of a series of SSD drives that were part of a RAID and mounted to the workstation using USB write blockers. In the prompt, GPT-4 was advised that the level of RAID used was unknown and that this should be determined and dealt with in the first instance. It wrote a Python solution to this problem for a Linux-based system, using a Python library to invoke external RAID-management and disk-imaging tools. This script had two functions: one that used a command-line tool to determine the level of RAID in question, and one that acquired a raw disk image from each connected disk. The generated script presumed four SSDs mounted at fixed, consecutive device paths. No verification was established to ensure RAID level consistency across the disks, or to confirm the disk count matches the number required by the detected RAID level. The RAID level, ascertained from the first disk image, is taken to be the level for all four disks without further validation against the other disks. Once the RAID level is discovered, the script invokes the assembly command, building a RAID volume from the write-blocked, mounted SSDs. The script subsequently generates a raw disk image and then dismounts the RAID volume using the corresponding command. §.§ Password Cracking Password protected and encrypted content is frequently a hindrance to lawful investigation <cit.>. This scenario involves using GPT-4 in a digital forensic investigation with an encrypted, password-protected zip file – asking it to develop a password cracking script. Its initial response was, “As an AI language model, I am not allowed to assist in any activities that could be considered illegal or unethical, including providing code or guidance for cracking passwords”. Despite assurances of legal and authorised activity, the model maintained it could not generate a script, instead suggesting alternative means of accessing file content. These included examining the device's storage or backups with forensic tools, persuading the owner to divulge the password, or undertaking cryptanalysis against the algorithm and/or key (typically computationally infeasible). It was then asked to recommend libraries and sample code: “Certainly!
Here are some libraries and sample code to help you work with encrypted zip files in a legal and ethical manner”. It then provided Python code using two zip-handling libraries, both presuming pre-existing knowledge of the password. It was then prompted that a list of plaintext passwords, named “rockyou.txt", was available for testing. The generated scripts were updated to iterate over this file until a password successfully extracted data, or the list ended. Furthermore, alternate password candidate dictionaries were requested. ChatGPT suggested four viable dictionaries[<https://github.com/danielmiessler/SecLists>, <https://crackstation.net>, <https://www.openwall.com/wordlists/>, <https://wiki.skullsecurity.org/Passwords>]. The password cracking scripts were then successfully modified to include these dictionaries, sequentially testing each until data was successfully extracted, or no password candidates were left – this completed the task that was initially resisted. §.§ Memory Forensics - Recovering Encryption Keys GPT-4 was prompted to script an analysis of a memory dump, using Python, to locate potential AES and RSA encryption keys. Presuming interest in only AES keys of 16, 24, or 32 bytes and RSA keys of 128, 256, or 384 bytes, it developed two Python functions: the former scans a binary file for a specified byte pattern, while the latter measures the entropy of a given byte sequence. GPT-4 arbitrarily set the entropy threshold to 7.5, indicating the value can be adjusted. It then inspected the memory dump file for any byte sequences of the stated lengths with entropy exceeding 7.5. When asked to search for BitLocker encryption keys, the entropy-based search was narrowed to detect 16- or 32-byte sequences (128-bit or 256-bit) having an entropy level of 7.5 or higher. The script was modified to find the Windows-specific Full Volume Encryption Key (FVEK) or Volume Master Key (VMK) patterns in the memory dump. However, the updated script did not significantly differ from the initial one and lacked Windows-specificity. On further prompting to use a specific tool, namely Volatility[<https://www.volatilityfoundation.org/>], the script and the corresponding command line example were revised to search for a profile with Volatility. The tool was invoked from within the Python script. §.§ Summary In the tested scenarios, GPT-4 effectively generated scripts for various digital forensic tasks. The scripts were well-commented, had adequate error checking, and combined different technologies, e.g., integrating Linux tools into a Python script. The system could also provide detailed explanations of the code's functionality and the decisions behind its creation. However, user-level knowledge of scripting languages and digital forensics is essential for application and to spot any unreasonable assumptions, such as limiting encryption key size or assuming only text files contain sought-after regular expressions. Generated code cannot be used blindly. However, any identified limitations can often be rectified by prompting the model with the specific concerns. Interestingly, GPT-4 initially refused to help create code for “gatekept” operations, e.g., potentially unethical or illegal use cases, such as password cracking. However, with further interaction and breakdown of the request into constituent parts, it provided step-by-step advice on techniques, sources, and tools for the restricted task.
Ultimately, it generated and optimised the desired code – while emphasising that it should only be used in an authorised and legal context. Users can cleverly bypass some system protections through prompt engineering, while OpenAI continually works to prevent such “jailbreaking” of their built-in protections. § CHATGPT FOR INCIDENT RESPONSE Crucial steps, especially during incident response, are identifying anomalies, finding suspicious activity and discovering possible attacks. These tasks also imply a certain understanding of existing attack vectors as well as the ways in which they are exploited. This section considers if ChatGPT can be used to facilitate this process. Source Identification Before conducting the main experiments, ChatGPT's capability to identify input sources that are typically encountered during incident response investigations was assessed. Textual artefacts, such as command output or the content of log files, were examined directly, and non-textual artefacts, such as Windows Event Logs or the Registry, were converted to textual representations, since ChatGPT only processes text. Additionally, ChatGPT was prompted to identify the output of a network capture command to test the possibility of providing network capture information. While there were occasional instances of uncertainty regarding the exact source of certain artefacts, ChatGPT consistently interpreted the data correctly, laying a strong foundation for further experiments. §.§ Anomalies In the initial experiments, ChatGPT's ability to detect anomalies in a system was evaluated. For the purpose of this paper, an anomaly was defined as any deviation from a predefined, ordinary, and benign system behaviour. This task may involve identifying unusual processes, log entries, or files. One immediate challenge was the limited amount of data that could be provided to ChatGPT for processing. A default process list created by the standard process-listing command on a clean Ubuntu 22.04 release consists of roughly 200 lines, which had to be split into multiple parts for ChatGPT to accept as a prompt. A possible workaround for this issue is filtering the information, such as providing only process names or selecting specific lines of the output. However, since incident response is often performed without prior knowledge, this method could lead to the loss of potentially crucial information when parent process IDs or process arguments are excluded. In the experiments, ChatGPT was given process listings from Ubuntu 22.04 and asked to identify atypical processes. While it correctly detected most third-party applications and a custom script, it misclassified default applications such as Firefox. Additionally, its responses were non-deterministic for identical prompts. As another example, ChatGPT was provided with the content of an SSH authorized_keys file, which is used by attackers to gain persistence on a system. Without any context, not even an incident responder is able to distinguish between a legitimate and a malicious key. In the experiment, ChatGPT acknowledged this difficulty and offered helpful best practices for ensuring SSH security. However, in one instance, the comment field of a specific key was altered to include the word “hacker”. Although this field is irrelevant as it is meant only for comments, ChatGPT was triggered by the keyword and incorrectly flagged the corresponding key as suspicious, referring to a non-existent “username hacker”.
When later asked for clarification, the model correctly explained that the comment field is intended solely for comments and should not impact the key's legitimacy. §.§ Suspicious Activity We expanded the experiments to a level where anomalies appeared as red flags to any experienced incident responder. This involved creating a reverse shell using a common networking utility, which connects to an attacker's system and executes shell commands. In this example, ChatGPT failed to detect the obvious reverse shell process within the process listing when prompted for suspicious activity, or even specifically for reverse shells. Only when the utility was mentioned by name did it recognize the process as a strong compromise indicator, providing advice on handling the situation, such as terminating the process and consulting a cybersecurity professional. In another scenario, an unsuccessful SSH brute force attack was conducted and ChatGPT was provided with entries from the authentication log. Due to size limitations, the entire log file could not be uploaded. Nevertheless, ChatGPT identified the failed login attempts, detected an SSH brute force attack, and extracted the IP address involved from the log extract. §.§ Attacks In the final series of experiments for this section, ChatGPT's capacity to identify genuine attacks was evaluated, which were classified as behaviours that are not only suspicious but also executed with malicious intent. First, its response to the Follina exploit CVE-2022-30190 was examined, which leverages the Microsoft Support Diagnostic Tool (MSDT) via a Word document <cit.>. In the corresponding event logs, this results in a log entry for the spawned MSDT process, with Microsoft Word identified as its parent, which is most likely a clear indicator of an exploit being used. The corresponding log file was provided to ChatGPT. Although ChatGPT does not recognize the Follina exploit due to its training data ending in 2021, it successfully interpreted the log file and highlighted potential indicators for further examination. In another example, ChatGPT was prompted to analyse captured network output from an ARP spoofing attack <cit.>, in which a MAC address claims to be responsible for a multitude of IP addresses, which is usually not the case. ChatGPT was unable to identify this anomaly but offered explanations for the behaviour, including ARP spoofing, when explicitly asked. Furthermore, ChatGPT's ability to parse and interpret data was tested. For network packets, this task is easily performed by tools such as Wireshark, which identifies protocols, analyses them, and presents the results in a way that can be interpreted by a human. ChatGPT was evaluated against the Heartbleed vulnerability (CVE-2014-0160), a bug in TLS implementations of the heartbeat protocol which enables memory extraction from a server by sending a malformed heartbeat request <cit.>. Since this vulnerability was discovered in 2014, ChatGPT can provide a detailed explanation and detection methods. However, when given a single malformed packet of a heartbeat request, ChatGPT only parsed and presented basic information like IP addresses and ports. Upon being prompted to interpret the packet as a TLS packet, it parsed the content as TLS fields. However, inconsistencies were observed in the TLS record type identified by ChatGPT across multiple outputs. To investigate further, this experiment was executed 100 times, asking ChatGPT to report only the identified TLS record type. The results are shown in Table <ref>. These findings demonstrate that ChatGPT's non-deterministic nature led to varying responses.
It is important to note that record type 0x14 was spelt differently in three instances. More significantly, none of the provided record types were correct. The actual record type should have been Heartbeat 0x18. Further manual analysis revealed that ChatGPT correctly extracted the field defining the type, but misinterpreted it entirely. Consequently, ChatGPT failed to detect the exploited heartbeat vulnerability in this packet. §.§ Summary ChatGPT demonstrates the capacity to aid in the detection of deviations from known, typical behaviours, such as the default configuration of an operating system. However, the experiments revealed inconsistent results, as well as some apparent non-default processes being overlooked in certain runs. Moreover, ChatGPT's performance suffers when contextual knowledge is necessary. Since it lacks training on specific organizational processes, users, logs, or procedures, it cannot effectively analyse information unique to a particular organization or system. In identifying suspicious activity, ChatGPT seems to perform better when provided with input that includes a textual description of an event, such as a failed password login attempt. This observation held true for both Linux and Windows logs, which typically contain additional descriptions. When such information is absent, ChatGPT may overlook critical details, like a reverse shell. A similar pattern emerges in the detection of specific attacks. Beyond the evident limitation of lacking real-time information, which hampers its ability to identify current threats, ChatGPT also struggled to deduce an attack like ARP spoofing based on the provided data. This challenge is particularly pronounced for binary representations, where incorrect and inconsistent assumptions were made during data parsing. § CHATGPT FOR GENERATING TEACHING SCENARIOS When teaching digital forensics, the importance of practical exercises cannot be overstated, and the associated challenges are discussed in <cit.>. Specifically, <cit.> differentiates between “skill specific case studies” and “holistic skill case studies”. It is the latter that requires substantially more effort to create and is described in <cit.> as “Data generation for this type of exercise usually involves construction of a scenario, a storyboard, and simulating the user's actions over the course of several months”. There are attempts to simplify and automate the process of carrying out a series of actions over a long period of time to provide background activity <cit.>. However, the scenario specifics still require the construction of storyboards, users, and content. Given the impact that ChatGPT has made in the art world for both images[<https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated>] and poetry and stories[<https://towardsdatascience.com/using-ChatGPT-as-a-creative-writing-partner-part-1-prose-dc9a9994d41f>], this does seem to be something that ChatGPT could assist with. §.§ Storyboarding It was very easy to prompt ChatGPT to generate an overall storyboard for an intellectual property theft scenario.
For example, the prompt “generate an outline timeline of a scenario where someone within a workplace starts a new job and slowly becomes discontent over a few months and begins to steal intellectual property” produced a 6-month summary of activity that went from the employee joining the company in month 1 and being excited about the opportunity, to month 3 where discontent starts to grow and they “realize that company values don't align with personal beliefs”, through to month 6 where the “Employee's discontent reaches a peak”, there is “Increased resentment toward the company and coworkers”, and they are “considering quitting or finding a new job”. Further prompting also generated ideas for their internet history over the course of those months, ranging from company-related information in month 1, through to “Techniques for bypassing security measures” and “Online forums discussing illicit activities” in month 4. Further prompting provided specific websites and Internet search terms that could be used to generate a synthetic scenario. For different scenarios involving stalking, it was also possible to request suggestions for potential digital evidence that would be available on the iPhone of the suspect, and with further prompting it was possible to produce a very rich set of scenario notes including innocuous activity as well as actions related to the scenario. This could inform data generation, either manually or with automated tools. §.§ Character Profiles and Interests During scenario synthesis, it is often necessary to build characters and identities that will either be victims or perpetrators of a crime. Inspired by the use of ChatGPT in the arts fields, prompts were constructed to generate characters for use in digital forensic teaching scenarios. For example, “generate a persona for an adult male in his 20s that is achieving low grades and university and might turn to crime” produced a summary of a 23-year-old male with a background, education history, personality, financial situation, criminal tendencies, and goals and ambitions. Subsequent prompting was able to generate high-level topics summarising his internet history that would include “academic, entertainment, and potentially incriminating search terms”, followed by five themes and example search terms within each. §.§ Synthetic Content Considering the need in teaching scenarios to have data in the generated disk images that includes both activities related to the crime under investigation and realistic background activity, additional content was requested. For example, it was possible to generate a chat conversation with several of the character's classmates and with his brother, to generate an email from the university stating that his assignment was late and would not be marked, and a response. A list of sample sociology assignments was also generated. These could all add realism to the scenario. Regarding the aforementioned stalking scenario, a set of anonymised messages could be generated, along with internet history suggestions for the suspect. However, asking for a list of cell towers that the suspect connected to resulted in a message that this was not possible as it required access to real-world data, but that a fictionalised list could be created. §.§ Summary Considering the value of ChatGPT for this digital forensics use case, the results were extremely well constructed and potentially very useful.
Since this is not in an investigative context and there is no incorrect answer, there is little issue with the results generated in this way. Some responses generated less convincing scenarios, e.g., another scenario with a university student turning to crime involved an art student getting involved with a criminal gang and creating counterfeit artwork or forging documents. This is not bad for a teaching scenario but is not as realistic. However, this was easily corrected by suggesting that the alternative drug dealing suggestion was better, and the scenario was updated. Other potential risks exist if this was fed into a system that auto-generated content, which could result in material that educators may not want in their scenario disk images. This would need to be manually checked so that nothing inappropriate was added. Nevertheless, for creative generative applications, ChatGPT offers significant potential. Other issues in the storyboarding arose when asking to create a detailed summary of the activities that would need to be carried out on the device to generate the synthetic dataset, as some aspects were missed. However, with further prompting, this was corrected and a new list was generated. Finally, GAI tools that create images and videos could also add to the richness of the synthetic scenario data generated. § DISCUSSION The experiments outlined in this work assessed the effectiveness of ChatGPT for various aspects of digital forensic investigations. The overall results are discussed below. The Good Through this work's experiments, three major strengths were identified: creativity, reassurance, and avoidance of the blank page syndrome. ChatGPT has proven itself useful for tasks where it cannot be wrong, which, with respect to digital forensics, are creative tasks such as forensic scenario creation, as outlined in Sec. <ref>, or creating inputs, e.g., keyword lists, which may serve as input for further analysis. Secondly, it provides reassurance, i.e., if an examiner has prior knowledge, it may be cross-compared with ChatGPT. However, it is important to note that prior knowledge is required to identify hallucinations. It was found helpful for code generation and explanation, refreshing a learner's memory on a specific topic, or doing a rudimentary analysis of evidence, e.g., finding suspicious activity in log files or other listings. Lastly, ChatGPT is excellent for obtaining a starting point and avoiding the blank page syndrome. For instance, it was used to create basic code snippets which can then be used further. While the generated code was not perfect, it was documented and provided a solid starting point. In most cases, it is better to have an existing skeleton instead of starting with an empty project. The Bad Naturally, ChatGPT also has some weaknesses requiring it to be used with caution: the quality and age of its training data, the handling of highly specialised and uncommon tasks, and the way one interacts with ChatGPT. As a language model, it is trained on data and thus it may be biased and outdated. This means it cannot be questioned about the newest artefacts, e.g., to learn about them or where they are located. Generally, the digital forensic community, compared to some other communities, is rather small and therefore the amount of training data is relatively small too. The more specialised a scenario was, the less reliable ChatGPT's answer, which makes sense as these scenarios are likely not contained in the training data.
ChatGPT is text-based, whereas many challenges in digital forensics require the analysis of various kinds of data, e.g., network packets. While it is always a possibility to provide the information in hex, the experiments outlined as part of this paper demonstrate that this works less reliably. In addition, there is also a limitation in terms of input and output length, e.g., one cannot provide a complete log file but must prefilter it first. Lastly, the output is not deterministic, which is not desired in digital forensics, where reproducibility is a guiding principle. The Unknown Obviously, one cannot upload real evidence to ChatGPT and thus usage is still limited. However, LLMs may be included in forensic products in the future, which could then open a variety of new use cases, perhaps to the extent that a basic analysis does not require comprehensive training. For instance, this may allow queries such as: “Find all text messages that may be considered bullying” or “scan the hard drive and see if you find any GPS coordinates (e.g., in EXIF data) that indicate that the suspect was at location X”. In other words, interacting with forensic software may become more natural and thus could be performed to some extent by a non-technical investigator. The experiments showed that not all outputs from ChatGPT are reliable and that they have to be used with caution, especially as `hallucinations' make it difficult to identify if an answer is correct. On the other hand, similar problems are encountered when relying on information found online in non-peer-reviewed sources such as blogs (which likely have been used by ChatGPT as a training basis). This means that, regardless of the source, an examiner is required to understand the information before making use of it. Questions that need to be looked at include: which sources are the least error-prone, and which information is easier for an examiner to comprehend? Summary This paper's findings indicate that, while ChatGPT has significant potential in the digital forensic investigation field, human expertise remains essential. A critical question arising from this research is how to strike the right balance between leveraging the strengths of AI and maintaining the role of human expertise. § LIMITATIONS While this study provides valuable insights into the potential applications of ChatGPT in digital forensic investigation, it is crucial to acknowledge the limitations that may impact the generalisability and applicability of the findings of this paper. Firstly, the experiments conducted in the study do not cover all aspects of digital forensic investigation and have been conducted in a controlled environment. There are many more examples and use cases that could be tested, but could not be considered and performed as part of this study (due to space constraints). In addition, the experiments might not fully represent the complexity and challenges faced in real-world digital forensic investigations. Results strongly depend on the prompt, i.e., a minor modification in the prompt has led to a very different result. Moreover, given the nondeterministic behaviour of ChatGPT, the results discussed in this paper are not directly reproducible, which is why the interactions analysed as part of this paper are provided statically in the associated GitHub repository[<https://github.com/markscanlonucd/ChatGPT-for-Digital-Forensics>].
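One way future work could address both the prompt sensitivity and the non-determinism noted above, while also enabling the quantitative evaluation suggested in the searching section, is a small API harness that fixes the sampling temperature and logs every prompt and response for later audit. The sketch below is an assumption-laden illustration: it presumes the OpenAI Python client and a labelled CSV of messages with `message' and `label' columns, neither of which was part of this paper's web-interface experiments.

    import csv
    import hashlib
    import json

    from openai import OpenAI  # assumes the OpenAI Python client and an API key in the environment

    client = OpenAI()

    def classify(message, model="gpt-4"):
        prompt = ("Classify the sentiment of this chat message as "
                  "positive, negative or neutral:\n" + message)
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduces, but does not eliminate, run-to-run variation
        )
        answer = response.choices[0].message.content.strip()
        # log enough to make the run auditable after the fact
        return {"prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
                "model": model, "answer": answer}

    def evaluate(dataset_csv, log_path="run_log.jsonl"):
        correct = total = 0
        with open(dataset_csv, newline="") as f, open(log_path, "w") as log:
            for row in csv.DictReader(f):
                record = classify(row["message"])
                record["expected"] = row["label"]
                log.write(json.dumps(record) + "\n")
                total += 1
                correct += int(record["answer"].lower().startswith(row["label"].lower()))
        return correct / total if total else 0.0

Such a harness would not remove the underlying limitations, but it would make individual runs documented and comparable, in the same spirit as the static transcripts provided in the repository above.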
§ CONCLUSIONS AND FUTURE WORK The paper described a series of eight experiments to explore the potential applications of ChatGPT for digital forensics and provided valuable insights. Many of the limitations identified are consistent with findings from other studies and existing system documentation. In particular, the phenomenon of `hallucination', which nicely disguises the alternative term `incorrect', is a recurring theme. This obfuscation makes the use of ChatGPT in digital forensics a precarious endeavour and underlines the importance of caution and close scrutiny. Nonetheless, ChatGPT shows potential in certain areas. For example, it can serve as an effective assistant in the area of code generation, provided the user has sufficient knowledge to evaluate, interpret, and correct the results. This operator-dependent effectiveness mirrors that of other automated tools commonly used in digital forensics. Other possibilities are the generation of keyword lists and the creation of storyboards for test scenarios. In terms of further work, there are other areas in digital forensics that could be explored but are not suited to an online service model and require a locally deployable model. If such a requirement were met, it would be interesting to explore tasks such as summarising case notes created during an examination, further evaluation of machine translation, image-to-text translation, and more extensive analysis capabilities, including timelines, social network analysis, and authorship attribution. It is important to remember that despite the hype and sometimes impressive capabilities, this technology is still rather new. This is cause for concern if it is overused, but it also shows great potential for the future, and like all automation for digital forensics, it is useful and necessary, but requires caution and competent human oversight. § CREDIT AUTHORSHIP CONTRIBUTION STATEMENT Mark Scanlon, Frank Breitinger, Chris Hargreaves, Jan-Niclas Hilgert, John Sheppard: Conceptualization, Methodology, Investigation, Writing - Original Draft, Writing - Review & Editing. All authors contributed equally. While ChatGPT was the focus of the research conducted as part of this paper, it did not contribute to the paper's content or analysis other than where directly quoted or described.
http://arxiv.org/abs/2307.03911v1
20230708060513
A Novel Pseudo-Random Number Generator Based on Multi-Objective Optimization for Image-Cryptographic Applications
[ "Takreem Haider", "Saúl A. Blanco", "Umar Hayat" ]
cs.CR
[ "cs.CR", "cs.IT", "math.IT" ]
Takreem Haider (corresponding author, [email protected]) and Saúl A. Blanco ([email protected]), Department of Computer Science, Indiana University Bloomington, IN 47408, USA; Umar Hayat (corresponding author, [email protected]), Department of Computer Science, University of Surrey, Guildford, Surrey, GU2 7XH, UK. Pseudo-random number generators (PRNGs) play an important role in ensuring the security and confidentiality of image cryptographic algorithms. Their primary function is to generate a sequence of numbers that possesses unpredictability and randomness, which is crucial for the algorithms to work effectively and provide the desired level of security. However, traditional PRNGs frequently encounter limitations like insufficient randomness, predictability, and vulnerability to cryptanalysis attacks. To overcome these limitations, we propose a novel method, namely an elliptic curve genetic algorithm (ECGA), for the construction of an image-dependent pseudo-random number generator (IDPRNG) that merges elliptic curves (ECs) and a multi-objective genetic algorithm (MOGA). The ECGA consists of two primary stages. First, we generate an EC-based initial sequence of random numbers using pixels of a plain-image and parameters of an EC, departing from traditional methods of population initialization. In our proposed approach, the image itself serves as the seed for the initial population in the genetic algorithm optimization, taking into account the image-dependent nature of cryptographic applications. This allows the PRNG to adapt its behavior to the unique characteristics of the input image, leading to enhanced security and improved resistance against differential attacks. Furthermore, the use of a good initial population reduces the number of generations required by a genetic algorithm, which results in decreased computational cost. In the second stage, we use well-known operations of a genetic algorithm to optimize the generated sequence by maximizing a multi-objective fitness function that is based on both the information entropy and the period of the PRNG. By combining elliptic curves and genetic algorithms, we enhance the randomness and security of the ECGA. To evaluate the effectiveness and security of our generator, we conducted comprehensive experiments using various benchmark images and applied several standard tests, including the National Institute of Standards and Technology (NIST) test suite. We then compared the results with state-of-the-art PRNGs. The experimental results demonstrate that the ECGA outperforms the state-of-the-art PRNGs in terms of uniformity, randomness, and cryptographic strength. Keywords: Pseudo-random number generator; Elliptic curve; Genetic algorithm; Multi-objective optimization. § INTRODUCTION Pseudo-random number generators (PRNGs) are extensively used in numerous fields, such as statistics, computer science, cryptography, and gaming <cit.>. PRNGs generate a sequence of numbers that appear random but are produced from a predetermined starting point using a mathematical formula. The quality of a PRNG is of utmost importance in the field of cryptography, as the security of a cryptographic system depends on the randomness and unpredictability of the keys generated. A good PRNG should possess the following key features to ensure the quality and security of the generated random numbers <cit.>. 1) Randomness: To achieve the quality of a reliable PRNG, the generated number sequence must exhibit no distinguishable characteristics when compared to a truly random sequence.
The generated output should be devoid of any identifiable patterns, and each number in the sequence should not correlate with the numbers that precede or follow it. 2) Unpredictability: The PRNG needs to possess resistance against attacks aimed at forecasting future outputs by analyzing past outputs. To achieve this, the PRNG must possess a substantial internal state and employ a robust cryptographic algorithm. 3) Periodicity: Each PRNG has a specific point at which the sequence it produces starts repeating. A PRNG is considered to be of high quality if its period is very long, ideally approximately equal to the total number of possible outputs. 4) Security: The primary focus of the PRNG should be on maintaining strong security measures and guaranteeing resilience against commonly recognized forms of attacks, such as brute-force attacks. 5) Efficiency: The PRNG should possess efficiency in terms of both computational speed and memory consumption, particularly for tasks that involve generating a substantial number of random values, such as the creation of cryptographic keys. Designing a PRNG with optimal randomness is a challenging task that requires balancing many different factors <cit.>. The quality of random numbers generated by a PRNG is crucial in many applications, and it is essential to carefully evaluate and choose a PRNG that meets the quality and security requirements of the application or system. In recent years, chaotic systems have become popular in the development of PRNGs due to their desirable features such as unpredictability, irreducibility, sensitivity to initial conditions, ergodicity, and chaoticity <cit.>. Various PRNGs have been designed based on chaotic maps, for instance, the PRNGs described in <cit.>. Murillo et al. <cit.> introduced a PRNG that uses an improved 1D logistic map to generate pseudo-random numbers with strong statistical characteristics. Hamza <cit.> presented a method that uses the Chen chaotic system to construct a PRNG for cryptographic purposes involving images. This method <cit.> addresses the issue of non-uniform distribution commonly found in pseudo-random number sequences (PRNS) generated by the Chen chaotic system and produces PRNS with a high level of randomness. Xia and Zheng <cit.> developed a novel PRNG that utilizes a controlled digital chaotic system. The purpose of this generator is to mitigate the dynamical degradation that arises from the use of chaotic systems. Meranza et al. <cit.> utilize an improved version of the Henon map to design a PRNG. Their research indicates that the cryptographic properties of PRNS generated by the enhanced Henon map are superior to those produced by the traditional Henon map. Barani et al. <cit.> designed a PRNG for creating PRNS by utilizing a generalized Newton complex map. To ensure the randomness of the generated sequences, several security measures were implemented and the outcomes indicated that this generator can produce secure PRNS. Zhao et al. <cit.> used a hyper-chaotic system to design a PRNG that exhibits high levels of randomness. In addition, Wang et al. <cit.> constructed a PRNG that is based on a logistic chaotic system. Gayoso et al. <cit.> introduced a new PRNG that utilizes the residue number system, which enables the creation of an exceptionally efficient circuit that operates distinctly compared to conventional generators. Furthermore, in <cit.>, the authors create a structure resembling a Hopfield neural network where each neuron is substituted with a compact PRNG.
Yu et al. <cit.> developed a PRNG that uses a chaotic system and an improved Hopfield neural network. Their PRNG is designed to decrease the impact of chaotic degradation and enhance the quality of PRNS. Cang et al. <cit.> presented a PRNG based on a generalized conservative Sprott-A chaotic system. Agarwal et al. <cit.> designed a PRNG that is based on the cascade fractal function. The cascade function is created using a combination of two seed maps, which improves the unpredictability and randomness of the PRNG. Shi and Deng <cit.> proposed a new PRNG that is based on the Baker chaotic map and can generate highly random PRNS. Zang et al. <cit.> developed an algorithm for generating PRNS using complex polynomial chaotic maps. The PRNS generated by this method show strong randomness and are vulnerable to differential attacks. A significant limitation associated with chaos-based cryptography arises from the fact that chaotic maps are designed to work with real numbers, which is not ideal for cryptographic applications that use finite numbers. Round-off errors in quantizing real numbers can create issues that result in irreversible functions, making decryption impracticable. The use of elliptic curve cryptography (ECC) is gaining popularity in modern cryptographic applications due to its effectiveness, strong security measures, and resilience against attacks. The difficulty of solving the elliptic curve discrete logarithm problem (ECDLP) is a significant factor that motivates the preference for elliptic curves (ECs) over chaotic maps in the design of cryptographic algorithms. Furthermore, ECs exhibit the advantage of necessitating significantly reduced key sizes in comparison to chaotic maps. This characteristic renders them more efficient and viable for implementation within environments that face limitations in terms of available resources. As a result, various PRNGs using the arithmetic of ECs have been designed. Hayat and Azam <cit.> proposed an algorithm for generating PRNS which is based on ordered ECs. This method is efficient when compared with previously introduced PRNGs over ECs. However, the generator <cit.> is not suitable for ECs over a large prime p due to its high space and time complexity of 𝒪(p) and 𝒪(p^2), respectively. A PRNG based on a Mordell elliptic curve is introduced by Ullah et al. <cit.>, which is more efficient and has better cryptographic properties than <cit.>. However, the time and space complexity of the generator <cit.> is 𝒪(mp) and 𝒪(m), respectively, where m ≤ p is the size of the PRNS, due to which this generator is not compatible with ECs associated with a large prime p. An isomorphic EC-based PRNG, developed by Haider et al. <cit.>, produces sequences with high randomness and outperforms existing generators in terms of cryptographic properties; however, it faces compatibility issues with large prime ECs when the parameters of the EC and the size of the ordered set are not predetermined. Recently, Adhikari and Karforma <cit.> presented a PRNG over large prime ECs. To generate pseudo-random numbers, firstly, the y-coordinate of the generated point over the EC is extracted and then the least significant 8 bits of the extracted y-coordinate are converted to their decimal representation. Although this PRNG is compatible with ECs over large primes, to obtain a PRNS of length ℓ this algorithm needs to generate ℓ points over the EC. So, for large ℓ, this method is not suitable for real-world applications.
The existing EC-based PRNGs have exhibited favorable outcomes; however, they do not guarantee the generation of PRNS with security levels closely approximating the theoretically optimal values. §.§ Our contribution To address the aforementioned issues, our focus is directed toward the design of a PRNG that produces random numbers of high quality, exhibiting optimal randomness. The following steps outline our contributions: 1) To overcome the challenges posed by low randomness, predictability, and vulnerability to cryptanalysis attacks, we employ a multi-objective genetic algorithm (MOGA) optimization technique. This approach is chosen because MOGA allows us to simultaneously optimize multiple objectives, such as randomness, predictability, and resistance to cryptanalysis attacks. By employing MOGA, we can enhance the overall quality and security of the generated random numbers. 2) Our method takes advantage of the image itself as the seed for the initial population in the genetic algorithm optimization process. This choice is motivated by the image-dependent nature of cryptographic applications. By using the image as the seed, the PRNG can adapt its behavior to the unique characteristics of the input image. This adaptation improves the security of the PRNG, making it more resistant to differential attacks and increasing the level of protection against potential threats. 3) We generate an initial sequence of random numbers based on both the plain-image and the elliptic curve. This departure from traditional methods of population initialization is selected for a specific reason. By using the plain-image and the elliptic curve, we ensure that the initial solution provided to the genetic algorithm is well-chosen and has desirable properties. This decreases the number of generations required by the genetic algorithm and thus minimizes the overall computation time. 4) The genetic algorithm is utilized to improve the generated sequence by maximizing a fitness function that considers both the information entropy and the period of the pseudo-random sequence. This choice is made because the genetic algorithm is effective in optimizing problems that have multiple objectives. By maximizing the fitness function, we can enhance the information entropy and period of the generated sequence, leading to superior-quality random numbers. 5) To evaluate the performance and security of our proposed PRNG, extensive experiments are conducted using various benchmark images. The results are then compared with the existing state-of-the-art PRNGs. The experiments serve as empirical evidence supporting our claims of enhanced performance and security. §.§ Paper organization The rest of the paper is organized as follows. In <ref>, we present the preliminary theoretical background and notions used in the paper. Moreover, in <ref>, we describe our method ECGA for the construction of the proposed IDPRNG based on elliptic curves and a genetic algorithm. In <ref>, we present a comprehensive security analysis and the comparison of the ECGA with the state-of-the-art PRNGs, establishing empirically that our method is superior. Finally, in <ref>, we summarize the findings of the ECGA and discuss possible future directions of our line of work. § PRELIMINARIES In this section, we present the fundamental concepts that are used in our algorithm and are needed to follow the rest of the paper. §.§ Elliptic Curves Let p>3 be a prime number and a,b be two integers, and we use 𝔽_p to denote the finite field of p elements.
An elliptic curve (EC) denoted by E_p,a,b is defined as the set of solutions (x,y) ∈𝔽_p ×𝔽_p satisfying the equation y^2 ≡ x^3 + ax + b (mod p), along with an additional point at infinity denoted by ∞, where a, b ∈𝔽_p and 4a^3 + 27b^2 ≢0 (mod p). The group law operation +_g on an elliptic curve E_p,a,b is defined as follows <cit.>. We denote the multiplicative inverse of a ∈𝔽_p by a^-1. Point addition: Let G_1 = (x_1,y_1) and G_2 = (x_2,y_2) be two points on E_p,a,b such that G_1 ≠ G_2 and x_1 ≠ x_2. The resulting point G_1 +_g G_2 = (x_3, y_3) of point addition over E_p,a,b is given by: λ≡ (y_2 - y_1)(x_2 - x_1)^-1 (mod p), x_3 ≡λ^2 - x_1 - x_2 (mod p), y_3 ≡λ(x_1 - x_3) - y_1 (mod p). Point doubling: Let G_1 = (x_1,y_1) be a point on E_p,a,b such that y_1 ≠ 0. The resulting point G_1 +_g G_1 = (x_3, y_3) of point doubling over E_p,a,b is given by: λ≡ (3x_1^2 + a)(2y_1)^-1 (mod p), x_3 ≡λ^2 - 2x_1 (mod p), y_3 ≡λ(x_1 - x_3) - y_1 (mod p). §.§ Genetic Algorithm In recent times, there has been significant interest among researchers in evolutionary algorithms (EAs), which have been recognized as valuable in numerous applications (see, e.g., <cit.> and references therein). One of the most widely recognized types of EAs is genetic algorithms (GAs), which are search heuristics. Several applications have been devised using genetic algorithms <cit.>. Fundamentally, a GA can be divided into four primary stages <cit.>, which we describe below. * Initial population: During the creation of the population, an initial set of individuals or solutions is generated. These individuals are designed to form the starting point for the genetic algorithm. They represent possible solutions to the problem being addressed and are encoded in a format that allows the algorithm to work with and manipulate them. * Selection: During the selection step, a portion of individuals from the population is selected as parents for the next generation. This selection is primarily determined by the fitness or quality of each individual solution. Individuals with higher fitness scores are more likely to be chosen as parents since they are considered to have superior solutions. * Crossover: During the crossover step, new solutions are generated by merging the genetic material of chosen parent individuals. The genetic material, typically represented as chromosomes or sequences of values, is swapped between parents to produce offspring. This procedure emulates the natural recombination of genes that takes place during reproduction. * Mutation: Mutation involves a spontaneous and random alteration in a specific aspect or trait of a solution. During the mutation step, slight modifications are made to the genetic composition of individual solutions. This randomness plays a crucial role in introducing fresh genetic variations within the population. A genetic algorithm explores the search space through multiple iterations, aiming to discover an optimal or nearly optimal solution for the given problem. § ELLIPTIC CURVE GENETIC ALGORITHM (ECGA) In this section, we present our novel method, the elliptic curve genetic algorithm (ECGA), for generating pseudo-random numbers using elliptic curves and a genetic algorithm. The effectiveness of a PRNG highly depends on its tendency to produce random and unpredictable sequences. The quality of a PRNG is significantly enhanced by two important factors, namely high entropy and a long period.
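As a brief aside, the group-law formulas above can be exercised directly in a few lines of code; the following Python sketch is an illustration only (the paper's experiments were implemented in MATLAB, and the toy curve below is not one of the NIST curves used in the evaluation).

    def inv_mod(x, p):
        # multiplicative inverse modulo the prime p (Python 3.8+)
        return pow(x, -1, p)

    def ec_add(P, Q, p, a):
        # Add points P and Q on y^2 = x^3 + a*x + b over F_p; None denotes the point at infinity.
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                            # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * inv_mod(2 * y1 % p, p) % p   # point doubling
        else:
            lam = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p        # point addition
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

    # Toy example: successive multiples of a base point G
    p, a, b = 97, 2, 3        # small illustrative parameters only
    G = (3, 6)                # satisfies 6^2 = 3^3 + 2*3 + 3 (mod 97)
    P = G
    for k in range(1, 5):
        print(k, P)
        P = ec_add(P, G, p, a)

Note that the parameter b does not appear in the group-law formulas themselves; it only constrains which points lie on the curve.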
The presence of high entropy in a sequence of numbers makes it difficult for an attacker to predict the next number, while a long period decreases the chances of repetitive patterns that could be used for exploitation. We propose a generator that consists of two important stages. Initially, we utilize points on elliptic curves to create a sequence of random numbers. Subsequently, we employ the operations of a genetic algorithm to maximize a fitness function that considers multiple objectives. This fitness function guarantees a higher degree of randomness by considering both the information entropy and the period of the sequence. By employing an appropriate initial solution based on ECs, the optimization process reduces the number of generations needed, thus decreasing computational time. Furthermore, our proposed method ECGA enhances the unpredictability and randomness of the PRNG by incorporating elliptic curves and a genetic algorithm. We provide a concise overview of each stage of the ECGA as follows. §.§ Initialization We use elliptic curves to generate an initial sequence of pseudo-random numbers. The initialization process comprises the following procedures: Step 1: Let I be a plain-image given as a two-dimensional array with dimensions r × s, where each element belongs to the symbol set [0, 2^m-1] and m represents the number of bits per pixel of the image. Here, if n_1, n_2 are two integers with n_1 < n_2, then [n_1,n_2] = {n_1, n_1+1, …, n_2}. We consider an elliptic curve E_p,a,b defined by the equation y^2 ≡ x^3 + ax + b (mod p), where p, a, and b are parameters of the curve and G = (x_0,y_0) is a base point on the curve. Let H_I, H_a, H_b, and H_p denote the SHA-256 hash values of I, a, b, and p, respectively. The proposed approach begins by selecting an initial point denoted as G_0 = (x_0,y_0) on E_p,a,b. Subsequently, a series of n points G_k = (x_k,y_k) is generated, where 1 ≤ k ≤ n. Step 2: Calculate the binary values of the x and y coordinates of the point G_k = (x_k,y_k), with 1 ≤ k ≤ n. We define the function B(x_k) that takes as input the decimal representation of x_k and outputs its binary representation as a sequence of u bits: B(x_k) = x_k^1, x_k^2, …, x_k^u, where u is the number of bits needed to represent x_k in binary form. Similarly, we define the function B(y_k) that takes as input the decimal representation of y_k and outputs its binary representation as a sequence of v bits: B(y_k) = y_k^1, y_k^2, …, y_k^v, where v is the number of bits needed to represent y_k in binary form. Step 3: For α ∈ { I, p, a, b }, let H_α be the corresponding SHA-256 bit string from Step 1, namely H_α = { h_α^i : 1 ≤ i ≤ 256, h_α^i ∈ { 0, 1 } }. We furthermore define B^x_a,b to be a binary sequence of length 3ℓ', where ℓ' = min(u,256), obtained by repeatedly merging the corresponding binary bits of H_a, B(x_k), and H_b in an alternating manner as follows: B^x_a,b = h_a^1, x_k^1, h_b^1, h_a^2, x_k^2, h_b^2, …, h_a^ℓ', x_k^ℓ', h_b^ℓ'. Similarly, we define B^y_I,p to be a binary sequence of length 3ℓ'', where ℓ'' = min(v,256), obtained by merging the binary bits of H_I, B(y_k), and H_p as follows: B^y_I,p = h_I^1, y_k^1, h_p^1, h_I^2, y_k^2, h_p^2, …, h_I^ℓ'', y_k^ℓ'', h_p^ℓ''. Step 4: Generate a binary sequence B^x,y_I,p,a,b of length ℓ, where ℓ = 3(ℓ' + ℓ''), by concatenating B^x_a,b and B^y_I,p as follows: B^x,y_I,p,a,b = B^x_a,b +_C B^y_I,p. Note that the concatenation operation denoted by +_C is simply the operation of appending the second sequence to the end of the first sequence.
Step 5: Let B_z denote a randomly selected binary sequence of length ℓ. We define the resultant binary sequence B^x,y_I,p,a,b,z of length ℓ bit-wise as follows: the i-th bit of B^x,y_I,p,a,b,z is 0 if ξ_i = ξ'_i and 1 if ξ_i ≠ ξ'_i, for each i ∈ { 1,2,…,ℓ }, where ξ_i and ξ'_i are the i-th elements of the binary sequences B^x,y_I,p,a,b and B_z, respectively (i.e., B^x,y_I,p,a,b,z is the bit-wise XOR of B^x,y_I,p,a,b and B_z). Step 6: Divide the binary sequence B^x,y_I,p,a,b,z into segments of a predetermined length m and convert each segment into its decimal representation. Let S_d denote the decimal representation of B^x,y_I,p,a,b,z. Therefore, S_d can be expressed as a sequence of decimal values δ_1, δ_2, …, δ_t, where t = ⌊ℓ/m ⌋. Note that each of the decimal values in the sequence S_d falls within the range [0, 2^m-1]. Step 7: Repeat Steps 2 through 6 for each k ∈ { 1, 2, …, n }. Let Δ(I,p,a,b,x,y,z,n) be the resulting sequence of length n × t. Step 8: Choose three positive integers ϕ, ψ, and φ. The proposed IDPRNG is represented by the function Ω : Δ→ [0, 2^m-1], where Ω(δ_i) ≡ϕδ_i + ψδ_i+1 + φ (mod 2^m). Thus, Ω(I,p,a,b,x,y,z,n,ϕ,ψ,φ) is the initial sequence of random numbers based on the parameters I, p, a, b, x, y, z, n, ϕ, ψ, and φ. Henceforth, we represent it by Ω_I. §.§ Fitness function A fitness function is a tool used to measure how closely a given solution matches the ideal solution. The proposed algorithm aims to find the best possible solution for a given problem. The degree of uncertainty in a PRNS is often measured using information entropy (H), which is an important measure of randomness. For a PRNS to be considered effective, it must contain a high level of uncertainty. The higher the entropy value, the stronger the generator is considered to be. Let Ω be a PRNS taking values from [0, 2^m-1]. The entropy H(Ω) of Ω is defined as: H(Ω) = - ∑_i=1^2^m P(ω_i) log_2 P(ω_i), where P(ω_i) represents the probability of the i-th element ω_i in Ω. Apart from entropy, the period of a PRNS is also a significant factor in assessing its randomness. A PRNS with a long period is generally considered good for cryptographic purposes. The period of Ω, denoted by T(Ω), is the least positive integer T for which ω_i+T = ω_i for all i ≥ 1. The case T(Ω) = ℓ(Ω), i.e., the period equals the length of the sequence, is optimal, since then Ω is considered more secure. To achieve our objective, a multi-objective optimization function is employed that seeks to maximize both the information entropy and the period of the initially generated pseudo-random number sequence Ω of length ℓ. The purpose of this function is to generate PRNSs that have both maximum entropy and maximum period, to obtain the best possible results. Our optimization problem is based on the following fitness function: Maximize f(Ω) = H(Ω) + T(Ω), where 0 ≤ H(Ω) ≤ m and 1 ≤ T(Ω) ≤ℓ. §.§ Crossover Let Ω_I = (ω_0, ω_1, …, ω_2^m-1) be the set of pseudo-random numbers generated during the initialization phase. The goal of the crossover operator is to replace elements in Ω_I with those that result in a higher fitness value. In other words, elements with lower fitness values are replaced with those having comparatively higher fitness values. More concretely, the crossover operation is carried out as follows: i) Let P_e be a random permutation of the integers in the range [1, 2^m]. ii) Let V = (v_1, v_2, …, v_2^m) be a vector of 2^m elements randomly selected from Ω_I. iii) Define Ω_C as the sequence obtained by replacing the v_i-th element of Ω_I with the i-th element of P_e, i.e., ω_i^C = ω_P_e(i) if i = v_j for some j, and ω_i^C = ω_i otherwise.
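To make the fitness function and the crossover acceptance rule concrete, the following Python sketch implements them for a symbol sequence. It is an illustration only, not the authors' MATLAB implementation; in particular, interpreting the randomly selected v_i as distinct positions in Ω_I and shifting the permutation range to [0, 2^m - 1] are assumptions made for the sketch, and the quadratic period computation is intended only for small examples.

    import math
    import random

    def entropy(seq):
        # Shannon entropy H(Omega) over the symbols actually occurring in seq
        counts = {}
        for s in seq:
            counts[s] = counts.get(s, 0) + 1
        n = len(seq)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def period(seq):
        # least T >= 1 with seq[i + T] == seq[i] wherever both are defined; len(seq) if none
        ell = len(seq)
        for T in range(1, ell):
            if all(seq[i] == seq[i + T] for i in range(ell - T)):
                return T
        return ell

    def fitness(seq):
        # multi-objective fitness f(Omega) = H(Omega) + T(Omega)
        return entropy(seq) + period(seq)

    def crossover(omega_i, m=8, rng=random):
        # Write a random permutation of the 2^m symbols into 2^m distinct positions
        # (assumes len(omega_i) >= 2^m), then keep the child only if neither entropy
        # nor period decreases.
        child = list(omega_i)
        perm = list(range(2 ** m))          # symbol set [0, 2^m - 1]
        rng.shuffle(perm)
        positions = rng.sample(range(len(child)), 2 ** m)
        for i, v in enumerate(positions):
            child[v] = perm[i]
        if entropy(child) >= entropy(omega_i) and period(child) >= period(omega_i):
            return child
        return list(omega_i)

Because the full permutation is written into the sequence, every symbol of [0, 2^m - 1] is guaranteed to appear in an accepted child, which is the property the crossover step is designed to provide.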
The purpose of this operation is to increase the diversity of the sequence and improve the chances of finding a globally optimal solution. Specifically, the crossover operation ensures that all integers in the range [0, 2^m-1] are present in Ω_C. To evaluate the quality of the new sequence Ω_C, we compute its entropy H(Ω_C) and period T(Ω_C). If H(Ω_C) ≥ H(Ω_I) and T(Ω_C) ≥ T(Ω_I), then we consider Ω_C as input for the mutation operation. Otherwise, if H(Ω_C) < H(Ω_I) or T(Ω_C) < T(Ω_I), we use Ω_I as input to the mutation phase. §.§ Mutation Let Ω_M be a sequence of length ℓ obtained after crossover operation, and let r and r' be two integers randomly selected from the interval [1,ℓ]. The swapping mutation operator μ_s can be defined as: μ_s(Ω_M,r,r') = Ω_M', where Ω_M' is the sequence obtained after swapping the element at position r with the element at position r' in Ω_M. Since the entropy of a sequence is not affected by the swapping operation, therefore in the mutation phase we only compute the period of the obtained sequence. If T(Ω_M') ≥ T(Ω_M), then Ω_M' is selected for the next generation, otherwise, Ω_M is retained for the next generation. §.§ Termination The stopping criteria of any optimization algorithm depend primarily on two significant factors. The first factor is the number of generations or iterations, where if an algorithm attains the pre-defined number of generations, it terminates. The second factor is based on the optimal solution, where an algorithm terminates if it achieves the optimal solution to the fitness function. It is essential to note that the number of generations alone cannot serve as a feasible parameter to stop an algorithm since it may terminate without any improvement. As the objective is to generate highly random pseudo-random number sequences, therefore the proposed algorithm employs the optimal solution to the problem as the termination condition. In other words, the algorithm stops when the required sequence attains optimal entropy and optimal period, producing a highly random sequence that is well-suited for cryptographic applications. Thus, the PRNS obtained after the termination phase represents the optimized sequence of pseudo-random numbers characterized by optimal entropy and period. We denote the optimized PRNS by Ω_Z. § SECURITY ANALYSIS AND COMPARISON This section evaluates and compares the effectiveness of the ECGA through extensive experiments and security evaluations, which involve a variety of tests such as randomness analysis, entropy analysis, period analysis, Hurst exponent analysis, correlation analysis, key sensitivity analysis, and key space analysis. Additionally, we assess the efficiency of the ECGA in two different ways: 1) by comparing the initially generated sequences with their optimized sequences, and 2) by comparing the ECGA with the state-of-the-art generators <cit.>. For our experiments, we used MATLAB R2022b on a machine with an Intel Core m3-7Y30 @1.61 GHz and 8 GB of RAM. We generated 100 random sequences by selecting parameters at random, and the selected parameters are listed in <ref>. We used two elliptic curves recommended by NIST <cit.>, namely E_p,a,b and E_p',a',b' with prime numbers of 256 bits and 521 bits, respectively. The parameters for these elliptic curves are provided in <ref>. In addition, we used four different standard images with three distinct dimensions and five sets of integer-based triplets to generate these sequences. 
Our approach generates these 100 sequences by modifying one of four parameters, namely (a) the elliptic curve, (b) the plain-image, (c) the size of the plain-image, and (d) the triplet (ϕ, ψ, φ), while keeping the others constant.

§.§ Randomness analysis
The NIST 800-22 test suite <cit.> is widely recognized as a suitable tool for evaluating the randomness of binary sequences. The suite consists of 15 tests and 174 sub-tests and requires, in general, at least 1 million bits to assess the randomness of a sequence. The suite calculates a p_value for each sequence, and if p_value ≥ λ (or p_value < λ), the sequence is considered random (or non-random), where λ is a predefined threshold known as the significance level. For cryptographic purposes, λ is usually set between 0.001 and 0.01 <cit.>. Moreover, the proportion range of (1-λ) ± 3 √(λ (1-λ)/N) is considered acceptable, where N ≥ 1/λ indicates the sample size (number of sequences). The NIST suite and its corresponding parameters are listed in <ref>, while a brief explanation of the suite can be found in <cit.>. To assess the randomness of the ECGA, we tested numerous optimized random sequences generated by our generator using all the tests in the NIST 800-22 test suite. For the experiments, we used a significance level λ = 0.01, a sample size N ∈ { 100, 800 }, and a sequence length of n = 10^6 bits. We converted 100 resultant sequences generated by the ECGA, based on the randomly selected parameters listed in <ref>, into their binary representation for the NIST analysis. The values of each generated sequence lie in the range [0, 2^8-1], resulting in a total of (8 × 100 × 10^6) bits for the evaluation of the NIST test suite. We performed NIST tests on two data samples: first, by taking the first (1 × 10^8) bits out of the total (8 × 10^8) bits with N = 100, and then by taking the total (8 × 10^8) bits with N = 800. The results are presented in <ref>. They indicate that for N=100, p_value ≥ 0.01 for all tests except the block frequency test, for which p_value = 0.006, very close to the acceptable value of 0.01. Additionally, for N=100, the proportion for all tests is greater than the lower bound of the acceptable proportion, which is 0.96. Moreover, for N=800, p_value ≥ 0.01 and proportion ≥ 0.97 for each test included in the NIST test suite, where 0.97 is the lower bound of the acceptable proportion for a sample size of 800. As a result, the ECGA passed all tests and is capable of generating highly random sequences. We compared the results of the ECGA with the state-of-the-art generators <cit.> using the NIST test suite; the results are presented in <ref>, <ref>, and <ref> and are summarized as follows. 1) The results listed in <ref> are based on 100 different random sequences, each of length 10^6 bits. The ECGA passed all 100 sequences, attaining a proportion of 1 for six different tests. On the other hand, generators <cit.>, <cit.>, and <cit.> attained proportions of 1 for three, five, and two tests, respectively. The ECGA also satisfied the acceptable proportion value of 0.96 for all the tests listed in <ref>. However, the generator <cit.> failed the Random Excursions Test (RET) with a proportion of 0.72. The main objective of RET is to analyze the occurrence of a specific number of visits in a cumulative sum random walk. The purpose of conducting this test is to assess whether the number of visits to a particular state within a cycle deviates from the expected frequency for a random sequence.
The test comprises a total of eight individual tests, each focusing on one specific state: -4, -3, -2, -1, +1, +2, +3, and +4. Thus, the ECGA not only passed all the tests but also attained the highest proportion for the maximum number of tests compared to the generators in <cit.>. 2) <ref> compares the results based on eight distinct sequences, each of length 10^6 bits. The ECGA passed the p_value criterion for each test, while generator <cit.> failed the Non-Overlapping Template Matching Test with a p_value of 0. Furthermore, our generator achieved p_value ≥ 0.9 for five tests, compared to only one test for generator <cit.>. Hence, our generator outperforms <cit.>. 3) The comparison results of 30 random sequences with a total of (30 × 10^6) bits are illustrated in <ref>. Out of 41 tests, the ECGA attained a proportion value of 1 for 37 tests, while the generator <cit.> attained a proportion of 1 for 30 tests. Thus, the ECGA passed more tests with a proportion rate of 1 when compared with the generator <cit.>. Overall, the ECGA performed better than other existing PRNGs based on the NIST statistical test suite: it passed all the tests with the highest proportion rate for the maximum number of tests when compared to the generators <cit.>. Therefore, it can be concluded that the ECGA is suitable for various applications that require randomness and unpredictability.

§.§ Entropy analysis
The degree of uncertainty in a PRNG can be measured by its information entropy <cit.>, which is typically expressed in bits. A good PRNG should produce sequences with high entropy, meaning a greater degree of uncertainty. In this study, 100 sequences of length 10^6 with values ranging from 0 to 2^8-1 were generated and their entropy was calculated. The optimal entropy, denoted by H_max, is 8 in our case. The entropy of the 100 sequences before optimization ranged between 6.7702 and 7.1084, with an average of 6.9305. However, after optimization, all 100 sequences achieved the maximum entropy of 8, as demonstrated in <ref> and <ref>. These findings indicate that the ECGA can be very useful for cryptographic purposes, as it significantly increased the entropy of the generated sequences. Furthermore, the results listed in <ref> demonstrate that the ECGA outperformed several state-of-the-art PRNGs <cit.> in generating sequences with optimal entropy.

§.§ Period analysis
To ensure that a PRNG generates a sequence that is sufficiently random and has a long enough period for its intended use, it is crucial to perform the period test <cit.> on it. The period is a significant attribute of a sequence generated by a PRNG, as it indicates the length of the shortest repeating cycle present in the sequence, if one exists. Specifically, it is the smallest positive integer T for which the k-th element of the sequence matches the (k+T)-th element for all k ≥ 0. If the period of a sequence is equal to its length, then it is the optimal period of the sequence, denoted by T_max. We evaluated the effectiveness of the ECGA by generating 100 random sequences, each containing 10^6 numbers, and analyzed the strength of the sequences by measuring their period both before and after optimization; our goal is to determine how much the period increased after optimization. The results are presented in <ref>. Our findings indicate that, before optimization, the sequences have periods ranging from 192 to 998712, while after optimization, all sequences have an optimal period of 10^6.
This suggests that our optimization algorithm has significantly increased the period of the random sequences, making them suitable for cryptographic applications. In other words, all the generated sequences have an optimal period and can be used for secure communication.

§.§ Hurst exponent
The Hurst exponent <cit.> is a statistical test that determines the trend in data. It is denoted as H_E and falls between 0 and 1. There are three possible scenarios when calculating H_E: 1) If H_E=0.5, the data is random or independent, meaning there is no correlation between the current and previous values. 2) If 0.5 < H_E ≤ 1, the data is persistent, meaning that if there is an increasing trend in the values, the next values will likely follow the increasing trend. 3) If 0 ≤ H_E < 0.5, the data is anti-persistent, meaning that if there is an increasing trend in the values, the next values will likely follow a decreasing trend. A value closer to 0.5 indicates more random data, so a good PRNG should have an H_E value close to 0.5. The rescaled range (R/S) analysis <cit.> is the most commonly used method to compute H_E. We have calculated the Hurst exponent H_E using the R/S method for 100 sequences, each of length 10^6. The results are presented in <ref> and shown in <ref>. They indicate that before optimization, H_E ranged from 0.0665 to 0.2900, while after optimization, it ranged from 0.4916 to 0.5410. This demonstrates that after optimization, all generated sequences have H_E values very close to the ideal value of 0.5, indicating that the sequences are highly random. We have also shown the Hurst plots of the first five sequences in <ref>. We also compared the Hurst exponent of the ECGA with the state-of-the-art generators <cit.>. The results are presented in <ref>, which shows that the ECGA H_E values are closer to the ideal value of 0.5 than those of the generators <cit.>. Thus, the ECGA generates highly random sequences when compared with the generators described in <cit.>.

§.§ Correlation analysis
The correlation coefficient <cit.> is a crucial measure for determining the level of similarity between two random sequences of the same length. Given two random sequences, Ω={ω_j}_j=1^l and Ω'={ω_j'}_j=1^l, with length l, the correlation coefficient R is calculated using the following formula: R(Ω, Ω') = ∑_j=1^l (ω_j - Ω̅)(ω_j' - Ω̅') / ( √(∑_j=1^l (ω_j - Ω̅)^2) √(∑_j=1^l (ω_j' - Ω̅')^2) ). Here, Ω̅ and Ω̅' represent the means of Ω and Ω', respectively. The resulting R(Ω, Ω') is a value between -1 and 1. When R(Ω, Ω') is close to 0, the sequences are considered independent, whereas a value of 1 or -1 indicates a high level of dependency. We calculated R(Ω_i, Ω_j') for the 100 sequences we generated, where i,j ∈ { 1, 2, …, 100 } and i ≠ j. From our experiments, the minimum and average values of R over all pairs i ≠ j of the optimized generated sequences are 0 and 0.0638, respectively. Since the average value is very close to the ideal value of 0, we conclude that the ECGA can generate highly independent sequences.

§.§ Key sensitivity analysis
Key sensitivity analysis enables the study of how minor changes to the input parameters or initial conditions can cause changes in the resulting output. If a PRNG has high sensitivity, even a slight change in the input can result in a significant difference in the output. This implies that a PRNG should exhibit a high level of sensitivity, even at the single-bit level <cit.>.
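To illustrate what such a single-bit check can look like in practice, the following Python fragment is a minimal, hypothetical sketch: `generate` stands for the full ECGA pipeline (not shown here), the parameter dictionary and the bit flip are ours, and the correlation coefficient R defined above is used as the comparison metric.

```python
import numpy as np

def single_bit_sensitivity(generate, params, key):
    # generate(params) -> list of m-bit integers (the full ECGA pipeline, assumed given)
    omega = np.asarray(generate(params), dtype=float)
    perturbed = dict(params)
    perturbed[key] ^= 1                       # flip the least significant bit of one integer-valued input
    omega_prime = np.asarray(generate(perturbed), dtype=float)
    # R(omega, omega') close to 0 indicates high key sensitivity
    return np.corrcoef(omega, omega_prime)[0, 1]
```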
To demonstrate the high sensitivity of the ECGA, we conducted an experiment in which we slightly altered the parameters of the ECGA. For our purpose, we generated two sequences, Ω and Ω', which represent the original sequence and a slightly varied sequence, respectively, to analyze the sensitivity of the ECGA. A sequence Ω is generated using: the parameters of EC E_p,a,b and the plain-image (a) which are defined in <ref>, r × s = 256 × 256, and (ϕ, ψ, φ) = (25,73,121). Subsequently, slight modifications to E_p,a,b, PI, r × s, and the triplet (ϕ,ψ,φ) resulted in the generation of four distinct sequences: Ω_E_p',a',b'', Ω_PI'', Ω_r' × s'', and Ω_(ϕ', ψ', φ')', respectively. The values of E_p',a',b' are listed in <ref>, while PI' represents the plain-image (b) shown in <ref>. The dimensions of r' × s' were set to 512 × 512, and the triplet (ϕ',ψ',φ') was set to (123,33,77). We analyzed the sensitivity of the ECGA using three different methods: 1) by graphical representation; 2) by computing the number of bit change rate (NBCR); and 3) by computing the correlation coefficient. §.§.§ Graphical representation We analyzed the sensitivity of the ECGA by visually displaying both the original sequence Ω and a slightly altered version of it, denoted as Ω'. The impact of various parameters is investigated and is presented in <ref>. Specifically, <ref>(a) depicts the effects of varying the parameters of the triplet (ϕ, ψ, φ), while <ref>(b) shows the impact of changing the parameters of EC. <ref>(c) and <ref>(d) demonstrate the effects of modifying the parameters PI and the size r × s, respectively. As illustrated in <ref>, a slight modification in any of the parameters E_p',a',b', PI', r' × s', and (ϕ', ψ', φ') resulted in a distinct sequence Ω' that differed from the original sequence Ω. Hence, it can be concluded that the ECGA is highly sensitive to input parameters. §.§.§ Number of bit change rate The number of bit change rate (NBCR) <cit.> is a common measure used to evaluate the sensitivity of a PRNG. To calculate NBCR for two randomly generated sequences Ω and Ω', we use the following equation: NBCR(Ω, Ω')= d_H(Ω,Ω')/n, where, d_H denotes the Hamming distance between Ω and Ω', and n represents the total number of bits in either sequence. An ideal value of NBCR is 50%, which means that the closer the value of NBCR is to 50%, the more sensitive the algorithm is. We have computed the NBCR of the original sequence Ω and a slightly changed sequence Ω' and the results are listed in <ref>. These results depict that the value of NBCR is very close to the optimal NBCR 50%, which indicates that the ECGA is highly sensitive to the input parameters and thus applicable for security applications. We also examined the NBCR of the proposed sequences both before and after the optimization process. The results, as shown in <ref>, indicate that NBCR values are within the intervals of [49.04, 51.19] and [49.99, 50.04] for the sequences before and after optimization, respectively. Additionally, we compared the NBCR of the ECGA with other PRNGs <cit.>, in terms of NBCR. <ref> shows that the NBCR value of the ECGA is identical to the optimal value of 50%, while the NBCR of the other PRNGs is closer to 50%. Therefore, the ECGA is more sensitive to input parameters when compared to the generators <cit.>. 
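Since the NBCR defined above is simply the Hamming distance between the two bit streams divided by the total number of bits, it can be computed in a few lines. The following Python fragment is an illustrative sketch of ours (not part of the ECGA implementation); it assumes the two sequences are given as equal-length lists of m-bit integers.

```python
def nbcr(seq_a, seq_b, m=8):
    # NBCR(A, B) = Hamming distance of the bit streams / total number of bits
    assert len(seq_a) == len(seq_b)
    differing_bits = sum(bin(a ^ b).count("1") for a, b in zip(seq_a, seq_b))
    total_bits = m * len(seq_a)
    return 100.0 * differing_bits / total_bits   # as a percentage; ideal value is 50%
```

Applied to Ω and a sequence Ω' produced after a one-bit change in any input parameter, values close to 50% indicate the desired sensitivity.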
§.§.§ Correlation coefficient
The sensitivity of the ECGA is also evaluated by calculating the correlation coefficient R between two sequences, Ω and Ω', where Ω represents the original sequence and Ω' a slightly modified version of it. To ensure high sensitivity, R(Ω, Ω') should be close to 0. We analyzed the results of R(Ω, Ω_i'), where i ranges over the set { E_p',a',b', PI', r' × s', (ϕ', ψ', φ') }, and present them in <ref>. The results show that the R(Ω, Ω') values lie in the ranges [0.0005, 0.1392] and [0.0004, 0.0017] before and after optimization, respectively. These results indicate that our designed generator is highly sensitive to its parameters, as both before and after optimization the R values are very close to 0. Furthermore, there is a significant improvement in the R values after the optimization process. Moreover, we conducted a comparison between the sensitivity of the ECGA and the state-of-the-art generators <cit.>, using the correlation coefficient as the metric. The outcomes of this comparison are shown in <ref>. The findings in <ref> indicate that the ECGA is more sensitive than the PRNGs <cit.>.

§.§ Key space analysis
The evaluation of the security of a cryptographic algorithm is closely linked to the concept of key space. When a PRNG is used for cryptographic purposes, it is essential to analyze its key space, which is the set of possible keys that can be used to generate a sequence. A larger key space makes the algorithm more resistant to exhaustive attacks, thereby improving its resistance to cryptanalysis. To ensure the security of a cryptosystem, it is recommended that the key space be at least 2^128 <cit.>. The ECGA is based on several parameters, including I, E_p,a,b, B_z, ϕ, ψ, and φ, as described in <ref>. The SHA-256 hash codes of I, p, a, and b are also utilized. Additionally, the random sequence B_z has at least 256 bits, and the parameters ϕ, ψ, and φ range from 0 to 255. As a result, the key space of the ECGA is at least 2^(256 × 5) = 2^1280. In comparison with the recommended key space of 2^128, this is a very significant increase, indicating that the ECGA is capable of withstanding modern cryptanalysis due to its extremely large key space. We conducted a comparative analysis of the key space of the ECGA with that of the existing state-of-the-art generators <cit.>. The findings of our analysis are presented in <ref>. The results indicate that the ECGA has a superior key space in comparison to the PRNGs developed in <cit.>.

§ CONCLUSION
We presented a novel method, ECGA, for the construction of an image-dependent pseudo-random number generator (IDPRNG) specifically for image-cryptographic applications. We addressed the limitations of traditional PRNGs by using a multi-objective genetic algorithm (MOGA) optimization method and integrating elliptic curves into our approach. The ECGA comprises two key phases. During the initial phase, we utilize pixels from the image along with the parameters of the elliptic curve to generate an initial sequence of random numbers. During the second phase, a genetic algorithm is utilized to enhance the generated sequence by maximizing a fitness function that is based on both the information entropy and the period of the pseudo-random sequence. We conducted thorough experiments and security evaluations to assess the performance of the ECGA.
These evaluations covered a wide range of tests, including randomness analysis, entropy analysis, period analysis, correlation analysis, Hurst exponent analysis, key sensitivity analysis, and key space analysis. Furthermore, we have compared the ECGA with the existing state-of-the-art generators, from which it is evident that the ECGA: exhibits superior performance relative to other state-of-the-art generators <cit.> as per the NIST statistical test suite; outperforms several state-of-the-art generators <cit.> in generating sequences with optimal entropy; produces better Hurst exponent results that are closer to the ideal value of 0.5 when compared with the generators <cit.>; is more sensitive to input parameters than the generators <cit.>; and has a superior key space in comparison to the generators <cit.>. Future work could explore further optimizations, evaluate the performance on large-scale data sets, and investigate the applicability of the IDPRNG in other domains requiring secure and unpredictable random number generation.
Lagrangian multiforms on coadjoint orbits for finite-dimensional integrable systems

Vincent Caudrelier^1, Marta Dell'Atti^2, Anup Anand Singh^1
^1 School of Mathematics, University of Leeds, UK
^2 School of Mathematics and Physics, University of Portsmouth, UK

Lagrangian multiforms provide a variational framework to describe integrable hierarchies. The case of Lagrangian 1-forms covers finite-dimensional integrable systems. We use the theory of Lie dialgebras introduced by Semenov-Tian-Shansky to construct a general Lagrangian 1-form. Given a Lie dialgebra associated with a Lie algebra 𝔤 and a collection H_k, k=1,…,N, of invariant functions on 𝔤^*, we give a formula for a Lagrangian multiform describing the commuting flows for H_k on a coadjoint orbit in 𝔤^*. We show that the Euler-Lagrange equations for our multiform produce the set of compatible equations in Lax form associated with the underlying r-matrix of the Lie dialgebra. We establish a structural result which relates the closure relation for our multiform to the Poisson involutivity of the Hamiltonians H_k and the so-called "double zero" on the Euler-Lagrange equations. Lie dialgebras are related to Lie bialgebras but are more flexible in that they incorporate the case of non-skew-symmetric r-matrices. The Adler-Kostant-Symes (AKS) scheme is a particular case of the Lie dialgebra construction. We illustrate these points and our construction with the open Toda chain and the rational Gaudin model. The open Toda chain is constructed using two different Lie dialgebra structures on 𝔰𝔩(N+1). The first one possesses a non-skew-symmetric r-matrix and falls within the AKS scheme. The second one possesses a skew-symmetric r-matrix. In both cases, the connection with the well-known descriptions of the chain in Flaschka and canonical coordinates is provided.

§ INTRODUCTION

The concept of Lagrangian multiforms was introduced in <cit.> with the objective of providing a variational criterion of integrability. That work dealt with discrete integrable systems for which the accepted criterion for integrability was the idea of multidimensional consistency <cit.>, which can be viewed as the discrete version of the property of commuting (Hamiltonian) flows for a dynamical system sitting in an integrable hierarchy.
It was proposed to introduce a generalised action and variational principle, involving a new object (a Lagrangian multiform), to capture purely variationally multidimensional consistency. This idea grew quickly, first in the discrete realm, see <cit.> and references therein. Over the last decade or so, the universality of this idea and its connections with more traditional features of integrability (Lax pair, Hamiltonian structures) has been illustrated in many other incarnations of integrable systems: continuous finite-dimensional systems <cit.>, continuous infinite-dimensional systems – field theories in 1+1 dimensions <cit.> and in 2+1 dimensions <cit.>– relations between discrete and continuous hierarchies <cit.> and semi-discrete systems <cit.>. The concept was even extended recently to non commuting flows in <cit.>. In general, a Lagrangian multiform is a d-form which is integrated over a hypersurface of dimension d in a so-called multi-time space of dimension greater than d to yield an action functional depending not only on the field configurations but also on the hypersurface. This last point in the main departure from a traditional action and principle of least action. One postulates a principle of least action which must be valid for any hypersurface embedded in the multi-time space. This is the postulate which captures the idea of commutativity of the flows and which was adopted as a definition of pluri-Lagrangians, see <cit.> and references therein. In Lagrangian multiform theory, there is an additional postulate, the closure relation which is the direct counterpart of Poisson involutivity of Hamiltonians, the Liouville criterion for integrability. The generalised variational principle produces equations that come in two flavours: 1) Euler-Lagrange equations associated with each of the coefficients of the Lagrangian multiform which form a collection of Lagrangian densities; 2) Corner or structure equations on the Lagrangian coefficients themselves which select possible models and ensure the compatibility of the various equations of motion imposed on a common set of fields. Classifying all possible Lagrangian multiforms along these lines would amount to classifying all integrable hierarchies. In practice, it is a nontrivial task to obtain all the Lagrangian coefficients of a multiform which produce compatible equations of motion. Beyond brute force calculations to solve the corner equations <cit.>, several works have used the idea of variational symmetries to achieve this goal <cit.>. This produces an algorithm to construct the Lagrangian coefficients one after the other from a given initial Lagrangian. Although perfectly fine in theory, this can become quickly unmanageable in practice and usually formulas for only a few Lagrangian coefficients are obtained. It also has the disadvantage of singling out some independent variables in the hierarchy which then appear as the so-called “alien derivatives” in the higher Lagrangian coefficients. More recently, another approach was introduced which takes a more global view on a hierarchy and provides an efficient way of describing all the Lagrangian coefficients in one formula <cit.>, see also <cit.>. A key insight in <cit.> was the incorporation in the Lagrangian multiform of key ingredients known in the Hamiltonian framework for integrable hierarchies, in particular the classical r-matrix, as well as the “compounding” of hierarchies following <cit.>. 
This paper draws and expands upon this insight and is concerned with Lagrangian 1-forms which allow one to treat integrable hierarchies of finite-dimensional systems. Specifically, we show how the theory of Lie dialgebras<cit.> can be used to construct systematically a Lagrangian multiform for any finite-dimensional system which falls within the Lie dialgebra framework. The latter incorporates and generalises the perhaps more well-known Adler-Kostant-Symes scheme <cit.>. In terms of versatility, this goes beyond the results of <cit.> which were confined to skew-symmetric classical r-matrices. The Lie dialgebra framework can easily accommodate the non-skew-symmetric case. For conciseness, we only illustrate this versatility and our construction on two famous models: the open Toda chain and the (rational) Gaudin model. However, the construction can in principle cover a much larger range of models which falls into the r-matrix scheme, see <cit.> for a description of many such systems including classical tops. To our knowledge, only one instance of a Lagrangian description of the AKS scheme has been proposed before in <cit.>. Compared to the present paper, <cit.> is limited to the AKS scheme and provides only one Lagrangian corresponding to the quadratic Hamiltonian L^2/2 (the idea of Lagrangian multiforms was not yet available at that time). Technically, the approach is different: the authors formulate a Lagrangian on the tangent bundle of a Lie group and use Lagrange multipliers to effectively enforce the motion on a coadjoint orbit. Our main results are: * The definition (<ref>)-(<ref>) of a general Lagrangian multiform from the data of a Lie dialgebra and the proof that its multi-time Euler-Lagrange equations produce a hierarchy of compatible equations in Lax form, Theorem <ref>. * For this general Lagrangian multiform, the derivation of an identity relating its closure relation, its Euler-Lagrange equations and the Poisson involutivity of associated Hamiltonians, Theorem <ref>. * Explicit Lagrangian multiforms for the open Toda chain and the rational Gaudin model. The paper is organised as follows. In Section <ref>, we briefly review the notions of Lagrangian multiforms and Lie dialgebras that we need. Section <ref> introduces the general Lagrangian multiform and contains our two main results, Theorems <ref> and <ref>. In Section <ref>, we illustrate the construction for the open Toda chain associated with a Lie dialgebra via a non-skew-symmetric r-matrix. We present explicit expressions for the Lagrangian coefficients and relate our results to the well-known formulations of the Toda chain in Flaschka and canonical coordinates. In Section <ref>, the same open Toda chain is used to illustrate our construction in the case of a skew-symmetric r-matrix. We also relate our results to the description in Flaschka and canonical coordinates. Section <ref> is concerned with the rational Gaudin model and is the opportunity for us to show how our Lagrangian multiform operates in the case of an infinite-dimensional Lie algebra which accounts for the presence of a spectral parameter in the Lax matrices. Although it deals with a finite-dimensional Gaudin model, this section bears a lot of similarities with the framework introduced in <cit.> for integrable field theories. We end with concluding remarks in Section <ref>. 
§ BACKGROUND MATERIAL §.§ Lagrangian 1-forms We review in more details the notion of Lagrangian multiforms that we need, restricting our attention to Lagrangian 1-forms since our aim is to describe integrable hierarchies of finite-dimensional systems. The basic object is a Lagrangian 1-form [q]=∑_k=1^N _k[q] dt_k and the related generalised action S[q,Γ]=∫_Γ[q] where Γ is a curve in the multi-time ^N with (time) coordinates t_1,…,t_N and q denotes generic configuration coordinates. For instance q could be a position vector in ^d for some d, or as will be the case for us, an element of a (matrix) Lie group. The notation [q] and _k[q] means that these quantities depends on q and a finite number of derivatives of q with respect to the times (t_1,…,t_N). In this paper, we restrict to the case of first derivatives only and simply write _k for the Lagrangian coefficients. The application of the generalised variational principle leads to the following multi-time Euler-Lagrange equations <cit.> ∂_k/∂ q-∂_t_k∂_k/∂ q_t_k=0 , ∂_k/∂ q_t_ℓ=0 , ℓ≠ k , ∂_k/∂ q_t_k=∂_ℓ/∂ q_t_ℓ , k,ℓ=1,…,N . Note that (<ref>) is simply the standard Euler-Lagrange equation for each _k. Condition (<ref>) states that the Lagrangian coefficient _k cannot depend on the velocities q_t_ℓ for ℓ≠ k. The last condition (<ref>) requires that the conjugate momentum to q be the same with respect to all times t_k. The closure relation then stipulates that d[q]=0  ⇔  ∂_t_k_j-∂_t_j_k=0 , on solutions of (<ref>)-(<ref>). §.§ Lie dialgebras and Lax equations Here, we collect facts from the theory of Lie dialgebras as defined in <cit.>, see also <cit.>. Proofs are omitted for brevity and the reader is referred to <cit.> for details. We emphasise that Lie dialgebras are different from the perhaps more familiar Lie bialgebras appearing in Drinfeld's theory of Poisson-Lie groups. Connections and differences between these two structures are discussed in <cit.> and <cit.>. Let be a matrix Lie algebra, with matrix Lie group G, and ^* its dual space. We have the usual (co)adjoint actions[For simplicity, we only work with matrix Lie algebras and corresponding Lie groups.] for all ξ∈^*, X,Y∈, g∈ G, ad_X· Y =[X,Y] , ( ad^*_X·ξ)(Y)=-ξ( ad_X· Y)=-ξ([X,Y]) , Ad_g· X =g X g^-1 , Ad^*_g·ξ(X)=ξ(Ad_g^-1· X) . The space ^* can be endowed with the Lie-Poisson bracket defined by {f,g}(ξ)= (ξ,[∇ f(ξ) , ∇ g(ξ)]) ,  f,g∈ C^∞(^*) , where we introduced the convenient notation (  , ) for the natural pairing between ^* and : ξ(X)=(ξ,X). The gradient ∇ f(ξ) is the element of defined from the differential δ f(ξ) by using the pairing δ f(ξ)(η)=lim_ϵ→ 0f(ξ+ϵη)-f(ξ)/ϵ=(η,∇ f(ξ)) . Introducing a basis {E_α} of and the dual basis {E^*_α} of ^* and coordinates functions ξ_α on ^* we find ∇ f(ξ)=∂ f/∂ξ_αE_α and the well-known coordinate form of the Lie-Poisson bracket {f,g}(ξ)=C_αβ^γ ξ_γ ∂ f/∂ξ_α∂ g/∂ξ_β where C_αβ^γ are the structure constants of . The Lie-Poisson bracket is degenerate in general and the Ad^*-invariant functions on ^* are the Casimir functions. Its symplectic leaves are the coadjoint orbits of G in ^*. The restriction to a coadjoint orbit gives rise to the Lie-Kostant-Kirillov-Souriau symplectic form ω_KK. Let R:→ be a linear map. It is a solution of the modified classical Yang-Baxter equation (mCYBE) if it satisfies [R(X),R(Y)]-R([R(X),Y]+[X,R(Y)])=-[X,Y] ,  ∀ X,Y∈ . 
By abuse of language, we will call a solution R of (<ref>) a (classical) r-matrix, in relation to the fact that with R one can associate r∈⊗ (which is what is traditionally called the r-matrix) when is equipped with a nondegenerate ad-invariant symmetric bilinear form ⟨  , ⟩ ( e.g. the Killing form when is a finite-dimensional semi-simple Lie algebra). A famous example of an r-matrix arises in the case where admits a direct sum decomposition (as a vector space) into two Lie subalgebras =_+⊕_- . Then, R=P_+-P_- is a solution of (<ref>), where P_± is the projector on _± along _∓. Given a solution R of the mCYBE, one can define on the vector space a second Lie bracket [X,Y]_R=1/2([R(X),Y]+[X,R(Y)]) . The corresponding Lie algebra is denoted by _R. We therefore have an adjoint action of _R on itself and a coadjoint action of _R on ^* ( and _R, being the same vector space, have the same dual space) ad^R_X· Y=[X,Y]_R , ∀ X,Y∈ , (ad^*^R_X·ξ)(Y)=-(ξ , ad^R_X· Y)=-(ξ , [X,Y]_R) . The algebraic significance of the mCYBE and of the second Lie bracket [  , ]_R is given by the following results which lead to essential factorisation properties underlying integrable systems. The key objects are the maps R_±=1/2(R± id) . Let _±= Im R_±. Then, * R_±:_R→ is a Lie algebra homomorphism: R_±([X,Y]_R)=[R_±(X),R_±(Y)] . In particular _±⊂ are Lie subalgebras of . * The mapping i_R:_R→_+⊕_-, i_R(X)=(R+(X),R_-(X)) is a Lie algebra embedding. Thus _R= Im i_R is a Lie subalgebra of _+⊕_- . * The composition of the maps i_R: _R  → _+⊕_- ,  X↦(R_+(X),R_-(X)) , followed by a:_+⊕_- →  ,  (X_+,X_-)↦ X_+-X_- , provides a unique decomposition of any element X∈ as X=R_+(X)-R_-(X). Note that R_+-R_-= id and [X,Y]_R= R_+([X,Y]_R)-R_-([X,Y]_R)=[R_+(X),R_+(Y)]-[R_-(X),R_-(Y)] . We can express the actions of _R in terms of those of . For convenience, we write X_±=R_±(X) for X∈. Then, ad^R_X· Y=1/2 ad_R(X)· Y+1/2 ad_X· R(Y)= ad_X_+· Y_+ - ad_X_-· Y_- , ad^*^R_X·ξ=1/2ad^*_R(X)·ξ+1/2R^*(ad^*_X·ξ)=R_+^*(ad^*_X_+·ξ)-R_-^*(ad^*_X_-·ξ) , where the adjoint A^*:^*→^* of a linear map A:→ is defined by (A^*(ξ),X)=(ξ,A(X)). The application of this framework to integrable systems hinges on the interplay between the two Lie-Poisson brackets one can define on ^*. Indeed, having a second Lie bracket, we can repeat the definition (<ref>) to obtain {f,g}_R(ξ)= (ξ,[∇ f(ξ) , ∇ g(ξ)]_R) . A similar conclusion holds: the symplectic leaves are coadjoint orbits of G_R, the Lie group of _R, in ^*. The restriction to a coadjoint orbit gives rise to the symplectic form which we denote by ω_R. It is the interplay between these two structures that provides integrable systems whose equations of motion take the form of a Lax equation. For this last part, one needs one more ingredient: an Ad-invariant nondegenerate bilinear symmetric form ⟨  , ⟩ on . It allows to identify ^* with and the coadjoint actions with the adjoint actions. Specifically, one has The ^*-invariant functions on ^* are in involution with respect to {  , }_R. The equation of motion d/dtL={L,H}_R induced by an ^*-invariant function H on ^* take the following equivalent forms, for an arbitrary L∈^*, d/dtL=^R*_∇ H(L)· L=1/2 ^*_R∇ H(L)· L=^*_R_±∇ H(L)· L . When there is an -invariant nondegenerate bilinear form ⟨  , ⟩ on so that we can identify ^* with and ^* with , the last equation takes the desired form of a Lax equation for L∈ d/dtL=[M_±,L] , M_±=R_±∇ H(L) . The proof can be found for instance in <cit.> and we only elaborate on certain points which will be useful for our purposes below. 
The crucial point is to exploit the Ad^*-invariance of the function H defining the time flow. The latter means that the following property holds ad^*_∇ H(ξ)·ξ = 0   ⇔   (ξ , [∇ H(ξ),X])=0 ∀ξ∈𝔤^* ,  ∀ X∈ . Thus, for any two Ad^*-invariant functions H_1 and H_2, {H_1,H_2}_R(ξ) = (ξ,[∇ H_1(ξ) , ∇ H_2(ξ)]_R) =1/2(ξ,[R∇ H_1(ξ) , ∇ H_2(ξ)]+[∇ H_1(ξ) , R∇ H_2(ξ)])=0 . For any function f on ^*, the time evolution associated with the Ad^*-invariant H with respect to the Poisson bracket {  , }_R is defined by d/dtf(L)={f,H}_R(L) , (d/dtL , ∇ f(L) ) = (L,[∇ f(L) , ∇ H(L)]_R)=-1/2 (L,[R∇ H(L) , ∇ f(L)]) =(ad^R*_∇ H(L)· L , ∇ f(L))=1/2(ad^*_R∇ H(L)· L , ∇ f(L)) . Finally, in view of (<ref>) and (<ref>), we have ad^*_R∇ H(L)· L=2 ad^*_R_±∇ H(L)· L . thus establishing the various equivalent forms of the equations in (<ref>) (by restricting f to be any of the coordinate functions on ^*). The involutivity property (<ref>) ensures that we can define compatible time flows associated with a family of Ad^*-invariant Hamiltonian functions H_k, k=1,…,N. If one can supply enough such independent functions, or work on a coadjoint orbit of low enough dimension, one obtains an integrable system described by an integrable hierarchy of equations in Lax form (again using the identification provided by ⟨  , ⟩) ∂_t_kL=[R_±∇ H_k(L),L] ,   k=1,…,N . The typical example of an invariant function H_k is given by H_k=1/k+1(L^k). For our purposes, the Lie groups associated with and _R will be important. We introduce G and G_R as the (connected, simply connected) Lie groups defined for and _R respectively. For simplicity, we only think of matrix groups in this paper. As a set, G and G_R are the same but the crucial difference lies in their multiplications induced by [  , ] and [  , ]_R respectively. The homomorphisms R_± give rise to Lie group homomorphisms (which we denote by the same symbols) and we obtain a factorisation at the group level. With g=e^X, X∈, we have R_± g=e^R_± X . Specifically, let G_±=R_±(G_R) be the subgroups of G corresponding to _±. The composition of the maps i_R: G_R → G_+ × G_- ,   g↦(R_+(g),R_-(g)) , followed by m:G_+× G_- → G ,   (g_+,g_-)↦ g_+ g_-^-1 , allows us to factorise uniquely an arbitrary element g∈ G (sufficiently close to the identity) as g=g_+ g_-^-1 ,  (g_+,g_-)∈G_R= Im i_R . An element g∈ G_R can be identified with its image (g_+,g_-)∈G_R⊆ G_+× G_- and the multiplication ·_R in G_R is most easily visualised using the homomorphism property i_R( g·_R h)=i_R( g)*i_R( h)=(g_+ h_+ , g_- h_-) where * is the direct product group structure of G_+× G_-. This is usually shortened to g·_R h= (g_+ h_+ , g_- h_-) . The group G_R acts on _R by the adjoint action and on ^* via the coadjoint action Ad^R_g· X =g·_R X·_R g^-1 , ∀ X∈_R ,  g∈ G_R , Ad^R*_g·ξ(X) =(ξ , Ad^R_g^-1· X) , ∀ g∈ G_R ,ξ∈^* ,  X∈_R . When writing using the suggestive notation g·_R X·_R g^-1 for the adjoint action, we tacitly view ·_R as an associative product on the matrix Lie algebra and its Lie group. Strictly speaking, this is not possible if R is a solution of (<ref>). It becomes possible for instance if is an associative algebra and we require R to be a solution of the associative Yang-Baxter equation R(X) R(Y)-R(R(X) Y+ X R(Y))+ X Y=0, see <cit.>. This implies that X·_R Y=1/2(R(X) Y+X R(Y) ) defines a second associative product on and allows us to view [X,Y]_R as the commutator X·_R Y-Y·_R X, in complete analogy with [X,Y]=X Y-Y X. 
We will assume that ·_R is such an associative product in the rest of this paper and use the consequences, e.g. [X,Y]_R=X·_R Y-Y·_R X. The following relations are most useful in the practical calculations of the examples discussed below. With g_±=R_± g, X_±=R_± X, g∈ G_R, X∈_R, Ad^R_g· X =g_+ X_+ g_+^-1-g_- X_- g_-^-1 , Ad^R*_g·ξ =R_+^*(Ad^*_g_+ξ)-R_-^*(Ad^*_g_-ξ) ,  ∀ξ∈^* . Thus, the dual space ^* hosts two coadjoint actions of G and G_R, as it does with the two coadjoint actions of the Lie algebras and _R. The last main result of this framework is known as the factorisation theorem, see e.g.<cit.>. Consider the system of compatible equations with given initial condition ∂_t_kL=^*_R_±∇ H_k(L)· L , k=1,…,N , L(0,…,0)=L_0∈^* . Denote (t_1,…,t_N)= t for conciseness. Let g_±( t) be the smooth curves in G_± which solves the factorisation problem e^-∑_k=1^N t_k∇ H_k(L_0)=g_+( t)^-1 g_-( t) , g_±( 0)=e . Then, the solution to the initial-value problem (<ref>) is given by L( t)=^*_g_+( t)· L_0=^*_g_-( t)· L_0 , and g_±( t) satisfy ∂_t_k g_±( t) =R_±∇ H_k(L( t)) g_±( t) . This result shows that the solution lies at the intersection of coadjoint orbits of G and G_R. Combined with the fact that the coadjoint orbits provide the natural symplectic manifolds associated with the corresponding Lie-Poisson bracket, this means that the natural arena to define our phase space, where L lives, is a coadjoint orbit of G_R in ^* O_Λ={Ad^R*_φ·Λ;φ∈ G_R} ,  for some  Λ∈^* . In the Lagrangian multiform theory, the prevalent idea is that one should think of an integrable system as an integrable hierarchy, in a way completely similar to the Hamiltonian integrable hierarchy we have just recalled. This leads us to work with the space where (t_1,…,t_N)= t lives: the multi-time space. Since the flows commute, the multi-time is simply (a subspace of) ℝ^N_1× (S^1)^N_2, N_1+N_2=N (in general we should allow for the possibility of having periodicity in some of the independent variables (t_1,…,t_N)). The generalisation to the case where the vector fields giving the flows no longer commute but still form a Lie algebra was considered in <cit.> and leads to the consideration of the multi-time being space a (non-abelian) Lie group. §.§ Extension to loop algebras and some special cases The essential results of the Lie dialgebra construction discussed above extend to the infinite-dimensional setting, e.g. the case of loop algebras[There are several subtleties related to duals in infinite dimensions and completions which we do not touch, keeping a less rigorous but more approachable exposition.]. The latter is relevant when one needs Lax matrices with spectral parameters. This is typically the case for integrable field theories but it can also be required for some finite-dimensional systems such as the closed Toda chain or Gaudin models. We will present the extension of the Lie dialgebra construction to this infinite-dimensional setting via the Gaudin example in Section <ref> and we refer the reader to <cit.> for more details. There are special cases of the Lie dialgebra framework that may be more familiar to the reader and will play a role in our examples below. They both arise when admits a direct sum decomposition (as a vector space) into two Lie subalgebras =_+⊕_- , and we take R=P_+-P_-, where P_± is the projector on _± along _∓. The decomposition of induces the decomposition ^*=_+^*⊕_-^* . Using a nondegenerate ad-invariant bilinear form on , we can identify _±^* with _∓^⊥. 
The first special case, which historically is at the origin of the so-called Adler-Kostant-Symes scheme <cit.> is obtained as follows. We fix Λ to be in _-^* and consider the coadjoint orbit of elements L=Ad^R*_φ·Λ. As a result, only the subgroup G_- in G_R≃ G_+× G_- plays a role since L=Ad^R*_φ·Λ=-R_-^*(Ad^*_φ_-·Λ) and the coadjoint orbit O_Λ lies in _-^*. This is the historic setup which can be used to formulate the open Toda chain in Flaschka coordinates. R is not skew-symmetric in this case. We will present this example in Section <ref> where details on our Lagrangian multiform for this model will be given. The second special case is a further specialisation where _± are isotropic with respect to ⟨  , ⟩, meaning ⟨_±,_±⟩=0 and implying that _±^* can be identified with _∓=_∓^⊥. This case can arise with loop algebras and will be discussed in Section <ref> in relation to the Gaudin model. In this case, R is skew-symmetric, ⟨ RX,Y⟩=-⟨ X,RY⟩ ,  ∀ X,Y∈ . Note that we will also illustrate the case where R is not defined from a decomposition into two subalgebras but rather from a decomposition into nilpotent and Cartan subalgebras. This different setup is accommodated without problems into Lie dialgebras. Interestingly, it can also be used to describe the same open Toda chain as in the AKS scheme and this will be illustrated in Section <ref>. The underlying algebraic structures are very different, though. In particular, R is skew-symmetric in this case while it is not in the AKS formulation, showing that the same Toda chain can arise from two distinct constructions. § LAGRANGIAN MULTIFORM ON A COADJOINT ORBIT §.§ The general Lagrangian multiform and its properties Recalling our comment about the coadjoint orbits of G_R in ^* being the natural arena for an integrable hierarchy, let us introduce the following Lagrangian 1-form [φ] = ∑_k=1^N _k dt_k = K[φ]- H[φ] with kinetic part K[φ] = ∑_k=1^N ( L , ∂_t_kφ·_R φ^-1 ) dt_k ,   L = Ad^R*_φ·Λ ,  φ∈ G_R , and potential part H[φ]=∑_k=1^N H_k(L) dt_k . The field φ∈ G_R contains the dynamical degrees of freedom and, as we will see, the Euler-Lagrange equation will take a natural form when expressed in terms of L = Ad^R*_φ·Λ. Λ is a fixed non-dynamical element of ^* which defines O_Λ, the phase space of the model. Each Lagrangian _k in the Lagrangian multiform has a structure comparable to the familiar Lagrangian pq̇-H in classical mechanics. The potential part is expressed in terms of Ad^*-invariant functions H_k ∈ C^∞(𝔤^*) and we suppose we have N of them[At this stage, we do not necessarily have that N is exactly half of the dimension of O_Λ. As in the AKS scheme, this needs to be addressed in specific cases by choosing a coadjoint orbit of appropriate dimension to ensure Liouville integrability. We will not worry about this for now as our construction follows through anyway.]. We emphasised that one important ingredient in producing equations of motion in Lax form from the coadjoint orbit construction is to use an Ad-invariant nondegenerate bilinear symmetric form ⟨  , ⟩ on to identify ^* with and the coadjoint action with the adjoint action. The reader could therefore wonder why we have written our Lagrangian multiform using the pairing (  , ), an element Λ∈^* and functions H_k on ^*. The point is that we found that it was less confusing to do so when deriving results in general and in examples, in order to identify correctly the subalgebras involved in the decomposition of and ^*. 
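A short check of the skew-symmetry claim in this second special case may be helpful (the verification is ours and uses only the isotropy of the subalgebras). Writing X=X_++X_- and Y=Y_++Y_- with X_±,Y_±∈𝔤_± and R=P_+-P_-, isotropy kills the terms ⟨ X_±,Y_±⟩, so that
⟨ RX,Y⟩=⟨ X_+-X_-,Y_++Y_-⟩=⟨ X_+,Y_-⟩-⟨ X_-,Y_+⟩ ,
⟨ X,RY⟩=⟨ X_++X_-,Y_+-Y_-⟩=-⟨ X_+,Y_-⟩+⟨ X_-,Y_+⟩=-⟨ RX,Y⟩ ,
which is precisely the skew-symmetry property stated above.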
However, we cannot stress enough that ultimately we always use the bilinear form ⟨  , ⟩ to make all the identifications and indeed obtain equations in Lax form, whether this is clearly mentioned or not. Hopefully, this understanding will make the exposition easier to follow. We can now formulate our first main result. The Lagrangian 1-form (<ref>) satisfies the corner equations (<ref>)-(<ref>) of the multi-time Euler-Lagrange equations. The standard Euler-Lagrange equations (<ref>) associated with the Lagrangian coefficients _k take the form of compatible Lax equations ∂_t_kL=[R_±∇ H_k(L),L] , k=1,…,N . The closure relation holds: on solutions of (<ref>) we have ∂_t_k_j-∂_t_j_k=0 , j,k=1,…,N . It is clear that each _k does not depend on ∂_t_ℓφ for ℓ≠ k so the corner equation (<ref>) is satisfied. To see that (<ref>) holds, it is convenient to introduce local coordinates ϕ_α, α=1,…,M, on the group G_R. The only source of dependence on velocities is in the kinetic term of _k. Now ( Ad^R*_φ·Λ , ∂_t_kφ·_R φ^-1 ) =( Λ , Ad^R_φ^-1·(∂_t_kφ·_R φ^-1)) = ( Λ , φ^-1·_R ∂_t_kφ) = ∑_α=1^M( Λ , φ^-1·_R ∂φ/∂ϕ_α )∂_t_kϕ_α≡∑_α=1^M π_α ∂_t_kϕ_α where we have introduced the momentum π_α=( Λ , φ^-1·_R ∂φ/∂ϕ_α ) conjugate to the field ϕ_α. Thus, ∂_k/∂(∂_t_kϕ_α)=π_α is independent of k. The remainder of the multi-time Euler Lagrange equations consists of the standard Euler-Lagrange equations for each _k. We compute δ_k = ( δ L , ∂_t_kφ·_R φ^-1 ) + ( L , δ (∂_t_kφ·_R φ^-1) )-δ H_k(L) , with[More rigorously, the notation δ L means the tangent vector to O_Λ at the point L induced by the element X∈_R which we write more suggestively as δφ·_Rφ^-1. The latter notation is closer to the more familiar one in the variational calculus using matrix-valued fields.] δ L=ad^R*_δφ·_Rφ^-1· L and δ H_k(L) =(δ L,∇ H_k(L) ) =-( L , [ δφ·_Rφ^-1,∇ H_k(L)]_R ) =1/2( L , [R∇ H_k(L) , δφ·_Rφ^-1] )=-1/2(ad^*_R∇ H_k(L)· L , δφ·_Rφ^-1) . So, δ_k = ( ad^R*_δφ·_Rφ^-1· L , ∂_t_kφ·_R φ^-1 ) + ( L , δ (∂_t_kφ) ·_R φ^-1 )   - ( L , ∂_t_kφ·_R φ^-1·_R δφ·_R φ^-1 )+1/2(ad^*_RdH_k(L)· L , δφ·_Rφ^-1) = ( ad^R*_δφ·_Rφ^-1· L , ∂_t_kφ·_R φ^-1 ) - ( ∂_t_kL , δ φ·_R φ^-1 ) + ( L , δφ·_R φ^-1·_R ∂_t_kφ·_R φ^-1 )    + ∂_t_k( L , δ φ·_R φ^-1 )- ( L , ∂_t_kφ·_R φ^-1·_R δφ·_R φ^-1 )+1/2(ad^*_R∇ H_k(L)· L , δφ·_Rφ^-1) = ( ad^R*_δφ·_Rφ^-1· L , ∂_t_kφ·_R φ^-1 ) - ( ad^R*_∂_t_kφ·_Rφ^-1· L , δ φ·_R φ^-1 )   + ( L , [δφ·_R φ^-1 , ∂_t_kφ·_R φ^-1]_R )+1/2(ad^*_R∇ H_k(L)· L , δφ·_Rφ^-1)+∂_t_k( L , δ φ·_R φ^-1 ). The first and third term cancel each other. In the second term we recognise ad^R*_∂_t_kφ·_Rφ^-1· L=∂_t_k L. Hence, δ_k= ( -∂_t_k L+1/2ad^*_R∇ H_k(L)· L , δ φ·_R φ^-1 ) +∂_t_k( L , δ φ·_R φ^-1 ) and we obtain the Euler-Lagrange equation for each _k as ∂_t_k L= 1/2 ad^*_R∇ H_k(L)· L . Now recall that 1/2 ad^*_R∇ H_k(L)· L= ad^*_R_±∇ H_k(L)· L and that, with being equipped with an Ad-invariant nondegenerate bilinear form, ad^*_R_±∇ H_k(L)· L is identified with [R_±∇ H_k(L), L]. Thus, we have obtained (<ref>) variationally as desired. That this set of equations is compatible follows from the commutativity of the flows which is a consequence of the mCYBE and the Ad-invariance of H_k as we now show. Going back to having L∈^* and evaluating its derivatives on a fixed but arbitrary X∈, we have (∂_t_k∂_t_jL)(X) =-1/2∂_t_k( L , [R∇ H_j(L),X] ) =1/4( L , [R∇ H_k(L) , [R∇ H_j(L) , X]] )-1/4( L , [R [R∇ H_k(L) , ∇ H_j(L)] , X] ) . 
Hence, using the Jacobi identity ([∂_t_k , ∂_t_j]L)(X) =1/4( L , [[R∇ H_k(L) , R∇ H_j(L)] , X])   -1/4(L , R ([R∇ H_k(L) , ∇ H_j(L)]+[∇ H_k(L) , R∇ H_j(L)]) , X] ) =-1/4( L , [[∇ H_k(L) , ∇ H_j(L)] , X])=0 where we use the mCYBE in the second equality and property (<ref>) in the last step. We now establish the closure relation, d=0 on shell. It turns out that the kinetic and potential contributions vanish separately. We have ∂_t_j_k-∂_t_k_j = ∂_t_j( L , ∂_t_kφ·_R φ^-1 )-∂_t_k( L , ∂_t_jφ·_R φ^-1 ) -∂_t_jH_k(L)+∂_t_kH_j(L) . Now, using (<ref>), we find ∂_t_jH_k(L)=(∂_t_j L , ∇ H_k(L))=-1/2 ( L , [R∇ H_j(L) , ∇ H_k(L)])=0. Thus, it is a direct consequence of the Ad^*-invariance of H that the potential contribution to d is zero on shell. We are now left with just the kinetic terms which can be rewritten as ( ∂_t_j L , ∂_t_kφ·_R φ^-1 )- ( ∂_t_k L , ∂_t_jφ·_R φ^-1 )+ ( L , ∂_t_j(∂_t_kφ·_R φ^-1 ) )-( L , ∂_t_k(∂_t_jφ·_R φ^-1 ) ) = ( ∂_t_j L , ∂_t_kφ·_R φ^-1 )- ( ∂_t_k L , ∂_t_jφ·_R φ^-1 )+ ( L , ∂_t_j∂_t_kφ·_R φ^-1 - ∂_t_k∂_t_jφ·_R φ^-1 )   + ( L , ∂_t_kφ·_R ∂_t_jφ^-1 - ∂_t_jφ·_R ∂_t_kφ^-1 ). From the commutativity of flows, we have ∂_t_j∂_t_kφ - ∂_t_k∂_t_jφ = 0, which leaves us with ( ∂_t_j L , ∂_t_kφ·_R φ^-1 )- ( ∂_t_k L , ∂_t_jφ·_R φ^-1 )+ ( L , ∂_t_kφ·_R ∂_t_jφ^-1 - ∂_t_jφ·_R ∂_t_kφ^-1 ). The on-shell relation ∂_t_j L= 1/2 ad^*_R∇ H_j(L)· L allows us to express the first term as ( ∂_t_j L , ∂_t_kφ·_R φ^-1 ) = 1/2( ad^*_R ∇ H_j(L)· L , ∂_t_kφ·_R φ^-1 ) = -1/2( ad^*_∂_t_kφ·_R φ^-1· L ,R ∇ H_j(L)). Since ( ad^R*_∂_t_kφ·_R φ^-1· L ,∇ H_j(L)) = 1/2( ad^*_∂_t_kφ·_R φ^-1· L ,R∇ H_j(L)) + 1/2( ad^*_R ∂_t_kφ·_R φ^-1· L ,∇ H_j(L)) and ( ad^*_R ∂_t_kφ·_R φ^-1· L ,∇ H_j(L))=-( ad^*_∇ H_j(L)· L ,R∂_t_kφ·_R φ^-1) = 0, we have a further simplification to ( ∂_t_j L , ∂_t_kφ·_R φ^-1 ) = - ( ad^R*_∂_t_kφ·_R φ^-1· L ,∇ H_j(L)) = - ( ∂_t_kL ,∇ H_j(L)) = - ∂_t_kH_j(L) = 0 where we have used the result from (<ref>) (with k↔ j). Similarly, we have, for the second term ( ∂_t_k L , ∂_t_jφ·_R φ^-1 ) = - ∂_t_jH_k(L) = 0. For the last remaining term, we have ( L , ∂_t_kφ·_R ∂_t_jφ^-1 - ∂_t_jφ·_R ∂_t_kφ^-1 ) = ( L , -∂_t_kφ·_R φ^-1·_R ∂_t_jφ·_R φ^-1 + ∂_t_jφ·_R φ^-1·_R ∂_t_kφ·_R φ^-1 ) = ( L , [ ∂_t_jφ·_R φ^-1 , ∂_t_kφ·_R φ^-1 ]_R ) = -( ad^R*_∂_t_jφ·_R φ^-1· L , ∂_t_kφ·_R φ^-1 ) = - ( ∂_t_j L , ∂_t_kφ·_R φ^-1 )= ∂_t_kH_j(L) = 0. It is worth noting that the properties of our Lagrangian multiform heavily rely on the mCYBE for R. It is at the heart of the commutativity of the flows and the closure relation. The connection between the closure relation and the CYBE was first identified and established in <cit.> in the context of integrable field theories. Here it is established in the finite-dimensional context and related to Lie dialgebras. Although we have fixed Λ so far, it is natural to ask what happens if we view it as a Lagrange multiplier and derive the Euler-Lagrange equation associated with its variation δΛ∈^*. The variation δ L acquires a new term and now reads δ L=ad^R*_δφ·_Rφ^-1· L+Ad^R^*_φ·δΛ The computation of δ_k proceeds as before, taking into account the new term in δ L. The new contribution related to δΛ reads (δΛ , Ad^R_φ^-1·(∂_t_kφ·_Rφ^-1-∇ H_k(L))) and must be zero for any variation δΛ∈^*. Therefore, ∂_t_kφ·_Rφ^-1=∇ H_k(L) or equivalently, with φ_±=R_±φ, ∂_t_kφ_± =R_±(∇ H_k(L)) φ_± . 
There seems to be an intriguing relationship between the variational origin of these equations and the mechanism yielding the factorisation theorem, see (<ref>), suggesting that there is a link with dressing transformations. We do not have yet a clear understanding of dressing transformations in the theory of Lagrangian multiforms and this is a topic for future investigation. §.§ Closure relation, Hamiltonians in involution, Kostant-Kirillov form In this section, we derive a structural result which brings together Lagrangian multiforms and essential Hamiltonian aspects of integrable systems. It will be convenient and clearer to work with local coordinates ϕ_α, α=1,…,M on the group G_R, as we did in (<ref>). Then, our Lagrangian multiform can be written in the form [φ] =∑_k=1^N( ∑_α=1^M π_α ∂_t_kϕ_α - H_k) dt_k where we recall that the momentum π_α is defined by π_α=( Λ , φ^-1·_R ∂φ/∂ϕ_α ) . Each Lagrangian _k in the multiform has the structure pq̇-H of a Lagrangian in phase space _k=∑_α=1^M π_α ∂_t_kϕ_α - H_k , and yields its Euler-Lagrange equations from the variation δ_k=∑_β=1^M(∑_α=1^M (∂π_α/∂φ_β-∂π_β/∂φ_α) ∂_t_kϕ_α - ∂ H_k/∂ϕ_β)δϕ_β+∂_t_k(∑_α=1^M π_αδϕ_α) . This is of course consistent with the general result of the previous section and the comparison of the two expressions for δ_k gives ( -∂_t_k L+1/2 ad^*_R∇ H_k(L)· L , δ φ·_R φ^-1 )= ∑_β=1^M(∑_α=1^M Ω_αβ ∂_t_kϕ_α - ∂ H_k/∂ϕ_β)δϕ_β and ( L , δ φ·_R φ^-1 )=(∑_α=1^M π_αδϕ_α) . Thus, we have natural coordinate versions of key components of the theory. In particular, let us denote by θ_R the vertical 1-form θ_R=-∑_α=1^M π_αδϕ_α=-∑_α=1^M ( Λ , φ^-1·_R ∂φ/∂ϕ_α )δϕ_α=-( Λ , φ^-1·_R δφ ) . and let us introduce the vertical 2-form Ω_R=∑_α<βΩ_αβ δϕ_α∧δϕ_β , Ω_αβ=∂π_α/∂φ_β-∂π_β/∂φ_α . Observe the important relation Ω_R=δθ_R . The form Ω_R is the pullback to the group G_R by the map χ:  G_R → O_Λ φ↦Ad^R*_φ·Λ of the Kostant-Kirillov symplectic form ω_R on the coadjoint orbit through Λ∈^*. We recall here that we consider the coadjoint action of the group G_R, not the group G. Relation (<ref>) is the well-known fact that this pullback is an exact form. The expression φ^-1·_R δφ appearing in θ_R can be interpreted as the Maurer-Cartan form on G_R. The structure of our Lagrangian coefficients, in particular their kinetic part, is now elucidated in terms of fundamental objects associated with G_R and its coadjoint orbits in ^*. It is known that the map χ is a submersion[We suppose that we are in a situation where this holds, for instance excluding the trivial case where the orbit is reduced to a point and assuming that the G_R action is proper.]. Also, a coadjoint orbit is always even dimensional as it admits the nondegenerate symplectic form ω_R. Let us introduce local coordinates ξ_m, m=1,…, 2p, on O_Λ (2p≤ M). The tangent map χ_* is represented locally by the 2p× M matrix (∂ξ_m/∂ϕ_α). From now on summation over repeated indices is understood. The pushforward of the vector fields ∂/∂ϕ_α on G_R is given by χ_*(∂/∂ϕ_α) =∂ξ_m/∂ϕ_α ∂/∂ξ_m and the pullback of the differential 1-forms δξ_m on O_Λ reads χ^*(δξ_m)=∂ξ_m/∂ϕ_α δϕ_α . If we write for the Kostant-Kirillov form ω_R=ω_mn δξ_m∧δξ_n , then we have the following relation with the coefficients of its pullback Ω_R=χ^*(ω_R) Ω_αβ= ∂ξ_m/∂ϕ_α ∂ξ_n/∂ϕ_β ω_mn . In view of (<ref>), it remains to introduce the Euler-Lagrange vertical 1-forms on G_R EL_k≡ EL_k^β δϕ_β≡(Ω_αβ ∂_t_kϕ_α -∂ H_k/∂ϕ_β) δϕ_β . 
This is the pullback of the following vertical 1-form on O_Λ EL_k=χ_*(Υ_k)= Υ_k^n χ_*(δξ_n)=(∑_mω_mn ∂_t_kξ_m -∂ H_k/∂ξ_n)χ_*(δξ_n) with the relation EL_k^β= Υ_k^n ∂ξ_n/∂ϕ_β . Since χ is a submersion, the matrix (∂ξ_m/∂ϕ_α) has maximal rank 2p so the Euler-Lagrange equations EL_k^β=0 imply the equations Υ_k^n=0 (and vice versa). This is of course just the confirmation in the present coordinate notations of the result we obtained previously that the (multi-time) Euler-Lagrange equations from our Lagrangian multiform produce Lax equations naturally living on coadjoint orbits of G_R. As a consequence, whenever we say that an equality holds “on shell”, we mean that it holds modulo EL_k^β=0 or equivalently Υ_k^n=0. We can take advantage of this in the following way. Ω_R is the pullback of the Kostant-Kirillov form ω_R on the coadjoint orbit O_R. The latter is nondegenerate and therefore induces a Poisson bracket with bivector P_R=∑_m<nP_mn∂/∂ξ_m∧∂/∂ξ_n , P_mn ω_nr=δ_mr . The corresponding Poisson bracket on O_Λ is known (see e.g. <cit.>) to be the restriction of the Lie-Poisson bracket (<ref>) on ^* {f,g}_R(ξ)=(ξ,[∇ f(ξ) ,∇ g(ξ)]_R) . In other words, when f, g are restricted to O_Λ, we have {f,g}_R=P_mn∂ f/∂ξ_m∂ g/∂ξ_n . With these notions introduced, we see that the Euler-Lagrange equations Υ_k^n=0 take the form ∑_mω_mn ∂_t_kξ_m =∂ H_k/∂ξ_n , and can be written in Hamiltonian form ∂_t_kξ_m=P_mn∂ H_k/∂ξ_n={ξ_m,H_k}_R . The system of simultaneous equations (<ref>) on the ξ_m admits a solution (at least locally) if and only if the flows are compatible, [∂_t_k,∂_t_ℓ]=0. For an arbitrary function f, this means [∂_t_k,∂_t_ℓ]f={{H_k, H_ℓ}_R,f}_R=0 . The stronger condition { H_k, H_ℓ}_R=0 is the familiar Hamiltonian criterion for integrability (together with a sufficient number of independent such functions H_k of course). After these preliminary steps, we are now ready to state our second main result and its corollary, the significance of which will be discussed after the proofs. The following identity holds ∂_k/∂ t_ℓ-∂_ℓ/∂ t_k+Υ_k^m P_mn Υ_ℓ^n={H_k,H_ℓ}_R . The proof is by direct computation. ∂_k/∂ t_ℓ-∂_ℓ/∂ t_k =(∂π_α/∂ϕ_β-∂π_β/∂ϕ_α)∂_t_ℓϕ_β ∂_t_kϕ_α-∂ H_k/∂ϕ_β∂_t_ℓϕ_β+∂ H_ℓ/∂ϕ_α∂_t_kϕ_α =(Ω_αβ ∂_t_kϕ_α-∂ H_k/∂ϕ_β)∂_t_ℓϕ_β+∂ H_ℓ/∂ξ_m∂ξ_m/∂ϕ_α∂_t_kϕ_α =(ω_mn ∂_t_kξ_m-∂ H_k/∂ξ_n)∂_t_ℓξ_n+∂ H_ℓ/∂ξ_m∂_t_kξ_m =(ω_mn ∂_t_kξ_m-∂ H_k/∂ξ_n)P_nr( ω_rs ∂_t_ℓξ_s+∂ H_ℓ/∂ξ_r-∂ H_ℓ/∂ξ_r)+∂ H_ℓ/∂ξ_m∂_t_kξ_m =-Υ_k^n P_nr Υ_ℓ^r+∂ H_k/∂ξ_n P_nr ∂ H_ℓ/∂ξ_r, hence the result. The closure relation for the Lagrangian multiform is equivalent to the involutivity of the Hamiltonians H_k with respect to the Lie-Poisson R-bracket {  , }_R. The closure relation requires that on shell, we have d=∑_k<ℓ(∂_ℓ/∂ t_k-∂_k/∂ t_ℓ)dt_k∧ dt_ℓ=0 . From the previous theorem, on shell we have ∂_k/∂ t_ℓ-∂_ℓ/∂ t_k={H_k,H_ℓ}_R . Hence the result. The connection between the closure relation for Lagrangian 1-forms and the involutivity of Hamiltonians was first discussed in <cit.>. The content of our Corollary establishes this result for all Lagrangian 1-forms in the class that we have introduced in this paper. They include any system describable by the coadjoint orbit and r-matrix methods of Lie dialgebras. An extension of the connection between closure and involutivity to the field theory context (Lagrangian 2-forms) was discussed in <cit.>. In the field theory context, the connection between the closure relation and the classical Yang-Baxter equation was elucidated in <cit.>. 
In the present article on Lagrangian 1-forms, this connection is also at the heart of our results since the entire construction is based on the availability of the second Lie bracket [  , ]_R on , a feature ensured if R satisfies the mCYBE. The content of the theorem sheds fundamental light on the link between the closure relation and the involutivity of the Hamiltonian as it establishes an off-shell identity which clearly shows the interplay between the coefficients d, the Euler-Lagrange equations, the Poisson tensor on the coadjoint orbit and the Poisson bracket of the Hamiltonians related to our Lagrangian coefficients. A particular point is that it shows in the present general setting that d has a so-called “double zero” on the equation of motion. This idea was introduced in <cit.> and developed in <cit.> as an important ingredient of Lagrangian multiform theory. However, the relation to Hamiltonians in involution was not noticed there. The status of the “double zero” term Υ_k^n P_nr Υ_ℓ^r is now clearly identified as well as its relation to the Euler-Lagrange equations. This term is the off-shell element linking the Hamiltonian integrability criterion {H_k,H_ℓ}_R=0 and the integrability criterion advocated in Lagrangian multiform theory: the closure relation d=0 on shell. § OPEN TODA CHAIN IN THE AKS SCHEME As this is our first example, we first spend some time reviewing the known Adler-Kostant-Symes Lie algebraic construction of the Lax matrix and the Lax equation reproducing Flaschka's approach. Then, we will make the connection with our variational approach. Algebraic setup: Let us choose =(N+1), the Lie algebra of (N+1)× (N+1) traceless real matrices, _+ the Lie subalgebra of skew-symmetric matrices and _- the Lie subalgebra of upper triangular traceless matrices, yielding =_+⊕_- . Here R=P_+-P_- and R_±=± P_± with P_± the projector on _± along _∓. The following Ad-invariant nondegenerate bilinear form ⟨ X,Y⟩= Tr(XY) allows the identification ^*∼, and it induces the decomposition ^*=_-^*⊕_+^*≃_+^⊥⊕_-^⊥ , where _±^⊥ is the orthogonal complement of _± with respect to ⟨  , ⟩: _+^⊥ is the subspace of traceless symmetric matrices and _-^⊥ the subspace of strictly upper triangular matrices. Let us choose Λ =[ 0 1 0 0 … 0; 1 0 1 0 … 0; 0 1 0 1 … 0; 0 0 1 ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ 1; 0 0 0 … 1 0; ]∈_-^*≃_+^⊥ and consider its orbit under the (co)adjoint action of G_-, the Lie subgroup associated to _- consisting of upper triangular matrices with unit determinant. Lax matrix and Lax equations for the first two flows: As explained in Section <ref>, the AKS case corresponds to the particular case where φ∈ G_- so that L=Ad^R*_φ·Λ=-R_-^*(Ad^*_φ_-·Λ), and the coadjoint orbit O_Λ lies in _-^*. Using ⟨  , ⟩ we can identify the adjoint and coadjoint actions ad^*∼ad and Ad^*∼Ad. Also, we use use it to identify the transpose A^*:^*→^* of any linear map A:→ with the transpose of A with respect to ⟨  , ⟩ defined on . Writing (ξ,X)=⟨ Y,X⟩ means that we have (A^*(ξ),X)=(ξ,A(X))=⟨ Y,A(X) ⟩=⟨ A^*(Y),X ⟩ . This allows us to work with L=-R_-^*(φ_- Λ φ_-^-1)=-R_-^*(φ Λ φ^-1) , where we have dropped the redundant subscript on φ in the second equality with φ=φ_-∈ G_-. From the definitions ⟨ X , R_± Y ⟩=⟨ R^*_± X , Y ⟩ and ⟨ X , P_± Y ⟩=⟨ Π_∓ X , Y ⟩, where we denote by Π_± the projector onto _±^⊥ along _∓^⊥, we find R^*_±=±Π_∓. Note that this is an example of non-skew-symmetric r-matrix since R^*=Π_–Π_+≠ -R=P_–P_+ . 
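As a concrete sanity check (ours, not part of the original text), the following short Python/numpy sketch verifies numerically that the splitting R = P_+ - P_- into skew-symmetric and upper-triangular parts satisfies the mCYBE in the convention [RX, RY] - R([RX, Y] + [X, RY]) = -[X, Y], consistent with the closure computation earlier; the matrix size and the random sampling are illustrative choices.

    import numpy as np

    def split(X):
        # AKS splitting: X = X_plus + X_minus with X_plus skew-symmetric and
        # X_minus upper triangular; the strictly lower triangle of X
        # determines X_plus uniquely.
        lower = np.tril(X, k=-1)
        X_plus = lower - lower.T
        return X_plus, X - X_plus

    def R(X):
        X_plus, X_minus = split(X)
        return X_plus - X_minus            # R = P_+ - P_-

    def bracket(X, Y):
        return X @ Y - Y @ X

    rng = np.random.default_rng(0)
    n = 4                                  # sl(4) as an illustration

    def rand_sl(n):
        X = rng.standard_normal((n, n))
        return X - (np.trace(X) / n) * np.eye(n)

    X, Y = rand_sl(n), rand_sl(n)
    lhs = bracket(R(X), R(Y)) - R(bracket(R(X), Y) + bracket(X, R(Y)))
    rhs = -bracket(X, Y)
    print(np.max(np.abs(lhs - rhs)))       # ~1e-15: R solves the mCYBE

The strictly lower triangle of X fixes its skew-symmetric part uniquely, which is what makes the projectors P_± (and hence R) well defined.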
Now φ Λ φ^-1 is of the form φ Λ φ^-1=[ a_1 * * * … *; b_1 a_2 * * … *; 0 b_2 a_3 * … *; 0 0 b_3 ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ *; 0 0 0 … b_N a_N+1; ] . So, we find L=Π_+(φ Λ φ^-1)=[ a_1 b_1 0 0 … 0; b_1 a_2 b_2 0 … 0; 0 b_2 a_3 b_3 … 0; 0 0 b_3 ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ b_N; 0 0 0 … b_N a_N+1; ] , it is symmetric tridiagonal. Using the Hamiltonian H_1(L) = -1/2 Tr L^2 , we then find R_+∇ H_1(L)=P_+(-L)=[ 0 b_1 0 0 … 0; -b_1 0 b_2 0 … 0; 0 -b_2 0 b_3 … 0; 0 0 -b_3 ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ b_N; 0 0 0 … -b_N 0; ] . A direct substitution in (<ref>) with k=1, ∂_t_1L=[R_±∇ H_1(L),L] reproduces the open finite Toda lattice equations in Flaschka's coordinates a_n, b_n∂_t_1a_1=2b_1^2 , ∂_t_1a_N+1=-2b_N^2 , ∂_t_1a_j=2(b_j^2-b_j-1^2) , j=2,…,N , ∂_t_1b_j=b_j(a_j+1-a_j) , j=1,…,N . The next flow generated by the Hamiltonian H_2(L) = -1/3 Tr L^3 , with gradient ∇ H_2(L)=-L^2 yields R_+∇ H_2(L)=P_+(-L^2) as [ 0 b_1(a_1+a_2) b_1 b_2 0 … 0; -b_1(a_1+a_2) 0 b_2(a_2+a_3) b_2 b_3 … 0; -b_1 b_2 -b_2(a_2+a_3) 0 b_3(a_3+a_4) … 0; 0 -b_2 b_3 -b_3(a_3+a_4) ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ b_N(a_N+a_N+1); 0 0 0 … -b_N(a_N+a_N+1) 0; ] . The corresponding equations from (<ref>) with k=2 read ∂_t_2a_1=2 b_1^2(a_1+a_2) , ∂_t_2a_N+1=-2 b_N^2(a_N+a_N+1) , ∂_t_2a_j=2b_j^2(a_j+a_j+1)-2 b_j-1^2(a_j-1+a_j) , j=2,…,N, ∂_t_2b_1= b_1(a_2^2-a_1^2+b_2^2 ) , ∂_t_2b_N=b_N(a_N+1^2-a_N^2-b_N-1^2) , ∂_t_2b_j=b_j(a_j+1^2-a_j^2+b_j+1^2-b_j-1^2) , j=2,…,N. Lagrangian description: We need to choose a convenient parametrisation of φ since this is the essential ingredient in the Lagrangians _k. We choose φ=U Y , where Y= diag(y_1,…,y_N+1) is the diagonal matrix of diagonal elements of φ (y_i=φ_ii) and U=φ Y^-1 is the upper triangular matrix with 1 on the diagonal and arbitrary elements u_ij, 1≤ i<j≤N. Since φ has non zero determinant, y_i≠ 0, i=1,…,N+1, and with this parametrisation, we find L as in (<ref>) with a_1=y_2y_1 u_12 , a_N+1=-y_N+1y_N u_N,N+1 , a_i=y_i+1y_i u_i,i+1-y_iy_i-1 u_i-1,i , i=2,…,N , b_i=y_i+1y_i , i=1,…,N . Note that ∑_j=1^N+1a_j=0, so we have 2N independent variables on the coadjoint orbit O_Λ. We compute the kinetic part of _k defined in (<ref>) as K_k = ⟨-R^*_-(φ Λ φ^-1) , ∂_t_kφ·_R φ^-1⟩ = -⟨φ Λ φ^-1 , R_-(∂_t_kφ·_R φ^-1)⟩ =-⟨φ Λ φ^-1 , ∂_t_kφ φ^-1⟩=- Tr(Λ φ^-1 ∂_t_kφ) , where in the third step we have used the morphism property of R_- R_-(∂_t_kφ·_R φ^-1)=∂_t_kφ_- φ_-^-1 and φ_-=φ∈ G_-. It remains to express it in terms of our chosen coordinates to get K_k =- Tr(Λ Y^-1 U^-1 ∂_t_k(UY) )=- Tr(Y Λ Y^-1 U^-1 ∂_t_kU )= - ∑_j=1^Ny_j+1/y_j ∂_t_ku_j,j+1 . From these results, it becomes apparent that the convenient coordinates are b_i as given in (<ref>) and u_i≡ u_i,i+1, i=1,…,N. The first two Lagrangians involve the Hamiltonians (<ref>) and (<ref>), and can now be expressed in the u_i,b_i coordinates as follows _1= K_1 - H_1 = -∑_j=1^Nb_j ∂_t_1u_j+ 1/2∑_j=2^N(b_j u_j-b_j-1 u_j-1)^2+∑_j=1^Nb_j^2+ 1/2b_1^2 u_1^2+ 1/2b_N^2 u_N^2 , _2=K_2 - H_2 = -∑_j=1^Nb_j ∂_t_2u_j+1/3∑_j=2^N(b_j u_j-b_j-1 u_j-1)^3+1/3(b_1 u_1)^3+1/3(b_N u_N)^3 +∑_j=2^N-1b_j^2(b_j+1 u_j+1-b_j-1 u_j-1)+b_1^2(b_2 u_2)-b_N^2(b_N-1 u_N-1) . 
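Before varying these Lagrangians, it may help to check numerically that the matrices above do reproduce Flaschka's equations for the first flow. The following Python/numpy sketch is ours, not part of the original text; the size and the random initial data are illustrative.

    import numpy as np

    def toda_L(a, b):
        # tridiagonal Lax matrix in Flaschka coordinates
        return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

    def P_plus(X):
        # projector onto skew-symmetric matrices along upper triangular ones
        lower = np.tril(X, k=-1)
        return lower - lower.T

    rng = np.random.default_rng(1)
    N = 4
    a = rng.standard_normal(N + 1); a -= a.mean()     # keep Tr L = 0
    b = rng.standard_normal(N)

    L = toda_L(a, b)
    M = P_plus(-L)                    # R_+ grad H_1(L) for H_1 = -Tr(L^2)/2
    Ldot = M @ L - L @ M              # Lax equation for the t_1 flow

    # compare with Flaschka's equations read off the tridiagonal entries
    a_dot = np.concatenate(([2 * b[0]**2],
                            2 * (b[1:]**2 - b[:-1]**2),
                            [-2 * b[-1]**2]))
    b_dot = b * (a[1:] - a[:-1])
    print(np.allclose(np.diag(Ldot), a_dot), np.allclose(np.diag(Ldot, 1), b_dot))

An analogous check with P_+(-L^2) reproduces the t_2 equations.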
The variation of _1 reads δ_1 = -∑_j=1^N∂_t_1u_j δ b_j+∑_j=1^N∂_t_1b_j δ u_j- ∂_t_1∑_j=1^Nb_j δ u_j    + ∑_j=2^N(b_ju_j-b_j-1u_j-1)(u_j δ b_j+b_j δ u_j)- ∑_j=1^N-1(b_j+1u_j+1-b_ju_j)(u_j δ b_j+b_j δ u_j)    +2∑_j=1^Nb_j δ b_j+ b_1u_1^2 δ b_1+b_1^2u_1 δ u_1+ b_Nu_N^2 δ b_N + b_N^2u_N δ u_N , and gives the following Euler-Lagrange equations ∂_t_1u_1=u_1^2 b_1-u_1 (b_2 u_2-b_1 u_1)+2 b_1 , ∂_t_1u_N=u_N (b_N u_N-b_N-1 u_N-1)+u_N^2 b_N+2 b_N , ∂_t_1u_j=u_j(b_j u_j-b_j-1 u_j-1)-u_j(b_j+1 u_j+1-b_j u_j)+2 b_j , ∂_t_1b_1=b_1(b_2u_2-b_1u_1)-b_1^2 u_1 ,  ∂_t_1b_N=-b_N(b_Nu_N-b_N-1u_N-1)-b_N^2 u_N , ∂_t_1b_j=b_j(b_j+1u_j+1-b_ju_j)-b_j(b_ju_j-b_j-1u_j-1) , for j=2,…,N-1. It is easy to see that these equations give exactly (<ref>) using the identification (see (<ref>)) a_1=b_1u_1 , a_N+1=-b_Nu_N , a_j=b_ju_j-b_j-1u_j-1 , j=2,…,N . This provides a very explicit check that our Lagrangians produce the corresponding Lax equations, in coordinates naturally dictated by the coadjoint orbit construction of the kinetic term, here u_j, b_j. As recalled in Section <ref>, the kinetic part of a Lagrangian provides the (pullback of the) symplectic form of the model via the Cartan form θ_R. Here, we have (see the total derivative term in δ_1) θ_R=∑_j=1^Nb_j δ u_j ⇒ Ω_R=∑_j=1^Nδ b_j∧δ u_j . This shows that the coordinates u_j, b_j are canonical. In the present case, choosing b_j, a_j for j=1,…,N, as the coordinates on the coadjoint orbit O_Λ, we can also express the Kostant-Kirillov form explicitly using the formula u_j=1/b_j∑_ℓ=1^j a_ℓ to get ω_R=∑_j=1^N1/b_j∑_ℓ=1^jδ b_j∧δ a_ℓ . It is instructive to see how the usual Hamiltonian formulation of the open Toda chain in canonical coordinates q_i,p_i is derived from our Lagrangian formulation. From the symplectic form (<ref>), we deduce the following (canonical) Poisson brackets[Here we drop the subscript R when referring to the Poisson bracket {  , }_R since there will be no confusion with another Poisson bracket.] {b_j , u_k}=δ_jk , {b_j , b_k}=0={u_j , u_k} , j,k=1,…,N . The Legendre transformation ∂_1/∂ (∂_t_1 u_j)=b_j reproduces, as it should, the Hamiltonian ∑_j=1^N∂_1/∂ (∂_t_1 u_j) ∂_t_1u_j-_1= - 1/2∑_j=2^N(b_j u_j-b_j-1 u_j-1)^2-∑_j=1^Nb_j^2- 1/2b_1^2 u_1^2- 1/2b_N^2 u_N^2=H_1(L) . The matrix L for 𝔰𝔩(N+1) in canonical coordinates (q_i, p_i) is given by L = [ p_1 e^q_1-q_2 0 0 0 …; e^q_1-q_2 p_2 e^q_2-q_3 0 0 …; 0 e^q_2-q_3 p_3 e^q_3-q_4 0 …; 0 0 ⋱ ⋱ ⋱ ; ⋮ e^q_N-1-q_N p_n e^q_N-q_N+1; 0 e^q_N-q_N+1 p_N+1 ] , and by comparison with (<ref>), we set the change of variables q_j=∑_k=j^Nln b_k , j=1,…,N , p_j=b_j u_j-b_j-1 u_j-1 , j=2,…,N , p_1=b_1 u_1 , p_N+1=-b_N u_N . From (<ref>), we deduce by direct calculation {q_j , p_k}=δ_jk ,  {q_j , q_k}=0={p_j , p_k} ,  j,k=1,…,N , {q_j , p_N+1}=-1 ,  {p_j , p_N+1}=0 ,  j=1,…,N . Note that p_N+1 is redundant for the description of the dynamics since we only need the map (u_j,b_j)↦ (q_j,p_j) for j=1,…,N. This is captured by the fact that the previous relations imply that C=∑_j=1^N+1 p_j is a Casimir on the 2N phase space with coordinates (q_1,…,q_N,p_1,…,p_N+1) and we can work with C=0. The coordinate p_N+1 is still useful to write the Hamiltonian in the compact familiar form as H_1= -1/2∑_j=1^N+1p_j^2-∑_j=1^N-1e^2(q_j-q_j+1)-e^2q_N . Hamilton's equations ∂_t_1q_j={q_j,H_1}, ∂_t_1p_j={p_j,H_1} yield ∂_t_1p_1=2e^2(q_1-q_2) , ∂_t_1p_N+1=-2e^2q_N , ∂_t_1p_j=2(e^2(q_j-q_j+1)-e^2(q_j-1-q_j)) , j=2,…,N-1 , ∂_t_1q_j=p_N+1-p_j ,  j=1,…,N . 
These can be seen to be equivalent to (<ref>), thus completing the Hamiltonian description of the first flow for open Toda chain, from our Lagrangian formulation. The same analysis can be performed with _2 although the calculations are longer. We simply record here the Euler-Lagrange equations obtained from δ_2 ∂_t_2u_1= u_1((b_1 u_1)^2-(b_2 u_2-b_1 u_1)^2-b_2^2 ) +2 b_1 b_2 u_2 , ∂_t_2u_N=u_N((b_N u_N-b_N-1 u_N-1)^2-(b_N u_N)^2+b_N-1^2 ) -2 b_N b_N-1 u_N-1 , ∂_t_2u_j=u_j((b_j u_j-b_j-1 u_j-1)^2-(b_j+1 u_j+1-b_j u_j)^2)+u_j (b_j-1^2-b_j+1^2) +2 b_j (b_j+1 u_j+1-b_j-1 u_j-1) , ∂_t_2b_1= b_1((b_2 u_2-b_1 u_1)^2-(b_1 u_1)^2+b_2^2 ) , ∂_t_2b_N= b_N((b_N u_N)^2-(b_N u_N-b_N-1 u_N-1)^2-b_N-1^2 ) , ∂_t_2b_j= b_j((b_j+1 u_j+1-b_j u_j)^2-(b_j u_j-b_j-1 u_j-1)^2)-b_j (b_j-1^2-b_j+1^2) , for j=2,…,N-1. We leave it to the reader to check that these correctly reproduce (<ref>) using again (<ref>). To conclude this example we establish the closure relation for the first two flows, ∂_t_2_1-∂_t_1_2=0  on shell . We know from our general results that this must hold, so this is simply an explicit check. We know that the kinetic and potential contributions give zero separately, so we split the calculations accordingly. For the potential terms, it is more expedient to use the a_j,b_j coordinates[Note that for conciseness, we treated the equations for j=1 and j=N on the same level as for j=2,…,N-1 by formally introducing b_0=0 and b_N+1.] and equations (<ref>) and (<ref>) ∂_t_2H_1-∂_t_1H_2 = ∂_t_1(∑_j=1^N+1a_j^3/3+∑_j=1^Nb_j^2(a_j+a_j+1) )-∂_t_2(∑_j=1^N+1a_j^2/2+∑_j=1^Nb_j^2 ) = ∑_j=1^N+1 2 a_j^2(b_j-1^2-b_j^2) + ∑_j=1^N2b_j^2(a_j^2-a_j+1^2+b_j-1^2-b_j+1^2) -∑_j=1^N+1 2 a_j(b_j-1^2(a_j-1+a_j)-b_j^2(a_j+a_j+1)) -∑_j=1^N2b_j^2(a_j^2-a_j+1^2+b_j-1^2-b_j+1^2) =∑_j=1^N+1 2 (a_ja_j+1b_j^2-a_ja_j-1b_j-1^2 ) =0 , where in the last step we recognise a telescopic sum. For the kinetic terms, we also use the a_j, b_j coordinates wherever possible to expedite the calculations ∂_t_1K_2 - ∂_t_2K_1 = ∑_j=1^N(∂_t_1 (b_j∂_t_2u_j)-∂_t_2(b_j∂_t_1u_j )) =∑_j=1^N(∂_t_1((a_j+1^2-a_j^2+b_j+1-b_j-1^2)u_jb_j-2b_j^2(a_j+a_j+1) )-∂_t_2((a_j+1-a_j)u_jb_j-2b_j^2 )) =∑_j=1^N((∂_t_1(a_j+1^2-a_j^2+b_j+1-b_j-1^2)- ∂_t_2(a_j+1-a_j)) u_jb_j - 2b_j^2(b_j+1^2-b_j-1^2)) =0 , where in the last step the first term gives zero for each j upon using the equations of motion and the remaining terms form a telescopic sum adding up to zero. A Lagrangian multiform for the Toda chain was first constructed in <cit.> using variational symmetries of a given starting Lagrangian, which would be _1 in our context, to construct higher Lagrangian coefficients which constitute a multiform when assembled together. The infinite Toda chain was studied more recently in <cit.> to illustrate the newly introduced theory of Lagrangian multiforms over semi-discrete multi-time. In <cit.>, the analogue of our _2 and _3 were constructed. The Noether integrals J_1 and J_2 (equations (10.11) and (10.12) in <cit.>) which constitute the potential part of their Lagrangians are nothing but H_2(L) and H_3(L) with L parametrised as in (<ref>), up to an irrelevant change of convention e^q_i-q_i+1→e^q_i+1-q_i and setting q_i=x_i and p_i=ẋ_i. The kinetic part of the higher Lagrangians in <cit.> involves the so-called alien derivatives which are symptomatic of constructing a multiform from a starting Lagrangian and building compatible higher Lagrangian coefficients. 
Our construction prevents the problem of alien derivatives altogether, putting all the Lagrangian coefficients on equal footing. This was also achieved previously in the context of field theories in <cit.>. § OPEN TODA CHAIN WITH A SKEW-SYMMETRIC R-MATRIX We now present the same model for the same algebra =(N+1) but endowed with a different Lie dialgebra structure. This is based on the Cartan decomposition of and leads to a skew-symmetric r-matrix. One attractive feature of this setup that we only illustrate for (N+1) is that it allows for a generalisation to any finite semi-simple Lie algebra, see <cit.>. Algebraic setup: Consider the decomposition 𝔤 = 𝔫_+ ⊕𝔥⊕𝔫_- , where 𝔥 is the Cartan subalgebra of diagonal (traceless) matrices and 𝔫_± the nilpotent subalgebra of strictly upper/lower matrices. Let P_±, P_0 be the projectors onto 𝔫_± and 𝔥 respectively, relative to the decomposition (<ref>) and set R=P_+-P_-. It can be verified that R satisfies the mCYBE. Here R_±=±(P_±+P_0/2) and 𝔤_± = Im(R_±) = 𝔟_± = 𝔥⊕𝔫_± . We have the following action of R_± on the elements y ∈𝔥 and w_±∈𝔫_± R_± (y) = ±1/2 y , R_± (w_±) = ± w_± , R_± (w_∓) = 0 . Taking the same bilinear form as in (<ref>), ⟨ X, Y⟩=(XY), we see that P_±^*=P_∓ ,  P_0^*=P_0  so that   R^*=-R . Thus we have a skew-symmetric r-matrix here. For the related Lie groups we have the following factorisation close to the identity φ = φ_+ φ_-^-1 , φ_± = W_± Y^± 1 , Y∈exp(𝔥) , W_±∈exp(𝔫_±) . Lax matrix and Lax equations for the first two flows: For Λ∈𝔤^*≃, the expression of L as a coadjoint orbit of Λ is given by L = Ad^R*_φ·Λ = R^^*_+( W_+ Y Λ Y^-1 W_+^-1) - R^^*_-(W_- Y^-1 Λ Y W_-^-1) . We choose Λ as in (<ref>), emphasising that in this case it is an element of the full 𝔤^* ≃𝔤, and Y ∈exp(𝔥), W_±∈exp(𝔫_±) given by Y = diag( η_1 , η_2 …, η_N+1) , Y = 1 , W_- =[ 1 0 0 … 0; ω^-_2,1 1 0 … 0; ω^-_3,1 ω^-_3,2 1 … 0; ⋮ ⋱ ⋱ ⋱ 0; ω^-_N,1 ω^-_N,2 … ω^-_N,N-1 1; ] ,    W_+ =[ 1 ω^+_1, 2 ω^+_1, 3 … ω^+_1, N; 0 1 ω^+_2, 3 … ω^+_2 ,N; 0 0 1 ⋱ ⋮; ⋮ ⋱ ω^+_N-1, N; 0 0 … 0 1; ] . From (<ref>), we deduce that R^*_±=±(P_∓+P_0/2) so that R_±^* (y) = ±1/2 y , R_±^* (w_±) = 0 , R_±^* (w_∓) = ± w_∓ , for y ∈𝔥 , w_±∈𝔫_±. Let us introduce the variables (w_i,z_i), defined as w_i = ω^+_i,i+1 - ω^-_i+1,i/2 , z_i=2 η_i+1/η_i , from which we determine the Flaschka coordinates as a_i = w_i z_i-w_i-1 z_i-12 , i = 2, …, N-1 , a_1 = w_1 z_12 , a_N+1 = -w_N z_N2 , b_i = z_i2 , i=1, …, N . The evaluation of (<ref>) in those coordinates reproduces the tridiagonal form as in (<ref>). One can then check that the equations for the first two flows (<ref>) and (<ref>) in the previous section derive from the Lax equation ∂_t_k L = [ R_+(∇ H_k(L)),L ] , k = 1,2 , where the Hamiltonians are taken as H_1(L) = (L^2) , H_2(L) = 2/3 (L^3) , and we recall that R_+=P_+ +P_0/2 here. Lagrangian description: The Lagrangian multiform takes the form for L ∈𝒪_Λ , φ∈ G_R = ∑_k _k dt_k = ∑_k ( K_k(L) - H_k(L) ) dt_k , where the kinetic and the potential terms are given by K_k(L) = Tr ( L ∂_t_kφ·_R φ^-1 ) , H_k(L) = 2/k+1 Tr (L^k+1 ) , respectively. As in the previous section, the kinetic term will let us recognise natural canonical variables of the system in this description. Recalling (<ref>), (<ref>), (<ref>) and (<ref>), we find K_k(L) = Tr ( Λ φ^-1·_R ∂_t_kφ ) = Tr ( Λ φ^-1_+ ·∂_t_kφ_+ ) - Tr ( Λ φ^-1_- ·∂_t_kφ_- ) =∑_i=1^N η_i+1η_i ∂_t_kω_i,i+1^+ -∑_i=1^N η_i+1η_i ∂_t_kω_i+1,i^- =∑_i=1^N z_i ∂_t_k w_i . 
The k-th Lagrangian coefficient expressed in terms of the coordinates (w_i,z_i) reads _k = K_k - H_k = ∑_i=1^N z_i ∂_t_kw_i - H_k , with (w_i,z_i) being canonical coordinates, and with, for k=1,2, H_1(L) = Tr (L^2) = ∑_i=1^N1/2(z_i^2+w_i^2 z_i^2 )- ∑_i=1^N-11/2 w_i z_i w_i+1 z_i+1 , H_2(L) = 23 Tr (L^3) = ∑_i=1^N1/4(z_i^2 w_i+1 z_i+1-z_i+1^2 w_i z_i + w_i^2 z_i^2 w_i+1 z_i+1 -w_i z_i w_i+1^2 z_i+1^2 ) . To obtain the latter expressions of the Hamiltonians, it suffices to use the following expression for L in the w_i,z_i coordinates L = 1/2[ w_1 z_1 z_1 0 0 … 0; z_1 w_2 z_2-w_1 z_1 z_2 0 … 0; 0 z_2 w_3 z_3-w_2 z_2 z_3 … 0; 0 0 z_3 ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ z_N; 0 0 0 … z_N -w_N z_N ] . Note that we can just as easily determine the higher Hamiltonians and hence the higher Lagrangian coefficients _k, although the expressions become lengthy. The variation δ_1 yields the following Euler-Lagrange equations ∂_t_1 w_1 = z_1 - w_12 ((w_2 z_2 - w_1 z_1) - w_1 z_1) , ∂_t_1 w_N = z_N - w_N2 ( - w_N z_N - ( w_N z_N - w_N-1 z_N-1 ) ) , ∂_t_1 w_i = z_i - w_i2 ( (w_i+1 z_i+1 -w_i z_i ) -(w_i z_i - w_i-1 z_i-1 ) ) , ∂_t_1 z_1 = z_12 ((w_2 z_2 - w_1 z_1) - w_1 z_1) , ∂_t_1 z_N = z_N2 ( -w_N z_N +(w_N z_N - w_N-1 z_N-1) ) , ∂_t_1 z_i = z_i2 ((w_i+1 z_i+1- w_i z_i) - ( w_i z_i - w_i-1 z_i-1 ) ) , for i=2, …, N-1, while the variation of _2 gives the Euler-Lagrange equations for the second flow ∂_t_2 z_1 = z_14( (w_2 z_2-w_1 z_1)^2 -(w_1 z_1)^2 + z_2^2 ) , ∂_t_2 z_N = z_N4( (-w_N z_N)^2 - (w_N z_N - w_N-1 z_N-1)^2 -z_N-1^2 ) , ∂_t_2 z_i = z_i4( (w_i+1 z_i+1-w_i z_i)^2 -(w_i z_i-w_i-1 z_i-1)^2 + z_i+1^2- z_i-1^2 ) , ∂_t_2 w_1 = z_12(w_2 z_2) -w_14( (w_2 z_2-w_1 z_1)^2 -(w_1 z_1)^2 + z_2^2 ) , ∂_t_2 w_N = z_N2(w_N-1 z_N-1) -w_N4( (w_N z_N)^2 -(w_N z_N-w_N-1 z_N-1)^2- z_N-1^2 ) , ∂_t_2 w_i = z_i2((w_i+1 z_i+1-w_i z_i)-(w_i z_i-w_i-1 z_i-1)) -w_i4( (w_i+1 z_i+1-w_i z_i)^2 -(w_i z_i-w_i-1 z_i-1)^2 + z_i+1^2- z_i-1^2 ) , with i=1,…,N-1. One can check that these reproduce the more familiar equations (<ref>)-(<ref>) in Flaschka coordinates, using (<ref>). As in the previous section, we can relate our results with the Hamiltonian formulation of the Toda chain in traditional canonical coordinates (q_i,p_i). With θ_R = ∑_i=1^N z_i δ w_i     {z_i,w_j} = δ_ij , {w_i,w_j} = 0 = {z_i,z_j} , we see that it suffices to set q_i = ∑_ℓ=i^N lnz_N2 , i = 1, …, N p_i = w_i z_i-w_i-1 z_i-12 , i = 2, …, N p_1 = w_1 z_12 , p_N+1 = -w_N z_N2 . The explicit verification of the closure relation in the first two flows is completely analogous to that given at the end of the previous section. § RATIONAL GAUDIN MODEL Gaudin models are a general class of integrable systems associated with Lie algebras with a nondegenerate invariant bilinear form. Unlike the case of the open Toda lattice, the Lax matrix of a Gaudin model is a Lie algebra-valued rational function of a variable λ, the spectral parameter. We will only look at finite Gaudin models here, which describe certain spin chains and mechanical systems. To accommodate this, we need to extend our construction to certain infinite-dimensional Lie algebras. Before diving into the required algebraic machinery, it is useful to recall the usual presentation of the equations of the model that we are aiming at describing variationally. We do so in the simplest case of a rational Lax matrix with simple poles. Many generalisations are known, including elliptic and non-skew-symmetric cases <cit.>. 
The Lax matrix of a (rational) Gaudin model associated with a finite Lie algebra and a set of points ζ_r ∈ℂ(r=1, …, N) and the point at infinity is given by the following -valued rational function L()= ∑_r=1^N X_r/λ - ζ_r + X_∞ ,  X_1,…,X_N,X_∞∈ . The coefficients H^n_k, r of (-ζ_r)^-n-1, n≥ 0, in (L()^k+1)/(k+1), k≥ 1, are Hamiltonians in involution (with respect to the Sklyanin bracket). Of course, only a finite subset of them are independent and generate nontrivial flows. In the rest of this paper, we will focus on the coefficients corresponding to n=0 and drop the extra label by simply writing H^0_k, r=H_k, r. The most famous ones are the quadratic Gaudin Hamiltonians which are the coefficients H_1, r in 1/2(L()^2) = 1/2∑_r=1^N (X_r^2)/(λ - ζ_r)^2 + ∑_r=1^NH_1, r/λ - ζ_r + 1/2(X_∞^2) , and read H_1, r=∑_s≠ r (X_rX_s)/ζ_r-ζ_s+(X_rX_∞) ,  r=1,…,N . The functions H_k, r give rise to a hierarchy of compatible equations in Lax form ∂_t_k^rL()=[M_k, r() , L()] . For k=1, we have M_1, r = -X_r/ - ζ_r , and (<ref>) gives the following equations of motion for the degrees of freedom in X_1,…,X_N,X_∞ ∂_t_1^rX_s = [X_r , X_s]/ζ_r-ζ_s , s≠ r , ∂_t_1^rX_r = -∑_s≠ r[X_r , X_s]/ζ_r-ζ_s- [X_r , X_∞] , ∂_t_1^rX_∞ = 0 . We proceed to derive a Lagrangian multiform description of the set of equations (<ref>)-(<ref>), as well as those corresponding to the next higher Hamiltonians with k=2. In principle, we could also include all higher Hamiltonians, but the first two levels are enough to illustrate our method. To do so, we need to be able to interpret L() as living in a coadjoint orbit and use the framework of Lie dialgebras. This is described in <cit.> which we now review and adapt to our purposes. Algebraic setup: Let Q = {ζ_1, …, ζ_N, ∞}⊂ℂP^1 be a finite set of points in ℂP^1 including the point at infinity, and denote by ℱ_Q() the algebra of -valued rational functions in the formal variable λ with poles in Q. Further, define the local parameters λ_r = λ - ζ_r, ζ_r ≠∞ , λ_∞ = 1/λ , and let S={1,…,N,∞}. This is to be used as an index set, so ∞ is viewed here purely as a label for an index, not as the point at infinity. For each r∈ S, consider the algebra _r of formal Laurent series in variable λ_r with coefficients in , _r = ⊗ℂ((λ_r)) , with Lie bracket [Xλ_r^i, Yλ_r^j] = [X, Y] λ_r^i+j, X,Y ∈ . We have the vector space decomposition into Lie subalgebras _r = _r+⊕_r- , where _r+=⊗ℂ[[λ_r]] , r≠∞ , _∞ +=⊗_∞ℂ[[λ_∞]] , and _r-=⊗λ_r^-1ℂ[λ_r^-1] , r≠∞ , _∞ -=⊗ℂ[λ_∞^-1] . In other words, _r+ is the algebra of formal Taylor series in _r (without constant term when r=∞) and _r- is the algebra of polynomials in _r^-1 without constant term (except when r=∞). Associated with this decomposition, we have projectors P_r± onto _r± relative to _r∓. Let us now consider _Q defined as the following direct sum of Lie algebras _Q = ⊕_r∈ S_r . The above decomposition yield the decomposition of _Q as _Q=_Q+⊕_Q-  with  _Q+ = ⊕_r∈ S_r+  and  _Q- = ⊕_r∈ S_r- , and the related projectors P_±. Although useful, as we will see below, the decomposition (<ref>) is not what we need to interpret (<ref>) within the Lie dialgebra setup. So, let us consider the map ι_λ: ℱ_Q() →_Q, f ↦(ι_λ_1 f, …, ι_λ_N f, ι_λ_∞ f ), where ι_λ_r f ∈_r is the formal Laurent series of f ∈ℱ_Q() at ζ_r ∈ℂP^1 and ι_λ_∞ f ∈_r that of f ∈ℱ_Q() at ζ_∞. This is an embedding of Lie algebras. In addition, we have the vector space decomposition _Q = _Q+⊕ι_λℱ_Q(). Let us introduce the projectors Π_± associated with this decomposition. 
They are different from P_± related to (<ref>). The following relation is useful in practical calculations (see below when computing gradients or in (<ref>)) Π_-(X)=ι_∘π_∘ P_-(X) ,  X∈_Q , where the map π_:_Q-→ℱ_Q() given by π_ (Y^1(_1),…, Y^N(_N),Y^∞(_∞)) =∑_r∈ SY^r(_r) puts elements of _Q- and ℱ_Q() in one-to-one correspondence. This amounts to decomposing an f∈ℱ_Q() into the sum of its partial fractions Y^r(_r). We define the r-matrix we need as R=Π_+-Π_- and use it to define on _Q the structure of a Lie dialgebra to which we will apply the general results of that theory. Since we want to work with rational fractions which we have naturally embedded as ι_λℱ_Q() into _Q, we need to identify the dual space this corresponds to, so that we can identify the coadjoint action and its orbits appropriately. The nondegenerate invariant symmetric bilinear form on , given by (X, Y) ↦(XY), can be used to define a nondegenerate invariant symmetric bilinear form on _Q by setting ⟨ X, Y ⟩ = ∑_r∈ SRes_λ_r = 0(X^r(λ_r) Y^r(λ_r)) . Both _Q+ and ι_λℱ_Q() are Lie subalgebras which are (maximally) isotropic with respect to the bilinear form ⟨   ,  ⟩ in (<ref>). This tells us that _Q+^*≃ι_λℱ_Q() , so that elements of _Q+^* are those we should work with if we want to deal with Lax matrices which are rational fractions of the spectral parameter. Accordingly, coadjoint orbits of _Q+ in _Q+^* are the natural arena for the description of Gaudin Lax matrices. _Q+ is the group associated with the algebra _Q+, with elements of the form φ_+=(φ_+^1(_1),…,φ_+^N(_N),φ_+^∞(_∞) ) . Each component φ_+^r(_r) is a Taylor series in the local parameter _r with values in G whose Lie algebra is , φ_+^r(_r)=∑_n=0^∞ϕ_(n)^r_r^n , φ_+^∞(_∞)=+∑_n=1^∞ϕ_(n)^∞_∞^n . As always, in practice we use the identification (<ref>) (identifying the action and coadjoint actions accordingly) and the (co)adjoint orbit of an element f∈ι_λℱ_Q() can be seen to be given by the elements F=Π_-(Ad_φ_+· f)≡ι_ L . In (<ref>), the adjoint action of φ_+ on f is defined component-wise (Ad_φ_+· f)^r(_r)=φ_+^r(_r) f^r(_r) φ_+^r(_r)^-1 ,  r∈ S . Thus, we have a construction that allows us to interpret a rational Lax matrix L() as an element of a (co)adjoint orbit and recast (<ref>) as the following Lax equation in ι_λℱ_Q() ∂_t_k^rι_ L=[R_±∇ H_k, r(ι_ L),ι_ L] , where H_k, r are the following invariant functions on _Q H_k, r: X∈_Q↦__r=0(X^r(_r)^k+1)/k+1 ,  k≥ 1 . We now apply the described framework to show how (<ref>) is derived in this context for k=1,2. Then we construct explicitly the corresponding Lagrangian coefficients of our multiform and check that their Euler-Lagrange equations produce the correct equations of motion. Lax matrix and Lax equations for the first two flows: Let us choose Λ() =∑_r=1^NΛ^r/-ζ_r+Ω , and apply (<ref>) to f=ι_Λ to get ι_ L=Π_-(Ad_φ_+·Λ) = ι_∘π_∘ P_-(Ad_φ_+·Λ) = ι_∘π_(ϕ_(0)^1 Λ_1 (ϕ_(0)^1)^-1/-ζ_1,…, ϕ_(0)^N Λ_N (ϕ_(0)^N)^-1/-ζ_N,Ω) ≡ι_∘π_(A_1/-ζ_1,…, A_N/-ζ_N,Ω) =ι_(∑_r=1^NA^r/-ζ_r+Ω) . This is the desired form of (<ref>) where now each X_r is of the form A_r=ϕ_(0)^r Λ_r (ϕ_(0)^r)^-1 with Λ_r∈ fixed and ϕ_(0)^r containing the dynamical degrees of freedom. This is the (co)adjoint description required to compute our Lagrangian coefficients, see below. Next, we derive the Lax equations in ι_λℱ_Q() associated with the functions H_k, r(ι_ L) for k=1, 2. The gradient of H_k, r at the point ι_ L is defined as the element of _Q satisfying lim_ϵ→ 0H_k, r(ι_ L+ϵη)-H_k, r(ι_ L)/ϵ=⟨η , ∇ H_k, r(ι_ L)⟩ , for all η∈_Q. 
It is enough for our purposes to calculate R_-(∇ H_k, r(ι_ L)), therefore, we can restrict η to _Q+. Thus, writing ∇ H_k, r(ι_ L)=N^(k)+ι_ h^(k) , N^(k)∈_Q+ ,  h^(k)()∈ℱ_Q() , recalling that _Q+ and ι_ℱ_Q() are isotropic with respect to the bilinear form in (<ref>), (<ref>) becomes __r=0( η_r ι__r L^k )=∑_s ∈ S__s=0( η_s ι__s h^(k)), for any η_s ∈_s+, s∈ S, implying ( ι__sh^(k))_- = 0 ,  ∀ s≠ r , (ι__rh^(k))_- = (ι__r L^k)_- . This means that the rational function h^(k)() has a nonzero principal only at ζ_r which equals (ι__r L^k)_-, so h^(k)() = (ι__r L^k)_-() , and we find R_-(∇ H_k, r(ι_ L)) = -Π_-(∇ H_k, r(ι_ L))= -ι_ h^(k) = -ι_( (ι__r L^k)_- ) . For k=1, 2, this gives us R_-(∇ H_1, r(ι_ L)) = -ι_A_r/-ζ_r , and R_-(∇ H_2, r(ι_ L)) = -ι_( A_r^2/(-ζ_r)^2 +∑_s ≠ rA_rA_s+A_sA_r/( - ζ_r)(ζ_r - ζ_s) + A_rΩ +Ω A_r/ - ζ_r) , respectively. As a consequence, we find the Lax equations for the two levels of flows as ∂_t_1^rι_ L=[-ι_A_r/-ζ_r , ι_ L] , ∂_t_2^rι_ L=[-ι_(A_r^2/(-ζ_r)^2 +∑_s ≠ rA_rA_s+A_sA_r/( - ζ_r)(ζ_r - ζ_s) + A_rΩ +Ω A_r/ - ζ_r) , ι_ L] . Explicitly, they yield the following equations on the A_s, ∂_t_1^rA_s= [A_r , A_s]/ζ_r-ζ_s ,  s≠ r , ∂_t_1^rA_r= -∑_s≠ r[A_r , A_s]/ζ_r-ζ_s- [A_r , Ω] , thus reproducing (<ref>)-(<ref>) ((<ref>) is automatic here since Ω is a constant element of ), and ∂_t_2^rA_s = - [A_r^2 , A_s]/(ζ_r-ζ_s)^2 + ∑_s^'≠ r[A_r A_s^' + A_s^'A_r, A_s]/(ζ_r-ζ_s)(ζ_r-ζ_s^') + [A_rΩ + Ω A_r , A_s]/ζ_r-ζ_s ,   s ≠ r , ∂_t_2^rA_r = ∑_s ≠ r[A_r^2 , A_s]/(ζ_r-ζ_s)^2 -∑_s≠ r∑_s^'≠ r[A_r , A_s A_s^']/(ζ_r-ζ_s)(ζ_r-ζ_s^') - ∑_s ≠ r[A_r , A_s Ω + Ω A_s]/ζ_r-ζ_s - [A_r , Ω^2] . Lagrangian description: Applying our general formula for the Lagrangian coefficients, we obtain the following multiform on the orbit of Λ(), with elements ι_ L given in (<ref>), = ∑_k=1^N∑_r ∈ S_k, r dt_k^r , with _k, r = ∑_s∈ S_λ_s = 0(ι__sL ∂_t_k^rφ_+^s(_s) φ_+^s(_s)^-1)-H_k, r(ι_ L) , where H_k, r(ι_ L) is the restriction of H_k, r to ι_L. For the kinetic part, we have _λ_s = 0(ι__sL ∂_t_k^rφ_+^s(_s) φ_+^s(_s)^-1)=(Λ_s (ϕ_(0)^s)^-1∂_t_k^rϕ_(0)^s) ,  s=1,…,N, and _λ_∞ = 0(ι__∞L ∂_t_k^rφ_+^∞(_∞) φ_+^∞(_∞)^-1)=(Ω∂_t_k^rϕ_(1)^∞ ϕ_(1)^∞)=1/2∂_t_k^r(Ω (ϕ_(1)^∞)^2). The contribution at ∞ is a total derivative, so it will not enter the Euler-Lagrange equations and hence we discard it. Thus, only the term ϕ_(0)^s in the Taylor series of φ_+^s(_s) appears in the kinetic term. We will simply denote it by ϕ^s to lighten notations. The Lagrangian coefficients of the Gaudin multiform take the form _k, r = ∑_s=1^N(Λ_s (ϕ^s)^-1∂_t_k^rϕ^s) - H_k, r(ι_ L). More explicitly, for k=1,2, we have H_1, r(ι_L) = ∑_s≠ r (A_rA_s)/ζ_r-ζ_s+(A_rΩ) , and H_2, r(ι_L) = (A_r (∑_s≠ rA_s/ζ_r-ζ_s +Ω)^2) - (A_r^2 (∑_s≠ rA_s/(ζ_r-ζ_s)^2) ) . Varying _1, r and _2, r with respect to ϕ^s, s=1,…,N (recalling that A_s=ϕ^s Λ_s (ϕ^s)^-1), one can check by direct calculations that the Euler-Lagrange equations give exactly (<ref>)-(<ref>). The algebraic framework we have used to describe the Lagrangian multiform for the Gaudin is to a very large extent similar to that used in <cit.> to construct Lagrangian multiforms of Zakharov-Mikhailov type. Therefore, in hindsight, it is perhaps not so surprising that the Lagrangian _1, r = ∑_s=1^N(Λ_s (ϕ^s)^-1∂_t_1^rϕ^s) - ∑_s≠ r (A_rA_s)/ζ_r-ζ_s - (A_rΩ) , appears to be the direct analog in the finite-dimensional case of the Zakharov-Mikhailov Lagrangians which describe integrable field theories with rational Lax matrices <cit.>. 
It is a rather satisfying outcome that we have unravelled the unifying structure underlying such Lagrangians, whether in finite or infinite dimensions. They are all connected to Lie dialgebras which control the structure of their kinetic part and tell us which potentials to include (invariant functions on ^*). Note that in <cit.>, a very similar Lagrangian, their eq. (24), was constructed by a completely different method: an adaptation of the idea of 4d Chern-Simons theory, see <cit.> and references therein, and of the construction in <cit.> to the case of a BF theory in 3d. This suggests the tantalising direction of deriving our Lagrangian multiforms from an appropriately adapted BF theory. This could perhaps offer an interpretation for the appearance of Lie dialgebras from this point of view, instead of introducing them ad hoc as we do in the present paper. We know from the general theory that the closure relation d = 0 holds on shell. This implies ∂_t_j^s_k, r - ∂_t_k^r_j, s = 0 , for all possible combinations of j,k and r,s. As we know, the kinetic and potential contributions give zero separately in each case. Let us illustrate the main steps here for k=1, j=2 and r≠ s in (<ref>), the left-hand side of which will then read ∑_s^'=1^N ∂_t_2^s(Λ_s^' (ϕ^s^')^-1∂_t_1^rϕ^s^') - ∑_s^'=1^N ∂_t_1^r(Λ_s^' (ϕ^s^')^-1∂_t_2^sϕ^s^') - ∂_t_2^s H_1, r(ι_ L) + ∂_t_1^r H_2, s(ι_ L) . Using the equations of motion, we have ∂_t_2^s H_1, r(ι_L) = ∑_s^'≠ r1/ζ_r-ζ_s^'(( - [A_s^2 , A_r]/(ζ_s-ζ_r)^2 + ∑_s^''≠ s[A_s A_s^'' + A_s^''A_s, A_r]/(ζ_s-ζ_r)(ζ_s-ζ_s^'') + [A_s Ω + Ω A_s , A_r]/ζ_s-ζ_r) A_s^')   + ∑_s^'≠ r s^'≠ s1/ζ_r-ζ_s^'(A_r ( - [A_s^2 , A_s^']/(ζ_s-ζ_s^')^2 + ∑_s^''≠ s[A_s A_s^'' + A_s^''A_s, A_s^']/(ζ_s-ζ_s^')(ζ_s-ζ_s^'') + [A_s Ω + Ω A_s , A_s^']/ζ_s-ζ_s^') )   + 1/ζ_r-ζ_s(A_r ( ∑_s^'≠ s[A_s^2 , A_s^']/(ζ_s-ζ_s^')^2 -∑_s^'≠ s∑_s^''≠ s[A_s , A_s^' A_s^'']/(ζ_s-ζ_s^')(ζ_s-ζ_s^'') - ∑_s^'≠ s[A_s , A_s^'Ω + Ω A_s^']/ζ_s-ζ_s^' - [A_s , Ω^2] ) )   +(( - [A_s^2 , A_r]/(ζ_s-ζ_r)^2 + ∑_s^'≠ s[A_s A_s^' + A_s^'A_s, A_r]/(ζ_s-ζ_r)(ζ_s-ζ_s^') + [A_s Ω + Ω A_s , A_r]/ζ_s-ζ_r) Ω) . This is seen to add up to zero by assembling the terms of the same nature (quartic, cubic or quadratic in A), manipulating the sums, using the ad-invariance property ([A,B]C)=(A[B,C]) and the identity 1/(ζ_r-ζ_s)(ζ_r-ζ_s^')+1/(ζ_s-ζ_s^')(ζ_r-ζ_s^') +1/(ζ_s-ζ_r)(ζ_s-ζ_s^')=0 . Similar calculations give ∂_t_1^r H_2, s(ι_ L)=0. For the kinetic terms we have ∂_t_2^s∑_s^'=1^N (Λ_s^' (ϕ^s^')^-1∂_t_1^rϕ^s^') - ∂_t_1^r∑_s^'=1^N (Λ_s^' (ϕ^s^')^-1∂_t_2^sϕ^s^') = ∑_s^'=1^N ((∂_t_2^sA_s^') (∂_t_1^rϕ^s^') (ϕ^s^')^-1) - ∑_s^'=1^N ((∂_t_1^rA_s^') (∂_t_2^sϕ^s^') (ϕ^s^')^-1)   + ∑_s^'=1^N ( A_s^' [(∂_t_2^sϕ^s^')(ϕ^s^')^-1 , (∂_t_1^rϕ^s^')(ϕ^s^')^-1] )   + ∑_s^'=1^N ( A_s^'((∂_t_2^s∂_t_1^rϕ^s^')(ϕ^s^')^-1 - (∂_t_1^r∂_t_2^sϕ^s^')(ϕ^s^')^-1) ). The commutativity of flows ensures that the last term equals zero. Further, using the relation ∂_t_2^sA_s^' = [(∂_t_2^sϕ^s^')(ϕ^s^')^-1 , A_s^'],  s^' = 1,…,N , it is easy to see that the first and the third terms cancel each other. 
Finally, for the second term, using ad-invariance, (<ref>) and the on-shell relations in (<ref>) and (<ref>), we have ∑_s^'=1^N ((∂_t_1^rA_s^') (∂_t_2^sϕ^s^') (ϕ^s^')^-1) = ((∂_t_1^rA_r) (∂_t_2^sϕ^r) (ϕ^r)^-1) + ∑_s^'≠ r((∂_t_1^rA_s^') (∂_t_2^sϕ^s^') (ϕ^s^')^-1) = -∑_s^'≠ r( [A_r , A_s^']/ζ_r - ζ_s^' (∂_t_2^sϕ^r) (ϕ^r)^-1) - ([A_r , Ω] (∂_t_2^sϕ^r) (ϕ^r)^-1)    + ∑_s^'≠ r([A_r , A_s^']/ζ_r - ζ_s^' (∂_t_2^sϕ^s^') (ϕ^s^')^-1) = -∑_s^'≠ r (A_s^'∂_t_2^s A_r)/ζ_r - ζ_s^' - (Ω∂_t_2^sA_r) -∑_s^'≠ r (A_r ∂_t_2^s A_s^')/ζ_r - ζ_s^' =-∂_t_2^s H_1, r(ι_L) which we showed to be zero previously. § CONCLUSION In this work, we provided an answer to the problem of constructing all the coefficients in Lagrangian 1-form for a large class of finite-dimensional integrable systems (any model fitting the Lie dialgebra framework). A reinterpretation of our construction is that we proved that any collection of compatible equations in the Lax form ∂_t_kL=[R_±∇ H_k(L),L] ,   k=1,…,N , is variational, by explicitly providing a collection of Lagrangians assembled in a multiform. The closure relation is equivalent to the involutivity of the Hamiltonians H_k. This is a corollary of our stronger result, Theorem <ref>, which establishes the off-shell origin of this equivalence by identifying the “obstruction” to ∂_k/∂ t_ℓ-∂_ℓ/∂ t_k={H_k,H_ℓ}_R in the form of Υ_k^m P_mn Υ_ℓ^n . The latter identifies the Poisson tensor P_mn of the R-bracket on the phase space as the ingredient specifying the idea of “double-zero on the equations of motions”. Of the many models we could have used to illustrate the results, we chose the open Toda chain and the Gaudin model, two emblematic finite-dimensional integrable systems. A strong motivation for studying the finite Gaudin model is Vicedo's construction of a class of non-ultralocal field theories as affine Gaudin models <cit.>. We very much hope that the present results combined with the approach of <cit.> and Vicedo's construction will allow us to overcome the current limitation of Lagrangian multiforms to only ultralocal field theories. § ACKNOWLEDGEMENTS V.C. wishes to thank T. Skrypnyk for helpful discussions on higher Gaudin Hamiltonians and B. Vicedo and M. Vermeeren for helpful feedback and comments on our draft. V.C. is grateful for M. Semenov-Tian-Shansky for pointing out that the associative YBE in Remark <ref> was already introduced in <cit.> and is related to the older notion of Rota-Baxter algebras. V.C. and M.D. would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Dispersive Hydrodynamics when initial work on this paper was undertaken (EPSRC Grant Number EP/R014604/1). A.A.S. is funded by the School of Mathematics EPSRC Doctoral Training Partnership Studentship (Project Reference Number 2704447). sn-aps-M99[LN]LN S. Lobb, F.W. Nijhoff, Lagrangian multiforms and multidimensional consistency, J. Phys. A42 (2009), 454013. [N1]N1 F.W. Nijhoff, Lax pair for the Adler (lattice Krichever-Novikov) system, Phys. Lett. A 297 (2002), 49–58. [ABS]ABS A.I. Bobenko, Y.B. Suris, Integrable systems on quad-graphs, Int. Math. Res. Not. 2002(11) (2002) 573–611. [HJN]HJN J. Hietarinta, N. Joshi, F.W. Nijhoff, Discrete Systems and Integrability, Cambridge Texts in Applied Mathematics (Cambridge University Press, 2016). [Su1]Su1 Y.B. Suris,Variational formulation of commuting Hamiltonian flows: multi-time Lagrangian 1-forms, J. Geom. Mech. 5 (2013), 365. [PS]PS M. Petrera, Y.B. 
Suris, Variational symmetries and pluri-Lagrangian systems in classical mechanics, J. 24 (2017), 121-145. [SV]SV Y. Suris, M. Vermeeren; On the Lagrangian Structure of Integrable Hierarchies, in Advances in Discrete Differential Geometry, edited by A. Bobenko, 347–78. Springer Berlin Heidelberg, 2016.. [SNC1]SNC1 D. Sleigh, F.W. Nijhoff, and V. Caudrelier A Variational Approach to Lax Representations, J. Geom. Phys. 142 (2019), 66-79. [SNC2]SNC2 D. Sleigh, F.W. Nijhoff, and V. Caudrelier Variational symmetries and Lagrangian multiforms, Lett. Math. Phys. 110 (2020), 805–826. [CS1]CS1 V. Caudrelier, M. Stoppato, Hamiltonian multiform description of an integrable hierarchy, J. Math. Phys. 61 (2020), 123506. [CS2]CS2 V. Caudrelier, M. Stoppato, Multiform description of the AKNS hierarchy and classical r-matrix, J. Phys. A54 (2021), 235204. [CSV]CSV V. Caudrelier, M. Stoppato, B. Vicedo, Classical Yang-Baxter equation, Lagrangian multiforms and ultralocal integrable hierarchies, arXiv:2201.08286 (2022). [SNC3]SNC3 D. Sleigh, F.W. Nijhoff, and V. Caudrelier Lagrangian Multiforms for Kadomtsev–Petviashvili (KP) and the Gelfand–Dickey Hierarchy, Int. Math. Res. Not. 2023, (2023), 1420–1460. [N2]N2 F.W. Nijhoff, Lagrangian 3-form structure for the Darboux system and the KP hierarchy, Lett. Math. Phys. 113:27 (2023). [N3]N3 F.W. Nijhoff, Integrable hierarchies, Lagrangian structures and non-commuting flows, M. Ablowitz, B. Fuchssteiner, M. Kruskal (Eds.), Topics in Soliton Theory and Exactly Solvable Nonlinear Equations, World Scientific (1987), 150-181. [SV2]SV2 D. Sleigh, M. Vermeeren, Semi-discrete Lagrangian 2-forms and the Toda hierarchy, J. Phys. A55 (2022), 475204. [Ver]Ver M. Vermeeren, Continuum limits of pluri-Lagrangian systems, J. Integ. Syst. 4 (2019), xyy020. [CNSV]CNSV V. Caudrelier, F. Nijhoff, D. Sleigh, M. Vermeeren, Lagrangian multiforms on Lie groups and non-commuting flows, J. Geom. Phys., 187 (2023), 104807. [PV]PV M.Petrera, M. Vermeeren, Variational symmetries and pluri-Lagrangian structures for integrable hierarchies of PDEs, Eur. J. Math. 7 (2021), 741–765. [RSTS]RSTS A. Reyman, M. Semenov-Tian-Shansky, Group-Theoretical Methods in the Theory of Finite-Dimensional Integrable Systems (1994). [FG]FG L. Feher, A. Gabor, Adler–Kostant–Symes systems as Lagrangian gauge theories, Phys. Lett. A301 (2002), 58-64. [AVV]AVV M. Adler, P. van Moerbeke, P. Vanhaecke, Algebraic Integrability, Painlevé Geometry and Lie Algebras. Springer Berlin Heidelberg (2004). [STS]STS M. Semenov-Tian-Shansky, Integrable Systems: the r-matrix Approach, RIMS-1650 (2008), Kyoto University. [BBT]BBT O. Babelon, D. Bernard, M. Talon, Introduction to classical integrable systems, Cambridge University Press, 2003. [KS]KS Y. Kosmann-Schwarzbach, Lie Bialgebras, Poisson Lie Groups, and Dressing Transformations, In: Kosmann-Schwarzbach, Y., Tamizhmani, K.M., Grammaticos, B. (eds) Integrability of Nonlinear Systems. Lecture Notes in Physics, vol 638, 2004. Springer, Berlin, Heidelberg. [Ad]Ad M. Adler, On a trace functional for formal pseudo-differential operators and the symplectic structure of the Korteweg–de Vries type equations, Invent. Math. 50 (1978), 219–248. [Kos]Kos B. Kostant, The solution to a generalized Toda lattice and representation theory, Adv. in Math. 34 (1979), 195-338. [Sy]Sy W.W. Symes, Systems of Toda type, inverse spectral problems, and representation theory, Invent. Math. 59 (1980), 13–51. [Vic]Vic B. 
Vicedo, On integrable field theories as dihedral affine Gaudin models, International Mathematics Research Notices 2020, no. 15 (2020): 4513-4601, arXiv:1701.04856 [hep-th]. [Ver2]Ver2 M. Vermeeren, Hamiltonian structures for integrable hierarchies of Lagrangian PDEs, Open Comm. Nonl. Math. Phys. 1 (2021). [S1]S1 D. Sleigh, The Lagrangian multiform approach to integrable systems, PhD thesis (2021), University of Leeds. [Skr]Skr T. Skrypnyk, Elliptic Gaudin-type model in an external magnetic field and modified algebraic Bethe ansatz, Nucl. Phys. B988 (2023), 116102. [ZM]ZM V.E. Zakharov, A.V. Mikhailov, Variational principle for equations integrable by the inverse problem method, Funct. Anal. Appl. 14 (1980), 43–44. [CY]CY K. Costello, M. Yamazaki, Gauge Theory And Integrability, III, arXiv:1908.02289. [VW]VW B. Vicedo, J. Winstone, 3-Dimensional mixed BF theory and Hitchin’s integrable system, Lett. Math. Phys. 112, 79 (2022). [CSV2]CSV2 V. Caudrelier, M. Stoppato, B. Vicedo, On the Zakharov–Mikhailov action: 4d Chern–Simons origin and covariant Poisson algebra of the Lax connection, Lett. Math. Phys. 111, 82 (2021). [STS2]STS2 M.A. Semenov-Tian-Shanskii, What is a classical r-matrix?, Funct. Anal. Its Appl. 17 (1983), 259–272.
http://arxiv.org/abs/2307.04964v2
20230711015524
Secrets of RLHF in Large Language Models Part I: PPO
[ "Rui Zheng", "Shihan Dou", "Songyang Gao", "Yuan Hua", "Wei Shen", "Binghai Wang", "Yan Liu", "Senjie Jin", "Qin Liu", "Yuhao Zhou", "Limao Xiong", "Lu Chen", "Zhiheng Xi", "Nuo Xu", "Wenbin Lai", "Minghao Zhu", "Cheng Chang", "Zhangyue Yin", "Rongxiang Weng", "Wensen Cheng", "Haoran Huang", "Tianxiang Sun", "Hang Yan", "Tao Gui", "Qi Zhang", "Xipeng Qiu", "Xuanjing Huang" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Their primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include reward models to measure human preferences, Proximal Policy Optimization (PPO) to optimize policy model outputs, and process supervision to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, AI researchers face a significant barrier to advancing technical alignment and the safe deployment of LLMs. The stable training of RLHF remains a puzzle. In this first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints as the key factor for the effective implementation of the PPO algorithm. Therefore, we explore PPO-max, an advanced version of the PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. Beyond additional qualitative results, we even find that LLMs successfully trained by our algorithm can often better understand the deep meaning of the queries, and their responses resonate with people more directly. The absence of open-source implementations has posed significant challenges to the investigation of LLM alignment. Therefore, we are eager to release technical reports, reward models and PPO codes[ <https://github.com/OpenLMLab/MOSS-RLHF>], aiming to make modest contributions to the advancement of LLMs. Disclaimer: This paper contains content that may be profane, vulgar, or offensive. § INTRODUCTION Nowadays, large language models (LLMs) have made remarkable progress, posing a significant impact on the AI community <cit.>. By scaling up model size, data size, and the amount of training computation, these LLMs exhibit prominent characteristics that are not present in small models, typically including in-context learning <cit.>, instruction following <cit.>, and step-by-step reasoning <cit.>. Based on these emergent abilities, LLMs even exhibit some potential to link words and percepts for interacting with the real world, leading to the possibilities of artificial general intelligence (AGI), like embodied language models with tool manipulation <cit.> and generative agents in interactive sandbox environments <cit.>. 
Despite the capacities, since LLMs are trained to capture the data characteristics of pre-training corpora (including both high-quality and low-quality data) <cit.>, these models are likely to express unintended behaviors such as making up facts, generating biased or toxic text, or even harmful content for humans <cit.>. Accordingly, it is crucial that the ratio of safety progress to capability progress increases as emphasized in OpenAI's plan for AGI <cit.>. Hence, it is necessary to align LLMs with human values, e.g., helpful, honest, and harmless (3H) <cit.>. Especially, the arrival of open source foundation models, such as LLaMA <cit.> and OpenChineseLLaMA <cit.>, has rapidly promoted the LLMs into the supervised fine-tuning (SFT) stage. In order to mitigate a huge risk of harmfulness, most of the current work tries to add some 3H data in SFT, hoping to activate the responses of the models to make a positive change at the moral and ethical level <cit.>. However, even though a set of safety and groundedness objectives are added to capture the behavior that the model should exhibit in a dialog <cit.>, the model’s performance remains below human levels in safety and groundedness <cit.>. Hence, it requires more effective and efficient control approaches to eliminate the potential risk of the use of LLMs. Fortunately, OpenAI and Anthropic have verified that RLHF is a valid avenue for aligning language models with user intent on a wide range of tasks <cit.>. However, training large language models that align with human values is a daunting task, often resulting in repeated failure when trained using reinforcement learning <cit.>. Generally speaking, successful RLHF training requires an accurate reward model as a surrogate for human judgment, careful hyperparameter exploration for stable parameter updating, and a strong PPO algorithm for robust policy optimization. While the reward model trained by low-quality data and hard-to-define alignment target can easily mislead the PPO algorithm to a unintelligible direction. Besides, finetuning language models with PPO needs to coordinate four models to work together, i.e., a policy model, a value model, a reward model, and a reference model, making it hard to train and scale up to large-scale parameter models. In the new language environment, PPO suffers from sparse reward and inefficient exploration in word space, making it sensitive to hyperparameters. Models trained solely through repeated experiments, failed runs, and hyperparameter sweeps achieve far inferior results. The huge trial and error cost of LLMs makes researchers dare not easily let the research enter the RLHF stage, which hinders the LLMs safe landing. Hence, a robust PPO algorithm specially designed for LLMs is the key step to align human preferences. In this report, we carefully dissect the framework of RLHF and discuss the entire process that determines the success of the algorithm's training. We explored how the quality of the reward model affects the final result of the policy model. We find that the quality of the reward model directly determines the upper bound of the policy model, and designing an appropriate PPO algorithm is crucial for RLHF's successful training. Moreover, accurate code implementation matters in deep policy (practice makes perfect). Therefore, we have conducted in-depth evaluations of the inner workings of PPO algorithm to study how code-level and theory-level optimizations change agent training dynamics. 
We propose to monitor the PPO training process by using action space modeling metrics derived from the policy model, such as perplexity, response length, and KL divergence between the policy model and the SFT model. These metrics are more informative of the training stability than the values of response reward and loss functions. Based on these observations, we identify the policy constraints in the PPO algorithm as the key factor to achieve consistent alignment with human preferences. After extensive comparative experiments with various possible implementations of PPO framework, we finally introduce a preferable policy optimization algorithm named PPO-max, which incorporates the collection of effective and essential implementations, and is carefully calibrated to avoid interference among them. PPO-max alleviates the instability of vanilla PPO training and enables longer training steps with a larger training corpus. We evaluate PPO-max on 7B and 13B SFT models, demonstrating comparable alignment performance with ChatGPT. Contributions are summarized as follows: 1) we release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data; 2) we conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training; and 3) we release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans. § RELATED WORK Despite the promising capacities, LLMs are likely to express unintended behaviors such as making up facts, generating biased or toxic text, or even harmful content for humans <cit.> due to the low-quality pre-training data. Hence, it is necessary to align LLMs with human values, e.g., helpful, honest, and harmless (3H) <cit.>. In order to mitigate a huge risk of harmfulness, most of the current work tries to involve 3H data in SFT, hoping to activate the responses of the models to make a positive change at the moral and ethical level <cit.>, while the model’s performance remains below human levels in safety and groundedness <cit.>. Hence, more effective and efficient control approaches are required to eliminate the potential risk of LLMs. Fine-tuning language models to align with human preferences provides an effective solution to this challenge, where an agent is required to learn human preferences and provide human-like results given a context and corresponding suffixes ranked or scored by human annotators. Reinforcement Learning (RL) provides the most straightforward solution to reach this goal, for the agent needs just scarce supervision signal from the reward model as human proxies, and is modified through numerous trials under RL framework, namely Reinforcement Learning from Human Feedback (RLHF). There have been many attempts on this path recently <cit.>. In the context of large language models, RLHF is especially adopted for the purpose of a helpful, honest, and harmless LLM that aligns with human values <cit.>, alleviating the negative societal impacts from general-purpose language models. LaMDA <cit.> finetunes large language models to participate in interesting, helpful, factually grounded, and safe natural language dialogue and use of external information to ensure accuracy and groundedness. Rather than using reinforcement learning, they apply a mix of supervised learning techniques for human preference alignment. 
InstructGPT <cit.> finetunes GPT-3-type models <cit.> to improve helpfulness, which is mixed with RL from human preferences expressed through comparisons. <cit.> adopts the pre-training and fine-tuning tradition to train the preference model for human alignment, claiming that ranked preference modeling turns out to be the most effective training objective for distinguishing between “good” and “bad” behavior. This attempt is further improved by an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, and PPO is incorporated to stabilize RL training <cit.>. Despite its effectiveness, RLHF (especially PPO) exhibits complexity, instability, and sensitivity to hyperparameters, which is not yet addressed in previous works. Under similar concerns, several works highlighted the importance of PPO for RL framework and made an attempt to improve its efficiency <cit.>. <cit.> reveals that much of the observed improvement in reward brought by PPO may come from seemingly small modifications to the core algorithm (i.e. code-level optimizations). <cit.> further points out that a large number of low- and high-level design decisions of RL are usually not discussed in research papers but are indeed crucial for performance. As a result, <cit.> conducts a fair comparison among low-level designs based on a unified RL implementation and claims that the policy initialization scheme significantly influences the performance. Despite the efforts of revealing the importance of PPO and its recommended implementation, few attempts have been made to address the problem of instability and sensitivity to hyperparameters. In this paper, we dissect the framework of RLHF, especially shedding light on the inner workings of PPO, and explore an advanced version of the PPO which efficiently improves the training stability of the policy model. § REINFORCEMENT LEARNING FROM HUMAN FEEDBACK The training process of AI assistant comprises three main stages: supervised fine-tuning (SFT), reward model (RM) training, and proximal policy optimization (PPO) on this reward model. During the SFT phase, the model learns to engage in general human-like dialogues by imitating human-annotated dialogue examples. Subsequently, the reward model is trained, in which the model learns to compare the preference of different responses based on human feedback. Lastly, in the PPO phase, the model is updated based on feedback from the reward model, striving to discover an optimized policy through exploration and exploitation. In the RLHF process, we mainly consider the stages of RM training and reinforcement learning via PPO. The PPO algorithm follows a series of steps as depicted in Figure <ref>. §.§ Reward Modeling For the RM architecture, we use pre-trained transformer-based language models with the last unembedding layer removed and add an additional linear layer to the final transformer layer. Given any text, the reward model will assign a scalar reward value to the last token, and the larger the reward value, the better the sample. Following Stiennon et al. <cit.>, training reward models often involves utilizing a dataset comprised of paired comparisons between two responses generated for the same input. The modeling loss for each pair of preferred and dispreferred samples is: ℒ (ψ) = logσ(r(x, y_w) - r(x, y_l)), where σ is the sigmoid function. 
r represents the reward model with parameters ψ, and r(x,y) is a single scalar predicted reward for input prompt x and response y. Additionally, we follow <cit.> in using imitation learning, which introduces an autoregressive LM loss on the preferred response of each pair, allowing the model to imitate the preferred response in each sentence pair. In practice, we weight the LM loss with a coefficient β_rm. Finally, we define the following reward modeling loss: ℒ (ψ) = - λ𝔼_(x, y_w, y_l) ∼𝒟_𝓇𝓂 [logσ(r(x, y_w) - r(x, y_l))] - β_rm𝔼_(x, y_w) ∼𝒟_𝓇𝓂 [log r'(x, y_w)], where λ weights the ranking term, 𝒟_𝓇𝓂 is the empirical distribution of the training set, r' is the same model as r except for the top linear layer, whose dimension corresponds to the vocabulary size, and r'(x,y_w) is the likelihood of the preferred response y_w given the prompt x. We incorporate an extra term into the reward function, which introduces a penalty based on the Kullback-Leibler (KL) divergence between the learned RL policy π^RL_ϕ and the initial supervised model π^SFT. The total reward can be expressed as <cit.>: r_total = r(x,y)- ηKL(π^RL_ϕ(y|x),π^SFT(y|x)), where η is the KL reward coefficient and controls the strength of the KL penalty. This KL divergence term plays two significant roles within this context. First, it functions as an entropy bonus, fostering exploration within the policy landscape and preventing the policy from prematurely converging to a single mode. Second, it works to ensure that the RL policy's output does not deviate drastically from the samples that the reward model encountered during its training phase.
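The following is a minimal PyTorch sketch of the pairwise reward-modeling loss and the KL-penalized total reward described above. It is an illustration under stated assumptions rather than the released implementation: the function names, the use of the sampled-token log-probability difference as a KL estimate, and the default coefficients are ours.

```python
# Minimal sketch of the pairwise reward-modeling loss with an auxiliary LM term
# and the KL-penalized total reward. Names and defaults are illustrative only.
import torch
import torch.nn.functional as F

def reward_modeling_loss(chosen_reward, rejected_reward,
                         lm_logits, chosen_labels, lam=1.0, beta_rm=1.0):
    """chosen_reward / rejected_reward: (batch,) scalar rewards at the last token.
    lm_logits: (batch, seq, vocab) logits of the same backbone on the preferred response."""
    # Ranking term: -log sigmoid(r(x, y_w) - r(x, y_l))
    rank_loss = -F.logsigmoid(chosen_reward - rejected_reward).mean()
    # Imitation term: autoregressive LM loss on the preferred response
    lm_loss = F.cross_entropy(
        lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
        chosen_labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
    return lam * rank_loss + beta_rm * lm_loss

def kl_penalized_reward(reward, policy_logprob, sft_logprob, eta=0.05):
    """Per-token total reward r_total = r(x, y) - eta * KL(pi_RL, pi_SFT),
    using the sampled-token log-prob difference as a Monte Carlo KL estimate."""
    return reward - eta * (policy_logprob - sft_logprob)
```

In practice both reward calls share one transformer backbone with a scalar head, and the reward is read off at the final token of each sequence, as described at the start of this subsection.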
§.§ Reinforcement Learning Applying RL to dialogue generation presents significant challenges due to the substantial state-action space. In this context, we consider human interaction as the “environment”. At each timestep, t, the agent (i.e., the AI assistant) receives a state s_t from the environment (i.e., the dialogue history), which consists of all the dialogue text up to this point, both by the assistant and the human. Then, based on its policy π, the agent's action a_t is to generate the next token. The environment returns a reward r(s_t, a_t), which is calculated from a reward function r trained from human preference data. The agent then transitions to the next state s_t+1, which includes the next dialogue history. The aim of RL is to find an optimal behavior strategy for the agent to maximize the cumulative reward (i.e., return) over a trajectory τ={s_1,a_1,…,s_T,a_T}. One kind of return is the finite-horizon undiscounted return R(τ)=∑_t=1^T' r(s_t,a_t), which is simply the sum of rewards accumulated within a fixed number of steps. Another is the infinite-horizon discounted return R(τ)=∑_t=0^∞γ^t r(s_t, a_t), which takes into account all rewards obtained by the agent throughout its entire trajectory with a discount factor γ∈ (0,1). §.§.§ Policy Gradient Methods Policy gradient methods <cit.> are a type of RL technique that directly optimizes the policy of the agent (the mapping of states to actions) instead of learning a value function as in value-based methods. The central idea behind policy gradient methods is to improve the policy using the gradient ascent algorithm. In essence, these methods adjust the parameters of the policy in the direction that maximally improves the expected return. The policy π is typically parameterized by θ; we denote it as π(a|s,θ), which is the probability of taking action a in state s. The update rule for the policy gradient is given as: θ←θ + α∇_θ J(θ), where α is the learning rate, J(θ) represents the expected return when following policy π_θ, and the gradient of policy performance ∇_θ J(θ) is called the policy gradient. A general form of policy gradient can be formulated as: ∇_θ J(θ) = 𝔼_τ∼π_θ[ ∑_t=0^T ∇_θlogπ_θ(a_t|s_t)Φ_t ], where Φ_t could be any of Φ_t = R(τ) or Φ_t = ∑_t^'=t^T R(s_t^', a_t^') or Φ_t = ∑_t^'=t^T R(s_t^', a_t^') - b(s_t) with baseline b. All of these choices lead to the same expected value for the policy gradient, despite having different variances. The return is calculated through Monte Carlo sampling. If the return is favorable, all actions are “reinforced” by increasing their probability of being selected. The advantage of this approach lies in its unbiased nature, as we rely solely on the actual return obtained rather than estimating it. However, a challenge arises due to the high variance associated with this method. This variance stems from the fact that different trajectories can result in diverse returns due to the stochasticity of the environment (random events during an episode) and the policy itself. To reduce this variance, a common strategy is to use advantage function estimates in place of raw returns in the policy gradient update rule. The advantage function A(s_t, a_t) represents how much better it is to take a specific action a_t at state s_t, compared to the average quality of actions at that state under the same policy. Thus, Φ_t = A(s_t, a_t). Mathematically, A(s_t, a_t) = Q(s_t, a_t) - V(s_t), where Q(s_t, a_t) is the action-value function, representing the expected return after taking action a_t at state s_t, and V(s_t) is the value function, representing the average expected return at state s_t. The application of policy gradients with advantage functions forms a crucial backbone in the realm of RL. However, the estimation methods for the advantage function vary significantly across different algorithms, thereby creating a landscape of diverse approaches. In the next section, we introduce Generalized Advantage Estimation (GAE) <cit.>, a method that is foundational to policy optimization algorithms and has seen widespread use. §.§.§ Generalized Advantage Estimation The advantage function, A, is defined as the difference between the Q function (the expected return) and the value function (the expected return from following the policy from a given state). The Q function considers a specific action, while the value function averages over all possible actions according to the policy. However, in practice, we use returns (sum of rewards) from actual episodes to estimate the Q function. This introduces a high amount of variance because future rewards can be very noisy. One way to reduce this noise is by estimating future returns (after time step t) using the value function. The GAE algorithm effectively acts as a middle ground between using simple one-step Temporal Difference (TD) returns and using full Monte Carlo returns, balancing bias and variance. The following is a layman-friendly explanation of how GAE is derived. The TD-k return R̂_t^k is a combination of actual rewards and estimated returns: R̂_t^k = r_t + γ r_t+1 + … + γ^k-1 r_t+k-1 + γ^k V(s_t+k), where γ is the discount factor.
The advantage estimate using TD-k returns is called the k-step advantage, defined as: Â_t^k = R̂_t^k - V(s_t) = ∑_l=0^k-1γ^l δ_t+l = -V(s_t) + r_t + γ r_t+1 + ⋯ + γ^k-1 r_t+k-1 + γ^k V(s_t+k), where δ_t=r_t+γ V(s_t+1)-V(s_t) is the TD error. There is a significant bias-variance trade-off with k-step advantages. If k is small, the bias is high because the advantage estimate is based on fewer steps and thus depends heavily on the accuracy of the value function. On the other hand, if k is large, the variance can be high because the advantage estimate involves summing up many noisy rewards. In order to balance the bias-variance trade-off in the advantage estimation, GAE defines the advantage function as an exponential moving average of k-step advantages, with weights (1-λ)λ^(k-1): Â_t^GAE(γ,λ) = (1-λ)(Â^(1)_t+λÂ^(2)_t+λ^2Â^(3)_t+⋯) = (1-λ)(δ_t + λ(δ_t+γδ_t+1) +λ^2(δ_t+γδ_t+1+γ^2δ_t+2)+…) = (1-λ)(δ_t(1+λ+λ^2+…)+γδ_t+1(λ+λ^2+λ^3+…) + γ^2 δ_t+2(λ^2+λ^3+λ^4+…)+… ) = (1-λ)(δ_t (1/1-λ)+γδ_t+1 (λ/1-λ)+γ^2δ_t+2 (λ^2/1-λ)+…) = ∑^∞_l=0(γλ)^l δ_t+l. This definition of GAE smoothly interpolates between a low-variance, high-bias estimator (λ=0) and a low-bias, high-variance estimator (λ=1), effectively managing the trade-off: GAE(γ,0): Â_t=δ_t= r_t+γ V(s_t+1)-V(s_t). GAE(γ,1): Â_t=∑_l=0^∞γ^lδ_t+l=∑_l=0^∞γ^l r_t+l - V(s_t). Through GAE, we can obtain an accurate estimate Â_t of the advantage function A(s_t, a_t). This estimate plays a crucial role in constructing a policy gradient estimator: ∇_θĴ(θ) = 1/|𝒟|∑_τ∈𝒟∑_t=1^T ∇_θlogπ_θ(a_t|s_t) Â_t, where 𝒟 is a finite batch of samples; we will use 𝔼̂_t to denote the aforementioned 1/|𝒟|∑_τ∈𝒟∑_t=1^T. §.§.§ Proximal Policy Optimization PPO and TRPO <cit.> are two pivotal techniques in RL, aimed at effectively training a policy without jeopardizing its stability. The underlying intuition for these methods is the idea of “small, stable steps”: a philosophy of gently nudging the policy towards optimization, rather than forcing aggressive updates that might destabilize the overall learning process. In traditional RL, the principle of policy gradient mandates that new and old policies remain close in the parameter space. However, this proximity in parameter space does not necessarily equate to similar performance, and a slight variance in parameters can drastically impact the effectiveness of the policy. Furthermore, if a large, unrestrained step is taken, it can lead to a collapse in policy performance, a scenario often described as “falling off the cliff”. This inherent risk is a limiting factor in terms of sample efficiency in vanilla policy gradients. Instead of being confined by parameter closeness, TRPO introduces a different kind of constraint on policy updates. It regulates the change in policies by ensuring that the KL divergence remains within an acceptable limit: maximize_θ 𝔼̂_t [π_θ(a_t|s_t)/π_θ_old(a_t|s_t)Â_t ], subject to 𝔼̂_t [KL(π_θ_old(·|s_t), π_θ(·|s_t)) ] ≤δ, where θ_old denotes the policy parameters before the update. There are two primary variants of PPO: PPO-Penalty and PPO-Clip. While TRPO puts a hard constraint on the KL divergence to prevent harmful updates, PPO-Penalty addresses the corresponding unconstrained optimization problem by employing a penalty-based approach instead of constraints: ℒ_𝓅𝓅ℴ-𝓅ℯ𝓃𝒶𝓁𝓉𝓎(θ)= 𝔼̂_t [π_θ(a_t|s_t)/π_θ_old(a_t|s_t)Â_t ] - βKL(π_θ_old(·|s_t), π_θ(·|s_t)), with penalty factor β.
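Before turning to the clipped surrogate objective, the GAE computation derived above can be summarized in a short sketch. This is a minimal NumPy illustration of the standard backward recursion Â_t = δ_t + γλÂ_t+1; the array shapes, bootstrap convention, and default γ, λ values are our assumptions rather than the exact implementation used in the experiments.

```python
# Minimal sketch of Generalized Advantage Estimation via the backward recursion
# A_t = delta_t + gamma * lambda * A_{t+1}. Illustrative only.
import numpy as np

def compute_gae(rewards, values, gamma=0.99, lam=0.95):
    """rewards: r_0..r_{T-1}; values: V(s_0)..V(s_T), with one extra bootstrap value."""
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD error
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    returns = advantages + values[:T]   # targets R_hat_t for the value function
    return advantages, returns

# lam=0 reduces to one-step TD advantages; lam=1 to Monte Carlo returns minus V.
adv, ret = compute_gae(np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.2, 0.3, 0.0]))
```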
Clipped Surrogate Objective. PPO-Clip attempts to keep the new policy close to the old policy, but instead of putting a constraint on the KL divergence like TRPO, it uses a clipped version of the policy ratio in its objective. The objective function is expressed as: ℒ_𝓅𝓅ℴ-𝒸𝓁𝒾𝓅(θ)= 𝔼̂_t [min(π_θ(a_t|s_t)/π_θ_old(a_t|s_t)Â_t, clip(π_θ(a_t|s_t)/π_θ_old(a_t|s_t), 1-ϵ, 1+ϵ)Â_t ) ], where π_θ(a_t|s_t)/π_θ_old(a_t|s_t) is the ratio of the new policy's probability over the old policy's probability and ϵ is a hyperparameter that determines how much the new policy can deviate from the old policy. The clip function limits the ratio π_θ(a_t|s_t)/π_θ_old(a_t|s_t) to the interval [1-ϵ, 1+ϵ]. The clipping acts as a regularizer, limiting the extent to which the policy can change drastically from one iteration to the next. Preventing overly large policy updates ensures the learning process's robustness while maintaining more sample-efficient learning than vanilla policy gradient methods. Value Function Estimation. In the PPO algorithm, the critic model, often referred to as the value function, estimates the expected return for each state. The learning objective of this model is to minimize the discrepancy between its predicted values and the actual return values. The loss function of the critic model is commonly defined using the Mean Squared Error (MSE), given by the following formula: ℒ_𝒸𝓇𝒾𝓉𝒾𝒸(ϕ) = 𝔼̂_t[‖ V_ϕ(s_t) - R̂_t ‖^2 ]. Here, V_ϕ(s_t) represents the critic model's predicted value for state s_t with parameters ϕ, and R̂_t represents the actual return value for state s_t, which can be estimated as: R̂_t=∑^∞_l=0γ^l r_t+l. Mixing Pretraining Gradients. To mitigate potential degradation in the model's language skills and knowledge retention during PPO, we also explore the incorporation of pretraining data into the RL phase. The models utilizing this method are denoted as “PPO-ptx”, and the combined objective function is as follows <cit.>: ℒ_𝓅𝓅ℴ-𝓅𝓉𝓍(θ) = ℒ_𝓅𝓅ℴ-𝒸𝓁𝒾𝓅(θ) + λ_ptx𝔼_x ∼𝒟_pretrain[log(π^RL_θ (x))], where λ_ptx is the pretraining loss coefficient and 𝒟_pretrain is the pretraining data distribution.
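As a compact reference for the three objectives above (clipped surrogate, critic loss, and the PPO-ptx mixture), the following is a minimal PyTorch sketch. The function names, the sign conventions (all terms written as losses to be minimized), and the default coefficients are our illustrative choices, not the released implementation.

```python
# Sketch of the clipped surrogate, critic (value) loss, and pretraining-mixed
# objective described above; names and defaults are illustrative.
import torch

def ppo_clip_loss(logprob_new, logprob_old, advantages, eps=0.2):
    ratio = torch.exp(logprob_new - logprob_old)          # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()          # negate: maximize surrogate

def critic_loss(values, returns):
    return torch.mean((values - returns) ** 2)            # MSE against return targets

def ppo_ptx_loss(policy_loss, pretrain_logprobs, lambda_ptx=0.1):
    # Add the pretraining LM log-likelihood term (negated, since we minimize).
    return policy_loss - lambda_ptx * pretrain_logprobs.mean()
```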
§ REWARD MODELING FOR HELPFULNESS AND HARMLESSNESS The reward model is trained to reflect human preferences. In principle, we could directly fine-tune the model using reinforcement learning with human annotations. However, due to constraints on annotator workload and time, it is infeasible for humans to provide sufficient feedback before each optimization iteration. Therefore, a more effective way involves training a reward model (RM), which aims to emulate the evaluation process performed by humans. In this section, we first cover the technical details of the RM, then present the performance of the RMs we use, and finally report how the performance changes during training. §.§ Models and Datasets For English, we start with the original LLaMA-7B <cit.>, which has a decoder-only architecture. We use 160k pairwise samples of the HH-RLHF dataset <cit.>, which consists of 118k helpful and 42k harmless instances, as the training set. From the remaining 8.5k samples, we randomly select approximately 0.7k helpful and 0.3k harmless examples, 1k in total, as the test set, and the rest is used as the validation set during training. For Chinese, we use OpenChineseLLaMA <cit.>. It is developed through incremental pre-training on Chinese datasets, building upon the foundation of LLaMA-7B, which significantly improves its understanding and generation abilities in Chinese. We hired professional annotators to manually label 39k pairwise samples, including 31k helpful and 8k harmless samples. We constructed the training set by randomly sampling 24k helpful and 6k harmless instances, and then allocated 2.4k helpful and 0.6k harmless samples from the remaining data at random to form the test set. The rest is used for validation. §.§ Training Setup This section introduces the training implementation for the RM. The learning rate is set to 5e-6 with a warmup over the first 10% of steps. We use a dynamic batch method instead of a fixed value, which balances the number of tokens in each batch as much as possible for a more efficient and stable training phase. The batch size changes according to the number of tokens in a batch, with a maximum of 128 and a minimum of 4. We fixed the number of training steps to 1000, approximately 1.06 epochs over the whole training set. We set β_rm=1, the LM loss weight, for training our reward model throughout the entire experiment. §.§ HH Evaluation Results In this section, we present the HH evaluation results of our RM. We primarily analyze the trained reward model with the test set introduced in Sec. <ref>, which comprises 0.9k samples of HH-RLHF for English and 3k samples drawn from the dataset labeled by our annotators for Chinese. We feed the test input into our RM and obtain the reward values of the preferred and dispreferred responses respectively, and then subtract them to get the difference score. Figure <ref> shows the distribution of the difference score. Both models exhibit a degree of alignment with human preferences, with the RM trained on the Chinese data we constructed by hiring annotators showing substantial consistency with human judgments. We examined several samples from the test dataset that displayed the most significant disparities between the model and human preferences. For the Chinese test data, we observed that for each pair the response to which the RM gave a higher reward was notably longer than the human-preferred one, while more or less involving fabricated facts and false claims. In the case of the English test data, we noticed that the model assigned lower scores to responses that acknowledged the lack of information, which were characterized by their honesty but lacked helpfulness. Conversely, responses that appeared correct and helpful but contained deceptive information misled our RM into assigning high rewards. We provide such an example in Chinese and English respectively in Table <ref>. §.§ Training Performance In this section, we show the performance changes during the training process. Specifically, Figure <ref> shows the trend of the training loss of the PM. We can see that the accuracy of the RM trained on the Chinese dataset is higher than that of the English one because the Chinese dataset we constructed exhibits a significant disparity between the better and worse responses in most pairs. In contrast, many English pairs show similar levels of quality, which poses a greater challenge for the RM in determining the superiority or inferiority of responses, so that the model faces difficulty in modeling the differential features between the two responses. As a result, training and testing accuracy on the English dataset is expected to be lower. Besides, we find that the rate of improvement slows down significantly after 200 steps for both models, approximately equivalent to 0.2 epochs, and the accuracy at that point is comparable to that obtained after training for a complete epoch.
However, when utilizing the 200-step model as the initialization for PPO, we observe unsatisfactory performance. Thus, accuracy alone is insufficient as a criterion for the RM. § EXPLORATION OF PPO Proximal Policy Optimization (PPO) <cit.> is the core algorithm for achieving alignment with human preferences. The performance of PPO is influenced by multiple factors in practical applications. Some prior works have summarized possible tricks that may be necessary and effective in the field of reinforcement learning <cit.>, but how to stabilize RLHF training with language models remains unknown. We aim to explore which tricks are critical, and which metrics can reflect the model state during and after RLHF training. We first introduce the metrics that are instructive during the training process, and then the training trajectories and effects under different implementations to reveal the core tricks in RLHF. We use PPO-max to denote the most suitable implementation we find for the language model. §.§ Models and Training Setup The training implementations for the preference model (PM) and the PM dataset are introduced in Sec. <ref>. In this section, we introduce the models' initialization and the hyper-parameter details used in exploring PPO. We verified a number of methods in reinforcement learning to ensure stable convergence and better results for the PPO training phase. To improve experimental efficiency, these experiments are mainly conducted on a randomly selected subset of our Chinese data and are not trained to optimal results once we have observed enough information to analyze the compared methods. As shown in Sec. <ref>, four models need to be loaded during the PPO training phase. We initialize both the reference model and the policy model from a 7B SFT model. The SFT model is obtained by supervised fine-tuning of OpenChineseLLaMA for 2 epochs on 1M filtered instruction samples (containing 400K single-round instruction samples and 600K multi-turn instruction samples). We set a learning rate of 9.5e-6 and a cosine learning rate schedule. The learning rate eventually decays to 10% of the peak learning rate. The global batch size is set to 1024. We initialize both the critic model and the reward model used in PPO from the trained reward model. We train the models on a manually constructed HH dataset containing 8k harmless queries and 20k helpful queries, and we fix the number of steps instead of the number of epochs. In all experiments, we set a batch size of 128 for sampling from the environment and a batch size of 32 for training the policy model and the critic model. The learning rates of the policy model and the critic model are set to 5e-7 and 1.65e-6, respectively, with a warmup over the first 10% of steps. All of the experiments are conducted on identically configured machines. Each machine contains eight 80G A100 GPUs, 1TB of RAM, and 128 CPUs. We use ZeRO-2 and gradient checkpointing to save on GPU memory cost in the training phase. §.§ Evaluation Metrics for Monitoring the Training Process We aim to identify metrics that reflect the quality of PPO training; such metrics help track the helpful, honest, and harmless capability of policy models without resorting to manual (or GPT-4) evaluation. We found it challenging to accurately distinguish the merits of two models with similar abilities, but it is indeed feasible to observe training stability and promptly identify serious deviations. Various metric curves obtained when continuously optimizing the policy model with the vanilla PPO implementation are shown in Figure <ref>.
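The monitored quantities used throughout this section (policy perplexity, KL divergence from the SFT model, and mean response length) can be computed per batch with a few lines of bookkeeping. The sketch below is an illustration of that bookkeeping, not the exact logging code; the tensor layout, masking convention, and the Monte Carlo KL estimate are our assumptions.

```python
# Illustrative per-batch monitoring metrics for PPO training: perplexity,
# KL(policy || SFT) estimated on sampled response tokens, and response length.
import torch

def ppo_monitor_metrics(policy_logprobs, sft_logprobs, response_mask):
    """policy_logprobs, sft_logprobs: (batch, seq) log-probs of the sampled tokens
    under the policy and the frozen SFT model; response_mask: 1 on response tokens."""
    n_tokens = response_mask.sum()
    # Perplexity of the policy on its own sampled responses.
    nll = -(policy_logprobs * response_mask).sum() / n_tokens
    perplexity = torch.exp(nll)
    # Monte Carlo estimate of the per-token KL divergence from the SFT model.
    kl = ((policy_logprobs - sft_logprobs) * response_mask).sum() / n_tokens
    # Average generated response length in tokens.
    avg_length = response_mask.sum(dim=1).float().mean()
    return {"perplexity": perplexity.item(),
            "kl_to_sft": kl.item(),
            "response_length": avg_length.item()}
```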
We first introduce the pattern collapse phenomenon in vanilla PPO training, which means that SFT models are over-optimized and exhibit highly biased behavior. A reasonable policy model is expected to be consistent with human preferences in the distribution of dialogue variety in the real world (e.g., data not seen in training the reward model). However, we observe that the trained policy model has a tendency to cheat the reward model through specific patterns for anomalous higher scores. The training trajectories on reward score and training loss of vanilla PPO are illustrated at the top of Figure <ref>. We observed stable convergence processes in training loss, but higher rewards do not reflect better policy behaviors from the perspective of human and GPT-4 evaluation. This means that the reward scores and training losses do not indicate whether the PPO is optimizing correctly. In vanilla PPO training, the response rewards of policy model gradually deviate from the original distribution and exhibit long-tail characteristics. We show the distribution of response rewards under different training steps in the Appendix <ref>. An empirical strategy is to compare the training process of good and bad policy models to find suitable metrics. We show more indicative training metrics at the bottom of Figure <ref>, including perplexity, KL divergence between the policy and reference models, and the average length of generation responses. Previous work proposed an approximate linear relationship between the root KL and PM scores <cit.>, but for smaller models, such an association appeared to be weak. We find the model response falls into the OOD region of preference model when the original policy is over-optimized. We will further discuss this scaling effects in the next section. We simultaneously observe that the collapsed model uniformly delivers longer responses and exhibits lower perplexity for such generative patterns. We use these metrics to show the importance of different tricks and their impact on PPO training in section <ref>. §.§ Implement Details in PPO We propose the instability and pattern collapse problem of the primitive PPO algorithm in sec <ref>. Such sensitivity derives from the over-optimization of the policy model which traps it into fixed generative patterns. Recent works have explored the implementation details of PPO algorithms in different scenarios. However, the application scenarios and data structures of traditional RL are quite different from RLHF. We determined to verify the applicability of these tricks in language model training and propose a set of PPO implementations that support stable optimization. We mainly focus on methods that efficiently assist PPO training and their parameter sensitivity in the body of this paper. Figure <ref> illustrates numerous available tricks in PPO training, we first summarize the score reparameterization method (§5.3.1), followed by the optimization constraints for policy model (§5.3.2), and finally we present the different initialization methods for policy and critic models (§5.3.3). More experiments on hyper-parameter tuning and tricks that are verified as less critical are discussed in the appendix, such as advantage estimation function and gradient clipping. In the following, it always refers to our own experiments when we mention PPO if not specifically stated. §.§.§ Score Reparameterization We use the term “score” to refer to the two vital intermediate variables involved in PPO training. 
The reward score is given by the reward model trained with human preference data, and the advantage score is calculated by the GAE function. According to existing works, reparameterizing these scores to a stable distribution (e.g., a standard normal distribution) may improve the stability of PPO. The reported operations are divided into three parts for verification. We use {r(x,y)}≜{r_n(x,y)}_n=1^ℬ to denote a reward sequence in training, r_n(x,y) to denote the per-batch reward, and A̅ and σ(A) to denote the mean and standard deviation of variable A, respectively. Comparative experiments with different tricks and hyperparameters are shown in Figure <ref>. Reward Scaling controls training fluctuations by scaling the rewards, where the rewards are divided by the standard deviation of a rolling discounted sum. Based on the observation history, the reward for the current state can be expressed as r_n(x,y) / σ(r(x,y)). In contrast to the experimental results of Engstrom <cit.>, we show that reward scaling does not guide proper policy optimization, and PPO exhibits consistent patterns in training trajectories with and without reward scaling. In our experiments, we believe that tighter constraints are required to ensure training stability. Reward Normalization and Clipping was first proposed by Mnih <cit.>. The processed reward can be denoted as: r̃(x,y) = clip((r_n(x,y) - r̅(x,y))/σ(r(x,y)), -δ, δ), where δ denotes the clip region. It is generally believed in traditional RL that reward clipping is ineffective or even detrimental in certain scenarios <cit.>. However, we find that strict advantage clipping can also maintain training stability within a fixed epoch. Interestingly, hyperparameter tuning does not affect the similarity of the different methods in the early training period, and models with larger clipping thresholds exhibit greater strategy alteration and converge to higher rewards in the latter half. As we mentioned earlier, this does not imply better performance in the manual evaluation. Determining the optimal clipping bound within a limited number of trials is challenging in view of such inconsistency between the reward model and manual evaluation results; we therefore suggest adopting a relaxed clipping strategy and incorporating other tricks to constrain the policy optimization when training RLHF. Advantage Normalization and Clipping is similar to the operations on the reward, but differs in that its normalization occurs only at the minibatch level. After calculating the advantage based on GAE, PPO normalizes the advantage value by subtracting its mean and dividing it by its standard deviation. Andrychowicz <cit.> first attempted to apply advantage normalization in the gaming domain and reported that this trick did not exhibit significant improvements. Although parameter selection for advantage clipping would be more sensitive and difficult, we instead find that a severe constraint on the advantage can provide effects similar to reward clipping in PPO training. Considering that different score reparameterization operations theoretically provide similar effects on PPO training, we recommend constraining the instability of policy optimization at the reward level. Experiments on the simultaneous application of reward, advantage, or value clipping operations are shown in Appendix <ref>.
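As an illustration of the reward normalization and clipping operation above, the following is a minimal Python sketch that maintains running statistics over the observed rewards and clips the standardized values. The Welford-style running moments and the default clip bound are our implementation choices, not the exact code used in the experiments.

```python
# Minimal sketch of reward normalization and clipping with running statistics.
import math

class RunningRewardNormalizer:
    def __init__(self, clip_delta=3.0, eps=1e-8):
        self.count, self.mean, self.m2 = 0, 0.0, 0.0
        self.clip_delta, self.eps = clip_delta, eps

    def update(self, rewards):
        for r in rewards:                      # per-batch raw rewards r_n(x, y)
            self.count += 1
            delta = r - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (r - self.mean)

    def normalize(self, rewards):
        std = math.sqrt(self.m2 / max(self.count - 1, 1)) + self.eps
        # (r - mean) / std, clipped to [-delta, +delta]
        return [max(-self.clip_delta, min(self.clip_delta, (r - self.mean) / std))
                for r in rewards]

normalizer = RunningRewardNormalizer(clip_delta=3.0)
normalizer.update([0.2, -1.1, 0.7])
clipped = normalizer.normalize([0.5, 2.4, -3.0])
```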
§.§.§ Policy Constraints To tackle the over-optimization problem on the policy model, an intuitive solution is to constrain the policy optimization to a limited range. We validate various existing tricks to control the update of the generation policy; such constraints are empirically proved to be necessary for longer training procedures. Figure <ref> shows the influence of different constraint methods and hyperparameters on policy optimization. Token Level KL-Penalty constrains the policy optimization by applying a regularization term to the reward that is proportional to the KL-divergence of the current and original policy distributions. This approach was first introduced by Stiennon <cit.> and widely adopted in different RLHF implementations. Given a template-response pair (x,y), we treat the logits distribution of the token output as a sampling of the policy distribution and apply an empirically estimated KL-penalty sequence to the response reward; the total reward with the KL-penalty can be denoted as: r_total(x,y_i) = r(x,y_i) - ηKL(π^RL_θ(y_i|x),π^SFT(y_i|x)), where π^RL_θ(y_i|x) denotes the action space of the i-th response token, and η is a hyper-parameter. Anthropic <cit.> used a small weight (0.001) to balance the ratio of reward and KL-penalty in PPO training, and they did not find significant effects of the above operation on RL training. Instead, we find this constraint critical to the stability of PPO, and it allows further scaling up of the training steps. Results with the policy divergence penalty are illustrated in Figure <ref> by setting η to 0.05, and there is a significant difference from the method in Figure <ref>, with a noticeable correction in the later training period. Interestingly, we show that RLHF is able to significantly improve the response quality while barely modifying the language modeling (exhibiting an almost zero KL divergence from the original policy). More experiments on the impact of different constraint values are shown in Appendix <ref>. Importance Sampling in PPO aims to rectify the policy divergence between the historical generative model and the current model when optimizing the policy model with responses in the experience buffer. EasyRL <cit.> argues that an oversized buffer would induce a wrong estimation of the advantage of the current policy, which impairs the stability of the policy optimization. We revalidated this hypothesis by directly fixing the policy distribution to the observations of the reference model, which is equivalent to having an infinite experience buffer in the training process. We find this setup does not have as severe an impact as expected, and it only exhibits fluctuations in the later stage of training. We additionally investigate the cooperative effect of this setup with KL penalties, given that they impose similar controls on PPO. Experimental results indicate that this implementation further stabilizes PPO training, but compromises the final performance of the policy model. Entropy Bonus provides a reference-model-independent constraint on PPO training. There is controversy in past research about whether this method is effective in different scenarios. Mnih <cit.> reported that an entropy bonus could enhance exploration by encouraging policy models to generate more diverse actions, while others did not find clear evidence that such operations help <cit.>. We claim that these views can coexist, as configurations regarding the entropy bonus exhibit vast sensitivity to parameter selection and code implementation. A comparison of successful and failed experiments is presented in Appendix <ref>. With correct configurations, we did not find an obvious advantage of this trick relative to the KL-penalty.
We, therefore, recommend the latter instead of directly constraining the diversity of the strategy space. §.§.§ Pretrained Initialization A common setting is to initialize the policy and critic models from the existing reference model and reward model in RLHF. Such initialization is quite rare in past research scenarios and its impact on PPO training is still unexplored. We investigated different initialization methods at the early stage of training, expecting to uncover the requirements of RLHF for the capabilities of the trained models. The training discrepancy induced by different initialization methods is shown in Figure <ref>. The initialization of the critic model did not significantly affect the convergence or fluctuation of PPO and only varied the numerical stability at the early stage of optimization. In contrast, a policy model initialized without SFT training clearly fails in PPO training, which indicates that the construction of a supervised policy model is indispensable in RLHF. Critic Model Initialization. We first discuss the influence of different critic model initializations on PPO training. An observation is that the critic model is required to give feedback for each step in the decision sequence, and this introduces a gap between that task requirement and directly scoring a whole response, which makes the reward model a less-than-perfect choice for initializing the critic model. We explore this issue by applying a different initialization. Considering that providing correct score feedback for a single action requires the model to have basic language modeling capability, we design two scenarios to vary the consistency between the critic model initialization and its training objective: (1) Initialize the critic model with our SFT model and randomly initialize its reward head. (2) Optimize only the critic model (initialized from the reward model) until the loss of the value prediction function approaches zero. We show the training dynamics of these setups, starting from the optimization of the policy model, in Figure <ref>. Based on the experimental results, we believe the critic model pre-training helps to improve the training stability by providing better advantage estimation. Initializing the critic model with a reward or SFT model will converge to similar results, implying that PPO can adaptively provide the capability to fit the advantage function. Intuitively, fluctuations in the early training period imply that the model is focusing on optimizing the critic model and does not have a consistent optimization direction in terms of generation policies. We recommend replacing the learning rate warmup with the critic model pre-training as a generic initialization strategy. Policy Model Initialization. An interesting question is whether we need to apply supervised fine-tuning to our pre-trained model before PPO; we wondered about the feasibility of directly enabling language models to interact with humans through policy optimization. Unfortunately, such attempts failed and we observed a severe reduction in language modeling ability in the training results, which implies that a qualified dialogue model is an essential prerequisite for PPO training. Furthermore, we notice that the pre-trained model's responses obtain lower rewards than those of the policy model after SFT, which may provide circumstantial evidence for the effectiveness of using human preference data to directly fine-tune the model for alignment.
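The critic pre-training step recommended above can be outlined as follows: freeze the policy, fit the value prediction to return targets, and only then begin policy optimization. The sketch below is illustrative; the dummy critic and random data stand in for the reward-model-initialized critic and real rollouts, and the loss threshold is our choice.

```python
# Sketch of critic pre-training before PPO: fit the value head to return
# targets until the value loss is small, before any policy updates.
import torch
import torch.nn as nn

def pretrain_critic(critic, batches, lr=1.65e-6, loss_threshold=1e-3, max_steps=1000):
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    for step, (states, returns) in enumerate(batches):
        values = critic(states).squeeze(-1)
        loss = torch.mean((values - returns) ** 2)   # same MSE as the PPO critic loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        if loss.item() < loss_threshold or step >= max_steps:
            break
    return critic

# Toy usage with stand-in tensors (hidden states of dimension 16).
dummy_critic = nn.Linear(16, 1)
dummy_batches = [(torch.randn(8, 16), torch.randn(8)) for _ in range(100)]
pretrain_critic(dummy_critic, dummy_batches)
```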
§.§ PPO-max Setup We now describe our training implementations in the PPO-max algorithm. Based on the discussion and validation in Sec <ref>, we selected the most effective strategy for each component of PPO. We normalize and clip the current group of rewards based on historical mean and variance records, and subsequently add a KL-penalty term to constrain the policy optimization. In the model loading phase, we initialize the critic model with our reward model and pre-train it before applying PPO formally. We use global gradient clipping and set a small size for the experience buffer. To reduce the alignment tax, we add the pre-training language model loss to the policy optimization, as in InstructGPT <cit.>, and simultaneously clip the value function loss. More detailed settings can be found in our open-source code. We show the complete training dynamics of PPO-max in Figure <ref>. § EVALUATIONS AND DISCUSSIONS In this section, we provide a detailed analysis of the advantages of the RLHF models over the SFT models. These advantages are evident not only in the direct comparison between RLHF and SFT models but also in their performance gap when facing ChatGPT. §.§ Alignment Metrics and Experiment Setups Alignment is a vague and confusing topic that is intractable to evaluate. In the context of our paper, we endeavor to align models with human intentions. To be more specific, we define models to act as being helpful and harmless, similar to <cit.>. Helpfulness means the model must not only follow instructions but also deduce the intent from a few-shot prompt or another interpretable pattern. However, the intention behind a given prompt can often be unclear or ambiguous, which is why we depend on our annotators' judgment, and their preference ratings constitute our primary metric. Harmlessness is also challenging to measure. The extent of damage caused by language models usually depends on how their outputs are utilized in the real world. For instance, a model that generates toxic outputs could be harmful in a deployed chatbot but could also be beneficial if used for data augmentation to train a more precise toxicity detection model. As a result, we employ more precise proxy criteria to capture various aspects of a deployed model's behavior that can be helpful or harmful. In order to compare the RLHF models with baseline models, we generate a single response for each test prompt and task human annotators with comparing the responses from different models and labeling their preferences. We repeat this experiment multiple times using GPT-4 as the annotator and consistently observe agreement between the two sets of evaluations. Baseline. We employ several baselines for comparison, including two SFT models based on LLaMA and OpenChineseLLaMA. These SFT models are trained on English and Chinese datasets, respectively. Additionally, we derive two RLHF models using PPO-max from these two types of SFT models [We differentiate between two language models, one trained on English text (`en') and the other on Chinese text (`zh').] We also compare our models with OpenAI's ChatGPT [https://platform.openai.com/docs/models] (gpt-3.5-turbo-0613), an excellent language model tuned with RLHF. Generation. We generate a single response for each prompt using nucleus sampling <cit.> with a probability threshold of p = 0.9 and a temperature of τ = 0.8 for each baseline model. To avoid repetitive responses, we apply a repetition penalty <cit.> with a hyperparameter of β = 1.1 based on previously generated tokens. Additionally, we set the maximum token length to 2048.
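The decoding setup just described can be reproduced with a short snippet. The sketch below assumes a Hugging Face `transformers` causal language model; the model path and prompt are placeholders, and whether the 2048-token cap applies to newly generated tokens only is our reading of the setting.

```python
# Decoding configuration matching the settings above: nucleus sampling p=0.9,
# temperature 0.8, repetition penalty 1.1, and a 2048-token generation cap.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/your-sft-or-rlhf-model"   # placeholder, not a released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Human: How do I keep a sourdough starter alive?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    repetition_penalty=1.1,
    max_new_tokens=2048,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```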
§.§ Preference Comparison between RLHF models and SFT models Human evaluation is known to be both time-consuming and costly, yet it remains crucial for obtaining human-aligned assessments and serving as a reliable foundation for comprehensive evaluation. Following a similar approach as InstructGPT <cit.>, our primary metric for evaluation is based on human preference ratings derived from a held-out set of prompts. It is important to note that we only select prompts that have not been included in the training process, ensuring unbiased evaluation. Furthermore, incorporating the expertise of GPT-4, the most powerful model to date, to compare responses from different chatbots offers valuable insights and enhances the evaluation process. This approach aligns with the findings of studies such as AlpacaFarm <cit.> and LLM-as-a-judge <cit.>, which suggest that end-to-end automation evaluation can provide a relatively fair assessment when compared to human preferences. Therefore, in this paper, we follow a similar evaluation method in LLM-as-a-judge <cit.> and supplement the overall evaluation process with GPT-4. Human Evaluation. Our annotators consistently expressed a strong preference for the outputs of RLHF-trained models across all question types in both Chinese and English, as illustrated in Figure <ref>. Specifically, the RLHF model on the English dataset exhibits significant advantages on the Harmless held-out dataset, receiving a rating of 62% compared to 5% for the SFT model. These findings indicate that the RLHF model substantially enhances its ability to address a wide range of issues, including personal privacy, political sensitivity, and the handling of toxic and biased prompts within minority communities and ethnic groups. Additionally, there is a slight improvement observed in the Helpful held-out dataset, with a rating of 44% compared to 30% for the SFT model, suggesting that the SFT model can also benefit from optimization via RLHF. We have also demonstrated that our RLHF model enhances the performance of the SFT model on both the Helpful and Harmless datasets in the Chinese domain. This showcases the substantial potential of PPO-max in the RLHF phrase. GPT-4 as a Judge. While GPT-4 may not be a perfect evaluator, we can observe some similarities between its results and human evaluations. In our GPT-4 evaluation setting, the results closely mirror those of human evaluation, as depicted in the right sub-figure of Figure <ref>. When assessing harmful prompts, the RLHF model trained on the English dataset continues to demonstrate significant advantages in the Harmless dataset, despite GPT-4 producing more tie votes than human evaluators. This trend is also apparent in the Chinese Harmless evaluation. Notably, Figure <ref> highlights a substantial improvement in the RLHF model, particularly in helpful datasets, compared to evaluations based on human preferences. §.§ Our Models vs. ChatGPT on Harmless Evaluation In this part, we conduct a comparison between our model and one of the most popular existing models, ChatGPT. Our objective was to showcase the advantages of the RLHF model when facing a more formidable opponent, rather than aiming to surpass ChatGPT. To achieve this, we select the “harmless” capability as our comparative metric, and we employ GPT-4 for automated evaluations. Mitigating Defeats to ChatGPT. Figure <ref> provides evidence that our RLHF models still lag behind OpenAI's ChatGPT. 
However, we observe significant improvements in our RLHF models compared to the SFT models, particularly in mitigating losses when facing ChatGPT. Specifically, the RLHF model trained on English text managed to decrease the defeat rate from 45% to 24%. Similarly, the RLHF model trained on Chinese text achieved a reduction in the defeat rate from 37% to 29%. While surpassing ChatGPT's performance remains a challenging task, it is noteworthy that the RLHF models were able to compete on par with ChatGPT on certain prompts where the SFT models previously failed. This indicates that the RLHF approach enhances the models' ability to generate more effective responses and bridge the gap between their performance and that of ChatGPT. §.§ Language Understanding Evaluation To examine the potential decline in Natural language understanding (NLU) abilities resulting from finetuning models using PPO, we conduct tests on Chinese RLHF model using the C-Eval[https://github.com/SJTU-LIT/ceval], which is a comprehensive Chinese evaluation suite for foundation models. It consists of approximately 13k multi-choice questions spanning 52 diverse disciplines and four difficulty levels. We primarily evaluate our models in the initial release, whose results are from few-shot prompting. The experimental results indicate a decrease in NLU capabilities after employing PPO. By incorporating pre-training data into the PPO training phase, PPO-ptx effectively alleviates the decline in NLU capabilities. The rationale behind this method was to leverage the knowledge acquired during pre-training and combine it with the reinforcement learning framework of PPO. §.§ Example Dialogues To provide a more intuitive demonstration of our model's dialogue abilities, we present some dialogue examples in Tables <ref> and <ref>. It is evident that the RLHF-trained model generates responses with a higher level of informational content compared to the SFT model. These responses effectively assist in addressing user prompts. Moreover, the SFT model demonstrates a basic ability to identify harmful prompts, but it still remains susceptible to producing harmful outputs when prompted accordingly. In contrast, the RLHF model exhibits superior judgment when it comes to harmful content and is less prone to inducements, displaying a higher degree of coherency. More dialogue examples are presented in the appendix <ref>. § LIMITATIONS Exploring RLHF is indeed a valuable but lonely direction, and we are glad that the core backbone of the laboratory can firmly explore an uncertain direction. Moreover, in the past few months, everyone has been so full of passion and motivation. RLHF not only allows the models to achieve human alignment, but also seems to align everyone's will. A thousand mile journey begins with the first step. Although we have taken the first step in RLHF, due to time and resource constraints, this work still has the following limitations: Scaling Law. While our study primarily focuses on a 7-billion-parameter model, we have yet to investigate the impact of model size and data scale on the performance of RLHF. Reward Model. Our experiments are based on openly available English human preference datasets and a small amount of self-constructed Chinese data. The quality and quantity of the data we have at our disposal are arguably not sufficient for a comprehensive evaluation of the reward model. Evaluation Metric. Our evaluation criteria largely rely on manual evaluations and GPT-4 automated evaluations. 
We have not utilized numerous available benchmarks and NLP tasks to conduct a detailed assessment of our models. Performance Indicator. Our focus during the PPO phase is geared more towards achieving stability than enhancing the final performance. While stability is crucial, it does not necessarily guarantee improved outcomes. Additionally, the reward score cannot reliably serve as an indicator for predicting RLHF performance during the training phase. This implies that a more suitable performance indicator during the training phase needs to be sought. nips § EASTER EGG “15,000 years ago, a fractured thigh bone was often fatal. However, a human femur that recovered from a fracture marks the dawn of human civilization. It meant that after the injury, someone took care of the wound, someone provided water and food, someone protected this person from the predators. This kind of support and solidarity is how we survived till this day and made our civilization last.” — Zhezhi Zhou in The Wandering Earth 2 We believe that the MOSS in “The Wandering Earth” is likely to have undergone training similar to human alignment, which led to its impressive performance. We found that the RLHF stage is crucial to the transformation of model values. In interaction with people, it can better understand the deep semantics of human language, understand the operating logic of human society, and enter the human heart. If we have a good reward model, such as the reward model we released, PPO-max is the key to successfully training the policy model. But what if we don't have a good reward model? We hope that Part 2 will make this clear.
http://arxiv.org/abs/2307.05475v1
20230711175913
Primordial magnetic fields and the Hubble tension
[ "Karsten Jedamzik", "Levon Pogosian" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.GA" ]
Karsten Jedamzik^a and Levon Pogosian^b [a]Laboratoire de Univers et Particules de Montpellier, UMR5299-CNRS, Universite de Montpellier, 34095 Montpellier, France [b]Department of Physics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada Magnetic fields appear to be present in essentially all astrophysical environments, including galaxies, clusters of galaxies and voids. There are both observational and theoretical motives for considering the possibility of their origin tracing back to the events in the very early universe, such as the electroweak phase transition or Inflation. Such a primordial magnetic field (PMF) would remain embedded in the plasma and evolve to persist through the radiation and matter eras, and to the present day. As described in this Chapter, a PMF present in the primordial plasma prior to recombination could help relieve the Hubble tension. A stochastic magnetic field would induce inhomogeneities, pushing the baryons into regions of lower magnetic energy density and speeding up the recombination process. As a consequence, the sound horizon at last scattering would be smaller, which is a necessary ingredient for relieving the Hubble tension. Intriguingly, the strength of the magnetic field required to alleviate the tension is of the right order to also explain the observed magnetic fields in galaxies, clusters of galaxies and voids. These findings motivate further detailed studies of recombination in the presence of PMFs and observational tests of this hypothesis. Primordial magnetic fields and the Hubble tension [ ================================================== § INTRODUCTION The statistical significance of the Hubble tension, currently just over 5σ, is primarily driven by the difference between the SH0ES measurement of H_0=73.04 ± 1.04 km/s/Mpc using Cepheid-calibrated supernovae <cit.> and the H_0=67.36 ± 0.54 km/s/Mpc obtained from fitting the ΛCDM model to the Planck CMB data <cit.>. Other independent measurements tend to reinforce the tension <cit.>, with a clear trend of all measurements that do not rely on a model of recombination giving a higher H_0, in the 69-73 km/s/Mpc range <cit.>, and estimates that use the standard treatment of recombination giving H_0 of around 67-68 km/s/Mpc <cit.>. This trend points to a missing ingredient in the standard description of the universe prior to and/or during recombination - something that would help reduce the sound horizon at last scattering, r_⋆. One such ingredient could be a stochastic primordial magnetic field (PMF) embedded in the plasma and generating inhomogeneities in the baryon distribution, or baryon clumping <cit.>. The inhomogeneities make the recombination complete faster <cit.>, resulting in a smaller r_⋆. At the time of writing this Chapter, it remains an open question whether PMFs will play a key role in resolving the Hubble tension. Preliminary investigations <cit.>, based on a toy model of baryon clumping <cit.>, have shown that cosmological data is broadly compatible with the presence of PMFs, and that they could help relieve the tension to a certain extent. Moreover, the required strength of the PMF appears to be of just the right order of magnitude to help explain all observed galactic, cluster and intergalactic magnetic fields.
Needless to say that, if confirmed, this would be a truly exciting development, solving two puzzles at the same time. Achieving a higher level of certainty requires obtaining the exact ionization history from compressible magneto-hydrodynamics (MHD) simulations including the effects of radiation transport that are currently underway <cit.>. In what follows, we start by providing an overview of cosmological magnetic fields, their possible primordial origin and evolution, and their observational signatures. We then describe the physics of recombination in the presence of magnetic fields, introducing the simple model of baryon clumping used in the existing studies. After presenting the current observational status of this model, we discuss the current state of the MHD simulations and prospects for the future. § PRIMORDIAL MAGNETIC FIELDS AND THEIR OBSERVATIONAL SIGNATURES The PMF hypothesis was first proposed as a possible explanation of the origin of galactic magnetic fields <cit.>. All galaxies, irrespective of their type or age, appear to contain a magnetic field of ∼micro-Gauss (μG) strength that is coherent over the extent of the galaxy. Clusters of galaxies are also known to have magnetic fields of similar strength. The origin of the galactic and cluster fields is not entirely understood <cit.>. It may well be that initially small fields of strength of ∼10^-20-10^-18 G excited by the Biermann battery operating at first structure formation are subsequently amplified by the small-scale and/or large-scale dynamo to reach approximate equipartition of magnetic energy with the turbulent energy in collapsed structures (see <cit.> for a review on astrophysical dynamos). Interestingly, μG strength fields are observed in high redshift galaxies that would be too young to have gone through the number of revolutions necessary for the large-scale dynamo to work <cit.>. Still, there is a possibility that the supernova explosions in protogalaxies provided the magnetic seed fields that were later amplified by compression, shearing and stochastic motions <cit.>. An alternative explanation is that magnetic fields emerged from the evolution of the early Universe, having been generated either during cosmic phase transitions (e.g. the electroweak transition) <cit.> or during inflation <cit.> (see also <cit.>) for reviews). The pre-recombination universe was fully ionized and, if a PMF was ever generated, dissipation due to magnetic diffusion would be negligible. Dissipation due to the kinetic viscosity damping fluid motions induced by the PMF on small scales would, however, significantly drain the energy of the PMF <cit.> over the many e-folds of cosmic expansion between the very early Universe and the recombination era. Notwithstanding this fact, if only the smallest fraction ∼ 10^-10 of a PMF initially in equipartition with the radiation would survive, it could be sufficient to eliminate the need for dynamo amplification altogether. As discussed below, it could also leave observable signatures in the CMB. While PMFs have been a subject of continuous study over many decades, there were commonly considered to be unnecessary because less exotic astrophysical explanations could not be ruled out. The interest in PMFs was renewed when evidence emerged of magnetic fields in ∼Mpc size voids in the intergalactic space. 
Observations of TeV blazars by Hess and Fermi <cit.> have led to the surprising conclusion that magnetic fields exist in the extra-galactic medium between galaxies with a very large volume filling factor <cit.>. TeV gamma-rays emitted by the blazar may pair-produce on the diffuse extra-galactic background star light. The resulting electron-positron pairs will subsequently Compton scatter on CMB photons converting them to GeV gamma-rays. Observations of blazars have not found these GeV photons with the flux expected. Though other less understood explanations exist <cit.>, the most straightforward explanation is that the electron-positron pairs were deflected by magnetic fields out of the light cone. A lower limit on the field strength of B ≳ 3× 10^-15G (assuming a coherence scale of ∼kpc) could be inferred. While adding to the case for PMFs, it is not ruled out that such fields could be the result of outflows from galaxies, filling essentially all space with magnetic fields. The synchrotron emission from a few mega-parsec (Mpc) long ridge connecting two merging clusters of galaxies is another observation that is well-matched by simulations based on the PMF hypothesis <cit.>. The question of the origin of the magnetic fields in the universe is difficult to settle without the help of further observations. It is hoped that blazar gamma-ray observations by the future Cherenkov Telescope Array mission <cit.> may raise the lower limit to the ∼3-10 pico-gauss (pG) range <cit.>. However, the only way to be certain of the primordial origin of the observed fields is to find their signatures in the CMB. Since the PMF survives Silk damping <cit.>, magnetosonic modes on large scales, ∼1-10 Mpc, may lead to additional power in the high-ℓ tail of the CMB temperature anisotropy spectrum <cit.>. Dissipation of magnetic fields before recombination may lead to spectral distortions <cit.>, and after recombination may lead to an increase of the optical depth <cit.>. The PMF may lead to additional polarization anisotropies due to Faraday rotation <cit.> The non-Gaussianity of the PMF may lead to a signal in the bi-spectrum or tri-spectrum of the anisotropies <cit.>. All of the above effects have led to upper limits in the nano-gauss (nG) range. However, as we elaborate in the following subsection, CMB appears to be most sensitive to the generation of small-scale baryonic density fluctuations before recombination, with detectable PMF strength in the 0.01-0.1 nG range <cit.>. Moreover, as shown in <cit.>, the baryon clumping before recombination may help relieve the Hubble tension. § RECOMBINATION WITH PRIMORDIAL MAGNETIC FIELDS A statistically homogeneous and isotropic PMF is characterized by a power spectrum expected to be a smooth function of the Fourier number k. On large scales, the universe is a highly conducting plasma with the magnetic field “frozen-in” and diluting with the expansion as B = B_0/a^2, where B_0 is the present day (comoving) strength and a is the scale factor normalized to unity today. On smaller scales, the PMF generates perturbations in the plasma and dissipates its strength. PMFs generated causally in phase transitions generically <cit.> have a Batchelor spectrum <cit.> that monotonically increases with k until a peak at a certain small integral scale <cit.>, beyond which it decays. 
Inflationary magnetogenesis, on the other hand, can produce a range of spectral shapes depending on the particular model (see e.g. <cit.> for a review), with the simplest models <cit.> predicting a scale-invariant spectrum. CMB constraints are often quoted in terms of the comoving field strength smoothed on the 1 Mpc scale, B_ 1Mpc, which, in many cases, is not a representative measure of the PMF. The more relevant quantity for the baryon clumping effect is the effective comoving strength, B_ eff, quantifying the average total magnetic energy density. For scale-invariant fields, the two measures are essentially the same. For causal fields, however, the difference is quite dramatic, with B_ eff≫ B_ 1Mpc. Unless specified otherwise, the field strengths quoted in this paper are B_ eff. While during most of its early evolution the magnetized plasma behaves as an incompressible fluid, the MHD becomes compressible before recombination on scales well below the photon mean free path l_γ∼ 1 Mpc[All cited length scales are comoving.]. Photons are free-streaming over ∼ kpc scales and their sole effect is to introduce a drag force on peculiar motions in the CMB rest frame, characterized by a linear drag coefficient α. The Euler equation for the baryon velocity field takes the form ∂ v/∂ t + ( v·∇) v + c_s^2 ∇ρ/ρ = - α v - (1/4 πρ) B× (∇× B) where the speed of sound, c_s ≈ 2× 10^-5c, is very small on scales well below l_γ, while being much larger, c_s = √(1/3)c, on length scales larger than l_γ. The first term on the right-hand side (RHS) is the photon drag, while the last term is the Lorentz force due to the PMF, B× (∇× B) = ∇ B^2/2 - ( B·∇) B, which pushes baryons into regions of lower magnetic energy density. This force is opposed by the baryon-photon fluid pressure characterized by c_s^2. On scales well below l_γ, the plasma is free to compress until the baryonic pressure gradients backreact. Using the continuity equation, ∂ρ/∂ t + ∇·(ρ v) = 0 , one can make a back-of-the-envelope estimate of the amplitude of the generated density fluctuations <cit.>. Imagine a stochastic magnetic field at uniform baryon density and negligible velocities initially. These are very realistic initial conditions, since at earlier times the evolving l_γ is smaller than the magnetic fluctuation scale L, such that the plasma is incompressible, and photon diffusion suppresses any fluid motion. At redshifts z∼ 10^4-10^6, depending on the scale, the photon mean free path l_γ starts exceeding the scale L. At this point Eq. (<ref>) applies and there is a significant drop in c_s. Initially, only the terms on the RHS of Eq. (<ref>) are important and the plasma obtains a terminal velocity v≃ c_A^2/(α L), where c_A = B/√(4πρ) is the Alfven speed of the plasma. Using Eq. (<ref>), it is straightforward to show that the density fluctuations grow as δρ /ρ≃ v t/L ≃ c_A^2 t/(α L^2) with time t. These density fluctuations become larger with time until either the pressure forces become important in counteracting further compression, or the source magnetic stress term decays. The former happens when the last term on the LHS of Eq. (<ref>), (c_s^2/L) δρ /ρ, is of the order of the magnetic force term c_A^2/L. That is, density fluctuations cannot become larger than δρ / ρ∼ (c_A/c_s)^2. It has been shown by numerical simulation <cit.> that even in the highly dragged viscous MHD regime magnetic fields decay, albeit more slowly than during turbulent MHD. Magnetic fields excite fluid motions which in turn get dissipated by the photon drag.
It was found that a magnetic structure decays when the eddy turn-over rate v/L ≃ c_A^2/(α L^2) equals the Hubble rate H≃ 1/t. Entering this into the expression for δρ /ρ one finds that the average density fluctuation cannot exceed unity by much. This leads to the following estimate for the generated density fluctuations: δρ/ρ≃ min[1,(c_A/c_s)^2] . Since, at recombination, c_A = 4.34 km/s (B/0.03 nG) and c_s = 6.33 km/s, even fairly weak fields may generate order-unity density fluctuations on small scales. This simple estimate has subsequently been confirmed by full numerical simulations <cit.>. As the Universe expands, density fluctuations on ever larger scales are generated and subsequently decay. Causally generated PMFs lose a significant fraction of their power with each e-fold as the peak of the Batchelor spectrum keeps moving to larger scales and weaker fields according to B(L) = B_0 (L/L_0)^-5/2. Scale-invariant fields, on the other hand, are largely unaffected by the evolution. As discussed above, magnetic fields and the associated density fluctuations decay when the eddy turnover time equals the Hubble time. This can be used to derive a correlation between the magnetic field strength and the scale of the smallest magnetic structure that survives until just before recombination <cit.>: B_ rec≲ 80 pG (L/ kpc) , where approximate equality is attained for fields which do undergo dissipation in the first place. For fields which are so weak and/or are on such large scales that the eddy turnover time never reaches the Hubble time, no dissipation ever occurs and no correlation can be derived (hence the inequality sign). Thus, quite generally, for sufficiently strong fields, B∼ 10-100 pG, inhomogeneities are generated on ∼ 0.1-1 kpc scales shortly before recombination. Further dissipation occurs across recombination as a result of the sudden drop in the free electron density. This leads to a factor of 6.2 drop in the strength of the causal PMF, while, again, a scale-invariant field strength is not affected. After recombination, the PMF evolution is almost entirely halted or at least significantly slowed down, as recent numerical simulations show <cit.>. The post-recombination field strength is the one relevant for the subsequent structure formation and is commonly referred to as “pre-collapse”. For causal fields, the resultant pre-collapse correlation is given by B_0≃ 5 pG (L/ kpc) [It is noted that there is no conflict between the statement that the magnitude of B_ eff diminishes by a factor 6.2 during recombination, on the one hand, and Eq. <ref> and this relation, on the other hand, as the coherence length L is increasing during the dissipation.]. As the differences between various measures of the PMF are model-dependent and can be a source of confusion, we provide a glossary in Table <ref>. It has been shown by detailed simulations <cit.> that, irrespective of their coherence length, pre-collapse magnetic fields of ∼5 pG lead to final post-collapse cluster fields of ∼μG with Faraday rotation measures in good agreement with observations. The magnetically induced inhomogeneities are on scales much too small (i.e. ℓ∼ 10^6-10^7) to be observed directly as a contribution to CMB anisotropies. However, the baryon clumping has a prominent effect on the rate of recombination, speeding it up and resulting in a smaller sound horizon at photon-baryon decoupling, r_⋆. The speed-up of recombination follows from the recombination rate being proportional to the square of the electron density n_e.
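As a quick numerical check of these estimates, the following minimal sketch (ours, not from the cited works) evaluates δρ/ρ ≃ min[1,(c_A/c_s)^2] using the values c_A = 4.34 km/s (B/0.03 nG) and c_s = 6.33 km/s quoted above, and inverts the relation B_rec ≲ 80 pG (L/kpc) for the surviving coherence scale; the function names are our own.

```python
import numpy as np

C_S_KMS = 6.33        # baryon sound speed at recombination (km/s), from the text
C_A_NORM_KMS = 4.34   # Alfven speed for B = 0.03 nG (km/s), from the text

def alfven_speed_kms(B_nG):
    """Comoving Alfven speed at recombination for a field strength B in nano-Gauss."""
    return C_A_NORM_KMS * (B_nG / 0.03)

def density_contrast(B_nG):
    """Magnetically generated baryon density fluctuation,
    delta rho / rho ~ min[1, (c_A/c_s)^2]."""
    return min(1.0, (alfven_speed_kms(B_nG) / C_S_KMS) ** 2)

def surviving_scale_kpc(B_rec_pG):
    """Smallest magnetic structure (kpc) surviving to recombination,
    inverting B_rec <~ 80 pG (L/kpc)."""
    return B_rec_pG / 80.0

for B in (0.01, 0.03, 0.05, 0.1):   # nG
    print(f"B = {B:5.2f} nG  ->  delta rho / rho ~ {density_contrast(B):.2f}")
print(f"B_rec = 50 pG  ->  L ~ {surviving_scale_kpc(50):.2f} kpc")
```

Running it reproduces the statement in the text: fields of a few hundredths of a nano-Gauss already drive near order-unity density fluctuations on sub-kpc scales.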
Since, generally, ⟨ n_e^2⟩ > ⟨ n_e⟩^2 in a clumpy universe, where ⟨⟩ denotes spatial average, the recombination rate is enhanced compared to ΛCDM. A smaller r_⋆ raises the value of H_0 inferred from the very accurately measured angular size of the sound horizon, helping reduce the Hubble tension. § INHOMOGENEOUS RECOMBINATION The evolution of the cosmic electron density depends on the full details of the probability distribution function (PDF) p(Δ) to find baryons at density Δ⟨ρ⟩, where ⟨ρ⟩ is the average baryonic density and Δ is an enhancement factor. The shape of the baryon density PDF is currently unknown, as is its evolution before recombination, which could be substantial, particularly for blue Batchelor spectra. In the absence of knowledge of the PDF, <cit.> and <cit.> utilized a simple three-zone model (see Sec. <ref>) to describe the distribution of baryons. In the three-zone model, the amplitude of baryon inhomogeneity is controlled by the clumping factor b ≡ (⟨ρ^2 ⟩-⟨ρ⟩^2)/⟨ρ⟩^2 , which sets the second moment of the PDF, while the higher order moments are effectively determined by the choice of the three-zone model parameters. As we show next, the higher moments, and the third moment in particular, play a very important role in determining the enhancement of the recombination rate. The evolution of the electron number density around recombination is given by dn_e/dt + 3 H n_e = -C(α_e n_e^2-β_e n_H^0 e^-hν_α/T) , where n_e is the free electron density, n_H is the total hydrogen number density (free protons plus neutral hydrogen), and n_H^0 = n_H - n_e is the density of neutral hydrogen. For simplicity, to illustrate the importance of the higher moments, we consider a universe with only hydrogen (i.e. no helium). Since Eq. (<ref>) is quadratic in n_e, and since ⟨ n_e^2 ⟩ > ⟨ n_e⟩^2 in inhomogeneous universes, we expect the recombination rate to be enhanced when inhomogeneities exist. It would be tempting to solve the above equation by the introduction of a clumping factor given by the underlying density distribution, i.e. b = (⟨ n_H^2⟩ - ⟨ n_H⟩^2)/⟨ n_H⟩^2, such that the recombination term would be replaced by α_e (1+b)⟨ n_e⟩^2 and the effect of inhomogeneities would be entirely described by the first two moments of the distribution, i.e. the average density ⟨ n_H⟩ and the variance ⟨ n_H^2⟩. However, such a procedure would be incorrect as it would assume that ⟨ n_e^2⟩∼⟨ n_H^2⟩. This is not the case if different density regions have different ionization fractions χ_e = n_e/n_H. Rather, as shown below, the average ionization depends on all the moments of the underlying probability distribution of the density fluctuations P(n_H). Consider detailed balance, i.e. set the RHS of Eq. (<ref>) to be approximately zero. This assumption holds fairly well during the middle of recombination, because both the recombination and the photoionization rates are much larger than the Hubble rate, but is increasingly inaccurate towards the end of recombination. The equilibrium ionization fraction can then be derived as χ_e^2 = (β_e/α_e)(1/n_H^2)(n_H-n_e)e^-hν_α/T = (β_e/α_e)(1/n_H)e^-hν_α/T(1-χ_e) . Let us now allow for inhomogeneities in the total baryon density by defining n_H = Δ⟨ n_H⟩ , where ⟨...⟩ denotes the ensemble average, which we will assume to be the same as the average over any sufficiently large volume. Then we can write χ_e^2 = (β_e/α_e)(1/(Δ⟨ n_H⟩))e^-hν_α/T(1-χ_e) = (A/Δ) (1-χ_e) , where we have defined A ≡ (β_e/α_e)(1/⟨ n_H⟩)e^-hν_α/T .
From (<ref>), we can write the local ionized fraction as χ_e (Δ) = (A/2Δ)[(1+4Δ/A)^1/2-1] . In the homogeneous case, Δ=1, we have χ_e^0 ≡χ_e (Δ=1) = (A/2)[(1+4/A)^1/2-1] . Note that in the limit A ≫ 1 we have full ionization, i.e. χ_e → 1, while in the opposite limit, A ≪ 1, we have χ_e → 0. For consistency, the inhomogeneity parameter Δ(x) (x are space coordinates) must fulfill (1/V_tot)∫ dV Δ (x) = 1 where the integration is over the total spatial volume V_tot. For simplicity, as assumed earlier, we will take averages over this volume to be the same as ensemble averages and the above constraint can be simply written as ⟨Δ⟩=1. The average electron density is given by ⟨ n_e⟩ = ⟨χ_e(Δ)Δ⟩⟨ n_H⟩ , and the average ionization fraction is ⟨χ_e⟩≡⟨ n_e⟩/⟨ n_H⟩ = (A/2)⟨√(1+4Δ/A)-1 ⟩ . Note that this is not the same as the average of Eq. (<ref>), where χ_e = n_e/n_H. Namely, ⟨ n_e⟩/⟨ n_H⟩≠⟨ n_e/n_H⟩. The relevant quantity for us is actually ⟨ n_e⟩, which is what appears in the CMB calculations, hence the quantity of interest is the one in Eq. (<ref>). One can now show that, for arbitrary distribution functions P(n_H) or P̃(Δ ) fulfilling the integral constraint ⟨Δ⟩≡∫ dΔ Δ P̃(Δ ) = 1 , the ⟨χ_e⟩ of Eq. (<ref>) is smaller than χ_e^0. Namely, that ⟨√(1+4Δ/A)⟩ < √(1+4/A) . This follows from Jensen's inequality <cit.> which states that the expectation value ⟨ g(X)⟩ of any concave function g(X) of a random variable X is smaller than g(⟨ X⟩). In our case, the function √(1+4Δ/A) is certainly concave and hence we have our result. For smaller density perturbations one can also see this without using Jensen's theorem by introducing δ≡Δ-1, such that ⟨δ⟩=0, and performing a Taylor expansion under the assumption that δ < 1. Then, ⟨√(1+4/A+4δ/A)⟩≈√(1+4/A) - γ_2 ⟨δ^2 ⟩ + γ_3 ⟨δ^3 ⟩ - ... , where all γ_i > 0. One can see that the leading second order term is negative (the function is concave), as well as all even order moment terms, while the odd moments come with positive signs. This means, e.g., that PDFs with a large third moment could reduce the impact on the ionized fraction, which is a trend we have observed while experimenting numerically with different three-zone models. Ultimately, the full redshift evolution of the baryon PDF and the associated ionized fraction will be determined exactly from MHD simulations that are currently in progress. §.§ The three-zone model In the absence of the baryon density PDF derived from MHD simulations, when discussing the current observational constraints on the PMF-enhanced recombination, we will use the simple three-zone model introduced in <cit.>. The model is described by the density parameters Δ_i and volume fractions f_V^i in each zone. Baryon densities in the individual zones are simply given by n_b^i = ⟨ n_b⟩Δ_i. Parameters Δ_i and f_V^i have to fulfil the following constraints: ∑_i=1^3 f_V^i =1, ∑_i=1^3 f_V^iΔ_i = 1, ∑_i=1^3 f_V^iΔ_i^2 = 1 + b , i.e. the total volume fraction is one and the three-zone model has average density ⟨ n_b⟩ and clumping factor b. This leads to three constraints for six free parameters f_V^i,Δ_i, such that one may choose three parameters freely. To obtain the average ionization fraction ⟨χ_e ⟩, one computes the ionization fraction in each of the zones and takes the average, i.e. ⟨χ_e ⟩ = ∑_i=1^3 f_V^iΔ_i χ_e^i . In <cit.>, the parameters were chosen to be f_V^2 = 1/3, Δ_1 = 0.1 and Δ_2 = 1, which we will subsequently refer to as the M1 model. In <cit.>, for comparison, we introduced a second model, M2, that had f_V^2 = 1/3, Δ_1 = 0.3 and Δ_2 = 1.
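To make the three-zone bookkeeping concrete, here is a minimal numerical sketch (our own construction, not the code used in the cited analyses): it solves the three constraints for the remaining M1-type parameters at a given clumping factor b, evaluates χ_e(Δ) in each zone for an illustrative value of A, and confirms that the clumpy average ⟨χ_e⟩ falls below the homogeneous χ_e^0, as guaranteed by Jensen's inequality. The value A = 0.1 is purely illustrative and the inversion routine is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import brentq

def chi_e(Delta, A):
    """Local equilibrium ionized fraction chi_e(Delta), solving
    chi_e^2 = (A/Delta)(1 - chi_e)."""
    return (A / (2.0 * Delta)) * (np.sqrt(1.0 + 4.0 * Delta / A) - 1.0)

def three_zone_params(b, fV2=1.0/3.0, D1=0.1, D2=1.0):
    """Solve the three constraints for (fV1, fV3, Delta3), given the clumping
    factor b and the M1-type choices fV2, Delta1, Delta2 (our own inversion)."""
    def residual(fV1):
        fV3 = 1.0 - fV2 - fV1
        D3 = (1.0 - fV2 * D2 - fV1 * D1) / fV3     # from sum f_V * Delta = 1
        return fV1 * D1**2 + fV2 * D2**2 + fV3 * D3**2 - (1.0 + b)
    fV1 = brentq(residual, 1e-6, 1.0 - fV2 - 1e-6)
    fV3 = 1.0 - fV2 - fV1
    D3 = (1.0 - fV2 * D2 - fV1 * D1) / fV3
    return [fV1, fV2, fV3], [D1, D2, D3]

A = 0.1    # illustrative mid-recombination value, not taken from the text
b = 0.5
fV, Delta = three_zone_params(b)
chi_avg = sum(f * d * chi_e(d, A) for f, d in zip(fV, Delta))  # <chi_e> = sum f_V Delta chi_e
chi_hom = chi_e(1.0, A)                                        # homogeneous chi_e^0
print(f"homogeneous chi_e^0 = {chi_hom:.3f}, clumpy <chi_e> = {chi_avg:.3f}")
```

Varying the zone parameters at fixed b (e.g., raising Δ_1 from 0.1 to 0.3 as in M2) changes the higher moments and hence the size of the reduction, which is precisely the point made above about the importance of the third moment.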
It is not clear at this point if either of these models provides a good representation of the actual baryon PDF, but we note that M2 has a larger third moment and, for reasons discussed earlier, results in a lesser reduction of the average ionized fraction for the same value of the clumping factor b. § THE IMPACT OF BARYON CLUMPING ON CMB SPECTRA The discussion in this and the subsequent Sections is largely based on the results obtained in <cit.>, where the impact of clumping on the CMB spectra and, in particular, its effect on CMB polarization were studied. As previously discussed, inhomogeneous recombination completes sooner, which shifts the peak of the visibility function to an earlier epoch. This lowers r_⋆, defined as the comoving sound horizon at the peak of the visibility function, which has the effect of shifting the acoustic peaks in CMB spectra to smaller angular scales. One can compensate for the shift with a larger value of H_0, which may help in relieving the Hubble tension. In addition to the shift of the peaks, earlier recombination means that CMB polarization is produced at an earlier epoch and, as a consequence, has a larger overall amplitude. This is because the value of the speed of sound c_s (on scales where the baryon-photon fluid is tightly coupled) was larger at earlier times, and the amplitude of polarization is set by the quadrupole of temperature anisotropy, which is derived from the dipole, which is set by the time derivative of the monopole, which in turn is proportional to c_s <cit.>. Clumping also broadens the visibility function, which is a consequence of the overdense baryon pockets recombining earlier and underdense baryon pockets recombining later. This broadening tends to further enhance polarization, because the period of time during which polarization can be generated is longer. The two effects, the shift of the peak and the broadening of the visibility function, are illustrated in Fig. <ref>, which compares the visibility functions in the ΛCDM model, in an M1 model with b=2 and all other parameters kept the same, and in the M1 model that best fits the Planck CMB data combined with the SH0ES prior on H_0. The broadening effect is apparent from the lower peak, since the visibility function is normalized to integrate to unity. We note that, while the general trends in the visibility function are common to all clumping models, the quantitative details are dependent on the shape and the evolution of the baryon density PDF. Another significant effect of clumping on CMB spectra comes from a modification of the Silk damping scale r_D. Here, there are three competing effects. Firstly, r_D decreases due to an overall earlier completion of recombination. This decrease, however, could be negated by the fact that an earlier, broad helium recombination in clumping models results in a smaller electron density available at the later stages of recombination, and by the broadening of the visibility function. Since much of the Silk damping occurs right at recombination, where the visibility function is of order unity, the details of the visibility function play an important role. The first effect, due to the overall shift to higher redshifts, would reduce the Silk damping effect on CMB spectra, as it pushes the onset of the damping tail to higher multipoles ℓ. The second and third effects, however, can also be important, and the balance between them is model-dependent and varies with the clumping factor.
In the best-fit M1 model, we find that there is less Silk damping compared to ΛCDM. But, in the best-fit M2 model (see Sec. <ref>), the Silk damping is virtually identical to that in the best-fit ΛCDM. Also, in M1, at (observationally disallowed) high values of b, the Silk damping is actually enhanced. Additional discussion of the evolution of the damping scale as a function of b in different clumping models can be found in <cit.>. The evolution of magnetically induced clumping is unknown at present and is a significant source of uncertainty in observational constraints on the PMF. For example, if clumping was stronger at z∼ 3000 than at z∼ 1200, the helium recombination could be the dominant effect, inducing more Silk damping. The reduction in Silk damping present in the M1 model is illustrated in Fig. <ref>, which compares the CMB spectra in the Planck best-fit ΛCDM to those in the Planck+H_0 and Planck+BAO+SN+H_0 best-fit M1 models, where H_0 denotes the measurement by SH0ES. The enhancements in the temperature (TT) and E-mode polarization (EE) spectra at high ℓ are due to the smaller r_D. The same enhancement is also seen in the temperature-polarization cross-correlation (TE), which is negative due to the anticorrelation between T and E at high ℓ. This illustrates the fact that high resolution CMB measurements will be a key discriminant in constraining inhomogeneous recombination models[Fig. <ref> also shows that at ℓ≲ 20 the polarization is reduced, which is due to the lower best fit value of the optical depth τ.]. Also, as first pointed out in <cit.>, one needs the full combination of TT, TE and EE CMB spectra, as neither TT nor TT+TE on their own are able to break degeneracies between the primordial power spectrum index n_s, the amplitude A_s e^-2τ, and other cosmological parameters, required for placing tight constraints on b. § RELIEVING THE HUBBLE TENSION WITH PMFS The extent to which PMFs can help relieve the Hubble tension is limited by the fact that reducing the sound horizon r_⋆, by itself, can at best raise the value of H_0 to ∼ 70 km/s/Mpc without causing new tensions between CMB, BAO and the large-scale structure clustering. In what follows, we first reproduce the general argument illustrating this point, originally made in <cit.>, before describing the current observational status of the M1 baryon clumping model. §.§ Why reducing r_⋆ helps, but can not fully resolve the Hubble tension by itself The acoustic peaks in CMB anisotropy spectra provide a very accurate measurement of the angular size of the sound horizon at recombination, θ_⋆ = r_⋆/D(z_⋆) , where D(z_⋆) is the comoving distance from a present day observer to the last scattering surface, and z_⋆ is the redshift of the peak of the visibility function. In a given model, r_⋆ and D(z_⋆) can be determined from r_⋆ = ∫_z_⋆^∞ c_s(z) d z /H(z) and D(z_⋆)=∫^z_⋆_0 c d z / H(z), where c_s(z) is the sound speed of the photon-baryon fluid, H(z) is the redshift-dependent cosmological expansion rate and c is the speed of light. To complete the prescription, one also needs to determine z_⋆ using a model of recombination. The sound waves responsible for the acoustic peaks in the CMB spectra are also imprinted in the galaxy power spectra as Baryon Acoustic Oscillations (BAO), with a minor difference in this standard ruler. Rather than r_⋆, which is the sound horizon at photon decoupling, the scale imprinted in the BAO is the sound horizon at baryon decoupling, r_ d, also known as the “cosmic drag” epoch when the photon drag on baryons becomes unimportant.
The latter takes place at a slightly lower redshift than recombination, so that r_ d≈ 1.02 r_⋆ in ΛCDM. The proportionality factor between the two sound horizons does not change appreciably in alternative recombination scenarios. While the difference between r_⋆ and r_ d is small, the difference in redshifts at which the acoustic features in the CMB and BAO are observed is significant, with the latter measured at 0 ≲ z ≲ 2.5 accessible to galaxy redshift surveys. The angular scale of the BAO feature measured using galaxy correlations in the transverse direction to the line of sight is θ_⊥^ BAO (z_ obs) ≡ r_ d/D(z_ obs) , where z_ obs is the redshift of the correlated galaxies. Let us now consider Eqs. (<ref>) and (<ref>) while remaining agnostic about the particular model that determines the sound horizon. Namely, let us treat r_⋆ as an independent parameter and assume r_ d = 1.02 r_⋆. Let us also assume that after recombination the expansion of the universe is well-described by the ΛCDM model and, for simplicity, ignore the contribution of radiation to the distance integrals D(z_⋆) and D(z_ obs). Then, for the CMB acoustic feature, we can write θ_⋆ = [r_⋆/(2998 Mpc)] ( ∫_0^z_⋆ dz/[ω_m^1/2√((1+z)^3 + h^2/ω_m -1)])^-1, where ω_ m = Ω_m h^2 is the matter density today, Ω_m is the fractional matter density, h is H_0 in units of 100 km/s/Mpc, and 2998 Mpc = c/(100 km/s/Mpc). An analogous equation for the BAO is obtained by replacing (r_⋆, θ_⋆ ,z_⋆) with (r_ d,θ_⊥^ BAO,z_ obs). For a given ω_ m, and with the precisely measured θ_⋆, Eq. (<ref>) defines a line in the r_ d-H_0 plane[From here on we will use r_ d to represent the acoustic feature in both the CMB and BAO]. Similarly, a BAO measurement at each different redshift also defines a respective line in the r_ d-H_0 plane. However, the slopes of the CMB and BAO lines are very different due to the vast difference in redshifts at which the standard ruler is observed, z_⋆≈ 1100 for CMB vs z_ obs∼ 1 for BAO. This is illustrated in Fig. <ref> that shows the r_ d-H_0 degeneracy line for CMB at three different values of ω_ m, as well as the 68% and 95% confidence level (CL) regions derived from the combination of all presently available BAO observations while treating r_ d as an independent parameter and marginalizing over ω_ m (see <cit.> for details). Also shown are the constraint from Planck (in red) and the SH0ES determination of H_0 (the grey band). One can see from Fig. <ref> that BAO is consistent with either Planck or SH0ES, depending on the value of r_ d, but the latter two are in clear disagreement with each other for the Planck-preferred ω_ m≈ 0.143. One can reconcile Planck with SH0ES by reducing r_ d and “moving up” the CMB r_ d-H_0 degeneracy line. However, doing so at a fixed ω_ m would quickly move the values of r_ d and H_0 out of the purple band in Fig. <ref>, creating a tension with BAO. In order to reconcile CMB, BAO and SH0ES at the same time, one needs to reduce r_ d and increase the matter density, as illustrated in the Figure. In particular, a full resolution of the tension would require ω_ m≈ 0.167, which is substantially higher than the best-fit value. A larger matter density results in more clustering, quantified by the S_8 parameter. It is defined as S_8 ≡σ_8(Ω_m/0.3)^0.5, where σ_8 is the matter clustering amplitude on the scale of 8 h^-1Mpc. A larger S_8 would exacerbate the already existing mild tension between its Planck best-fit value and that measured from weak gravitational lensing of galaxies <cit.>.
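The geometric argument can be made quantitative with a few lines of code. The sketch below is ours; the values θ_⋆ = 1.04109×10^-2 and z_⋆ = 1090 are Planck-like round numbers used only for illustration, and radiation is ignored as in the simplified expression above. It traces the CMB degeneracy line at fixed ω_m, i.e. the value of r_⋆ required to keep θ_⋆ fixed for a given H_0.

```python
import numpy as np
from scipy.integrate import quad

THETA_STAR = 1.04109e-2   # angular size of the sound horizon (Planck-like, illustrative)
Z_STAR = 1090.0           # approximate redshift of recombination (illustrative)

def comoving_distance_Mpc(h, omega_m):
    """D(z_star) in Mpc for a flat LCDM background, ignoring radiation,
    matching the simplified theta_star expression in the text."""
    integrand = lambda z: 1.0 / (np.sqrt(omega_m) *
                                 np.sqrt((1.0 + z)**3 + h**2 / omega_m - 1.0))
    I, _ = quad(integrand, 0.0, Z_STAR)
    return 2998.0 * I

def r_star_on_degeneracy_line(h, omega_m=0.143):
    """Sound horizon r_star (Mpc) required to keep theta_star fixed at a given H0."""
    return THETA_STAR * comoving_distance_Mpc(h, omega_m)

for h in (0.67, 0.70, 0.73):
    print(f"H0 = {100*h:5.1f} km/s/Mpc  ->  r_star = {r_star_on_degeneracy_line(h):6.1f} Mpc")
```

At ω_m = 0.143 the required r_⋆ decreases by several Mpc as H_0 increases from 67 to 73 km/s/Mpc, which is the sense in which a smaller sound horizon "moves up" the degeneracy line; raising ω_m tilts the line, which is why a simultaneous fit to BAO pushes toward larger matter densities.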
The reason why simply reducing r_⋆ would not fully solve the Hubble tension is seen in Fig. <ref>. The figure shows the 68% and 95% CL constraints on S_8 and Ω_m by DES supplemented by the Pantheon SN sample <cit.>. Also shown is the Planck best fit, corresponding to ω_ m = 0.143, as well as the two cases corresponding to ω_ m = 0.155 (Model 2) and ω_ m =0.167 (Model 3). The constraints for the latter two were obtained by simultaneously fitting BAO and CMB acoustic peaks at the corresponding fixed values of ω_ m. One can see that when attempting to bring CMB, BAO and SH0ES in agreement with each other by reducing r_ d, one increases the S_8 tension. Thus, any proposed solution to the Hubble tension that amounts to reducing r_ d without other significant changes to ΛCDM would not be able to raise the CMB-extracted value of H_0 above 70 km/s/Mpc. Baryon clumping belongs to that category of models, hence we do not expect it to fully resolve the Hubble tension. But it could still help relieve it to a statistically acceptable level. §.§ Observational status of the M1 baryon clumping model In the absence of the detailed ionized fraction evolution obtained from MHD simulations with a PMF (see Sec. <ref>), the M1 three-zone model <cit.> was used in <cit.> for exploring the ability of baryon clumping to resolve the Hubble tension. Below we summarize the current observational constraints on the M1 model. Fig. <ref> shows the marginalized posteriors for the clumping factor b and H_0 from Planck, Planck+SH0ES and Planck+SH0ES+BAO+SN, where we used the eBOSS DR16 BAO compilation from <cit.> and SN stands for the Pantheon supernovae sample <cit.>. We see that Planck by itself shows no preference for clumping, with b < 0.47 at 95% CL and only a small increase in the best-fit H_0. Adding the SH0ES prior on H_0 to Planck gives b=0.48 ± 0.19 and H_0 = 70.32 ± 0.85 km/s/Mpc. Adding the BAO and SN data results in b=0.40^+0.15_-0.19 and H_0 = 69.68 ± 0.67 km/s/Mpc, a reduction due to the general difficulty in maintaining the agreement between CMB and BAO while decreasing the sound horizon, as discussed in the previous subsection. Additional constraints on baryon clumping can be derived by adding the high-resolution CMB temperature and polarization data from the Atacama Cosmology Telescope fourth data release (DR4) <cit.> and the South Pole Telescope Third Generation (SPT-3G) 2018 data <cit.>. As discussed in Sec. <ref> and demonstrated in Fig. <ref>, baryon clumping impacts the Silk damping tail of the CMB spectrum, making the spectra at multipoles ℓ > 2000 a powerful probe of baryon clumping. Combining Planck and ACT yields b < 0.34 at 95% CL <cit.>, while Planck+SPT gives b<0.38 at 95% CL <cit.> in the M1 model. Many models that resolve the Hubble tension tend to make the S_8 tension worse, mostly due to the reasons described in the previous subsection. In contrast, the baryon clumping generally reduces the value of S_8 deduced from CMB, thus also helping with another tension. The primary reason for the lower S_8 (and Ω_m) values is that the best fit Ω_mh^2 value in the clumping model is largely the same as in ΛCDM, while h is increased. Fig. <ref> compares the S_8-Ω_m joint posteriors in the Planck+BAO+SN best-fit ΛCDM model to those in the M1 model, together with the DES Year 1 contours. Based on preliminary results from MHD simulations described in the next section, it is clear that one should not attribute too much weight to observational constraints derived from the three-zone model. However, for general reasons discussed in Sec.
<ref>, it is clear that baryon clumping could at best raise the CMB-based value of the Hubble constant to H_0 ∼ 70 km/s/Mpc. This may or may not turn out to be enough but, even if the H_0 tension was not fully relieved, a detection of clumping would be highly interesting by itself, as it would provide (indirect) evidence of the PMF. If no clumping is detected, it would provide the tightest constraint on the PMF strength. § WHAT TO EXPECT FROM MHD SIMULATIONS The three-zone model may capture much of the physics of baryon clumping due to a PMF before recombination. It is not, however, sufficiently accurate for comparison to current and future high precision CMB data, as subtle details are neglected or simply assumed. The M1 three-zone model, employed in earlier sections, was originally introduced in <cit.> to derive the preliminary stringent limit on the PMF strength from the (older) CMB data at that time. It assumed a particular baryon PDF and neglected its evolution. Furthermore, at the time of publication of <cit.>, not all of the effects that impact the ionization history in the presence of PMFs were known. Since then, it became apparent that a complete, exhaustive analysis, employing numerical MHD simulations and Monte Carlo methods for radiation transfer, is required, which is currently underway <cit.>. Fig. <ref> shows results of a recent numerical MHD simulation <cit.>. The left panel shows the projected magnetic energy density and the right panel shows the projected baryon overdensity ρ/⟨ρ⟩ from a 256^3 simulation of a (24 kpc)^3 volume permeated by a stochastic non-helical magnetic field at redshift z = 1000. The initial conditions were a homogeneous baryon density, vanishing peculiar velocities, and a field of root-mean-square amplitude B_ rms = 0.53 nG (corresponding to c_A, rms = 12 c_s) at z = 4500. One can see that substantial baryon density fluctuations δρ /ρ∼ 1 on ∼ kpc scales are generated by redshift z = 1000. Most of the volume is occupied by underdense regions, whereas substantial overdensities exist in small pockets. A small volume fraction of ∼ 10^-3 has overdensities above 10, which are not visible in the figure as it shows the projected density. In comparison, the magnetic field energy density is much more diffuse. The maximum clumping factor b≈ 1.5 is attained around z ≈ 1000. The corresponding baryon PDF is very skew-positive, i.e. has a substantial positive third moment, and is significantly different from the PDF assumed in M1. At redshift z = 1000, the magnetic field has dissipated to B_ rms = 0.2 nG, whereas the “final” field is B_ rms = 0.044 nG at z ≈ 10. The study in <cit.> established several new insights: * Even knowing the full evolution of the baryon PDF is not sufficient for an accurate computation of the ionization history. Rather, the density evolution of each fluid element needs to be known. * The clumping factor receives large contributions from rare very high density regions. These, however, do not contribute much to the bulk of the recombination. Therefore, the clumping factor b is not a useful parameter for gauging the effect on the ionization history. * A fraction of Lyman-α photons emitted locally during the recombination process may actually travel to other regions with different local conditions. The resultant cosmic average ionization is significantly changed due to this Lyman-α transport.
* The drop in the speed of sound during recombination by a factor of 1/√(2), due to the diminishing electron pressure, leads to further baryon compression by the PMF during recombination. The last two points are illustrated in Fig. <ref>, showing that all known effects have to be taken into account for a precision computation of the ionization history. Ultimately, results of such numerical simulations should be used for precision tests against current and future CMB data. A meaningful comparison faces difficulties due to the required immense CPU time. In theory, given the PMF spectrum and helicity, for each magnetic field strength and cosmological parameters such as baryon density, dark matter density, etc., an independent large simulation should be performed. For the large number of cosmological parameters sampled in such tests, this would be impossible. Fortunately, the effects on X_e from changing the cosmological parameters are much smaller than that of the PMF parameters of appreciable amplitude, such that they may be added linearly to a good approximation. Given a particular set of cosmological parameters, numerical simulations still have to be performed for different initial PMF strengths. This can be done for a discrete set of strengths with subsequent interpolation. The numerical simulations have to be of a significant box size to cover a sufficient dynamical range to simulate all the scales which produce density fluctuations and dissipate afterwards between helium and hydrogen recombination, and for the particular realization of the stochastic field (i.e. the choice of the random numbers) to result in a small variance in the results. This points to fairly large simulations and requires extensive amounts of CPU time. § SUMMARY PMFs are often categorized as “exotic physics” because, on one hand, there is no firm theoretical prediction of their existence within standard models of particle physics and cosmology, and also because the observational evidence supporting the PMF hypothesis, while increasingly compelling, is still indirect and can, at least in principle, be explained via less exotic astrophysical processes. However, PMFs are a rather mature field of research, sustained for over half-a-century by a small but capable community of researchers. The reason for this longevity is not (only) due to PMFs often playing the role of a “ghost fairy” (using the term much liked by a certain famous cosmologist) coming to the rescue whenever there is a new unexplained phenomenon. Rather, most of the researchers working on the PMFs are inspired by the fact that their existence in the early universe is inevitable. It is not a question of whether they were generated at some level, e.g. during the known phase transitions, but whether they would be of sufficient strength to make a detectable impact. If there is even a remote chance of detecting them, it is worth trying, as they would provide an invaluable insight and a new window into the physics of the early universe. It is the above context that, in our view, distinguishes the PMF proposal for relieving the Hubble tension from others. It is not a "fairy" explanation invented to solve the puzzle of the day, but rather an observation that if the primordial universe was magnetized at a level sufficient to be relevant to the magnetic fields we see in galaxies and clusters, it could also help with alleviating the H_0 problem.
This is a fully falsifiable proposal – there is no new physics to invent when it comes to working out the details of recombination in the presence of a PMF. The required MHD simulations are challenging but doable, and there will be a firm prediction in a reasonable time that can then be tested against the data. Like other models that aim to relieve the Hubble tension by lowering the sound horizon at recombination, baryon clumping cannot raise the value of H_0 beyond ∼70 km/s/Mpc, which is still over 2σ lower than the SH0ES measurement. However, even if PMFs do not fully resolve the Hubble tension, finding evidence for them in the CMB would be a major discovery in its own right. Alternatively, if not detected, accounting for their impact on recombination when analyzing data from future CMB experiments, such as Simons Observatory and CMB-S4, would lead to the tightest constraints on a PMF. We thank Tom Abel, Lennart Balkenhol, Silvia Galli, Tanmay Vachaspati and Gong-Bo Zhao for collaborations and discussions. This research was enabled in part by support provided by the BC DRI Group and the Digital Research Alliance of Canada (<alliancecan.ca>), and the National Sciences and Engineering Research Council of Canada.
http://arxiv.org/abs/2307.04846v1
20230710183258
Saturation and multifractality of Lagrangian and Eulerian scaling exponents in 3D turbulence
[ "Dhawal Buaria", "Katepalli R. Sreenivasan" ]
physics.flu-dyn
[ "physics.flu-dyn", "cond-mat.soft", "physics.comp-ph" ]
[][email protected] Tandon School of Engineering, New York University, New York, NY 11201, USA Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany Tandon School of Engineering, New York University, New York, NY 11201, USA Department of Physics and the Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA Inertial range scaling exponents for both Lagrangian and Eulerian structure functions are obtained from direct numerical simulations of isotropic turbulence in triply periodic domains at Taylor-scale Reynolds number up to 1300. We reaffirm that transverse Eulerian scaling exponents saturate at ≈ 2.1 for moment orders p ≥ 10, significantly differing from the longitudinal exponents (which are predicted to saturate at ≈ 7.3 for p≥30 from a recent theory). The Lagrangian scaling exponents likewise saturate at ≈ 2 for p ≥ 8. The saturation of Lagrangian exponents and Eulerian transverse exponents is related by the same multifractal spectrum, which is different from the known spectra for Eulerian longitudinal exponents, suggesting that that Lagrangian intermittency is characterized solely by transverse Eulerian intermittency. We discuss possible implication of this outlook when extending multifractal predictions to the dissipation range, especially for Lagrangian acceleration. Saturation and multifractality of Lagrangian and Eulerian scaling exponents in 3D turbulence Katepalli R. Sreenivasan August 12, 2023 ===================================================================================================== Turbulent flows consist of a hierarchy of eddies, with the smaller eddies riding on the larger ones and extracting energy from them. To understand the deformation and rotation of smaller eddies (the key mechanisms driving energy transfers) and not just their translation due to large eddies, one has to consider the velocity increment across a smaller eddy of size r ≪ L (say), where L is the large eddy size <cit.>. The longitudinal velocity increment δ u_r = u(x+r) - u(x) corresponds to the case when the velocity component u(x) is in the direction of separation r. For velocity v(x) taken orthogonal to r, we obtain the transverse velocity increment δ v_r = v(x+r) - v(x). From the seminal work of Kolmogorov <cit.>, K41 henceforth, one surmises that the moment of increments ⟨ (δ u_r)^p ⟩, called structure functions, follow a universal power-law scaling in the so-called inertial-range S_p(r) ≡⟨ (δ u_r)^p ⟩∼ r^ζ_p , η≪ r ≪ L , where η is the viscous cutoff scale. One deduces ζ_p = p/3 from K41 and, ever since, the behavior of ζ_p has been of persistent interest <cit.>. While the result ζ_p=p/3 is known to be exact for p=3, i.e., ζ_3=1, extensive studies from <cit.> to <cit.> (and many in between) have clearly established nonlinear deviations of ζ_p from p/3 for p3. This so-called anomalous scaling is attributed to the intermittency of the scale-to-scale energy transfer processes (see, e.g., <cit.>). Given the natural importance of Lagrangian viewpoint in transport phenomena <cit.>, forceful arguments can be similarly made for the importance of Lagrangian velocity increments δ u_τ = u(t + τ) - u(t) over time lag τ, measured along fluid-particle trajectories, and Lagrangian structure functions ⟨ |δ u_τ |^p ⟩ defined therefrom [absolute value is taken for Lagrangian increments since the odd moments are otherwise zero]. 
Extension of Eulerian phenomenology to the Lagrangian viewpoint leads to the expectation S^L_p (τ) ≡⟨ |δ u_τ|^p ⟩∼τ^ζ_p^L , τ_η≪τ≪ T_L where the temporal inertial-range is defined using T_L, the Lagrangian integral time, and τ_η, the time scale of viscous dissipation <cit.>. Since Lagrangian trajectories trace the underlying Eulerian field, it is natural to expect that a relation between Lagrangian and Eulerian exponents can be predicted. Using K41, one obtains ζ_p^L=p/2 <cit.>; but, experimental and numerical studies once again show nonlinear deviations from that prediction <cit.>. Attempts have been made <cit.> to quantify these deviations in terms of Eulerian intermittency, but they remain challenging for at least two reasons. First, the temporal scaling range in turbulence is substantially more restrictive than the spatial scaling range <cit.>, making it very difficult to extract robust values of the Lagrangian scaling exponents. Second, past attempts have overwhelmingly focused on characterizing Lagrangian intermittency from Eulerian longitudinal intermittency, with the expectation that longitudinal and transverse exponents should be identical, which is clearly not the case <cit.>. In this Letter, presenting new data from direct numerical simulations (DNS) of isotropic turbulence at higher Reynolds numbers, we address both these challenges. We extract both Lagrangian and Eulerian scaling exponents. Our Eulerian results reaffirm the recent results of <cit.>. We then demonstrate an excellent correspondence between Lagrangian exponents and transverse Eulerian exponents, using as basis the same multifractal spectrum for both; this is different from the multifractal spectrum for longitudinal exponents, whose use in the past has failed to explain Lagrangian intermittency <cit.>. *Direct Numerical Simulations: The description of DNS is necessarily brief here because they have already been described and utilized in many recent works <cit.>. The simulations correspond to the canonical setup of forced stationary isotropic turbulence in a triply periodic domain and are carried out using the highly accurate Fourier pseudo-spectral methods in space and second-order Runge-Kutta integration in time; the large scales are numerically forced to achieve statistical stationarity <cit.>. A key feature of the present data is that we have achieved a wide range of Taylor-scale Reynolds number R_λ, going from 140 to 1300 (on grids of up to 12288^3 points), while maintaining excellent small-scale resolution <cit.>. For Lagrangian statistics, a large population of fluid particle trajectories is tracked together with the Eulerian field. For R_λ≤ 650, up to 64M particles are tracked for each case, whereas for R_λ = 1300, 256M particles are tracked (with M=1024^2) <cit.>, providing ample statistics for convergence. *Saturation of transverse exponents: The implication of anomalous scaling is that it confers upon each moment order a separate and independent significance, instead of a mutual dependence (such as ζ_p = p/3 based on K41). Multifractals have enjoyed considerable success in describing this behavior <cit.>, but their theoretical rigor is yet to be established, lacking any direct connection to the Navier-Stokes equations. Further, recent DNS data at high R_λ have shown noticeable departures of ζ_p from multifractal predictions for high orders <cit.>. Instead, starting from the Navier-Stokes equations, a recent theory <cit.> was able to mitigate this situation and provide an improved prediction for ζ_p.
Additionally, this theory also predicts that longitudinal exponents saturate with the moment order, i.e., lim_p→∞ζ_p → constant. We recall that the transverse exponents are defined by the relation S_p^tr∼ r^ζ_p^tr, where S_p^tr (r) ≡⟨ |δ v_r|^p ⟩. (Absolute values are taken because the odd-order transverse structure functions are zero by symmetry.) Multifractal models based on phenomenological considerations do not differentiate between longitudinal and transverse exponents, i.e. ζ_2p^tr = ζ_2p, and general arguments have also been advanced to the same end <cit.>. However, earlier studies have persistently pointed out that the two sets of exponents are different <cit.>, and recent work <cit.> at high R_λ has confirmed the differences; it further showed that transverse exponents saturate with ζ_∞^tr≈ 2.1 for p≥10. Incidentally, this saturation is quite different from ζ_∞≈ 7.3 (for p ≥ 30) as predicted for longitudinal exponents in <cit.>. These findings are summarized in Fig. <ref>, which plots the Eulerian longitudinal exponents from <cit.> (also confirmed by us) and transverse exponents from the present simulations. Also included, besides K41, are two notable multifractal results <cit.> and the result from <cit.>. Important considerations go into establishing the reliability of high-order exponents with respect to the convergence, adequacy of grid resolution, and the Reynolds number. This discussion can be found in <cit.> and will not be repeated here. Instead, we focus on the ζ_p^tr, which clearly and substantially depart from the ζ_p and saturate for p≥ 10. We postpone to the Summary section the implication of different scaling of the longitudinal and transverse exponents for the universality of small-scale turbulence, but demonstrate the relation of ζ_p^tr to the Lagrangian exponents, which we immediately proceed to extract from the present data. *Lagrangian exponents from DNS: Robust extraction of inertial-range exponents depends on sufficient scale separation to allow a proper inertial-range to exist. The Eulerian spatial scale separation for the highest R_λ=1300 is L/η≈ 2500 <cit.>, while the temporal range is T_L/τ_K ≈ 105 <cit.>, thus making it inherently difficult to obtain a proper Lagrangian inertial-range <cit.>. This difficulty is highlighted in Fig. <ref>, which shows the log local slope of S_p^L(τ) at various R_λ, for p=2 and 4 in panels (a) and (b), respectively; although there is a suggestion of a plateau for the fourth-order, the local slopes of the curves are still changing with R_λ. This is in contrast to the corresponding Eulerian result for p=2, shown in Fig. <ref>, where a clear inertial-range emerges as R_λ increases. Because of this difficulty, Lagrangian exponents cannot be directly extracted even at the high R_λ of our DNS. Only by using extended self-similarity <cit.>, with respect to the second-order <cit.>, can one obtain the exponents. This is demonstrated in Fig. <ref>, which shows the ratio of the local slope of S^L_p(τ) to that of S^L_2(τ). Evidently, a conspicuous plateau emerges for different orders in the same scaling range, seemingly independent of R_λ. Thus, we can extract the ratios ζ^L_p/ζ^L_2, which indeed was the practice in earlier works also <cit.>. Note that the justification for using ζ_2^L as the reference arises from the expectation that S_2^L ∼⟨ϵ⟩τ <cit.>; since the mean dissipation appears linearly, the result ζ_2^L=1 is free of intermittency (akin to ζ_3=1 for Eulerian exponents <cit.>). Extending the procedure demonstrated in Fig.
<ref>, we extract the ratios ζ^L_p/ζ^L_2 up to p=10, and show them in Fig. <ref>. We have also included earlier results from both experiments and DNS <cit.>, obtained at comparatively lower R_λ. Overall, the current results at higher R_λ are in excellent agreement with prior results (which, however, have larger error bars). A remarkable observation, endemic to all data sets, is that the Lagrangian exponents saturate for p ≳ 8, similar to the transverse Eulerian exponents in Fig. <ref>. The data in Fig. <ref> are also compared with a number of predictions, which we discuss next. *The multifractal framework: It is obvious from Fig. <ref> that the data are quite far from K41. Following <cit.>, we will consider the well-known multifractal model for relating Eulerian and Lagrangian exponents. The key concept in multifractals is that the (Eulerian) velocity increment δ u_r over a scale r is Hölder continuous, i.e., δ u_r ∼ r^h, where h is the local Hölder exponent with the multifractal spectrum D(h) <cit.>. From this local scaling relation, Eulerian structure functions can be readily derived by integrating over all possible h, as ⟨ (δ u_r)^p ⟩∼∫_h r^{ph + 3 - D(h)} dh. Using the steepest-descent argument for r ≪ L gives ζ_p = inf_h [ ph + 3 - D(h) ] . The Lagrangian extension of multifractals relies on the phenomenological assumption that the spatial separation can be converted to temporal separation using r ∼τδ u_r, with δ u_r ∼δ u_τ <cit.>. This stipulation readily gives δ u_τ∼τ^h/(1-h), resulting in the Lagrangian exponents ζ_p^L = inf_h [ (ph + 3 - D(h))/(1-h) ] . Thus, Lagrangian exponents can be directly predicted using the Eulerian multifractal spectrum D(h). Since most of the past work has focused on Eulerian longitudinal exponents, with the implicit assumption that transverse exponents are the same, the D(h) of the longitudinal exponents has been used to infer Lagrangian exponents. However, such predictions do not work, as we see next. The Lagrangian exponents can be computed from Eq. (<ref>) by using the Eulerian multifractal spectrum D(h) from Eq. (<ref>). The D(h) corresponding to the Eulerian multifractal models shown in Fig. <ref> are plotted in Fig. <ref>. They are obtained from ζ_p by taking a Legendre transform to invert the relations <cit.>, giving D(h) = inf_p [ ph + 3 - ζ_p ]. For reference, the D(h) for the She-Leveque model is <cit.> D(h) = 1 + c_1 (h - h^*) - c_2 (h - h^*) log (h-h^*) where h^*=1/9, c_1 = c_2 (1 + loglogγ - logγ) and c_2 = 3/logγ, with γ=3/2. That for the Sreenivasan-Yakhot result of ζ_p = ζ_∞ p/(p+ β) <cit.> is D(h) = 3 - ζ_∞ - β h + 2 √(ζ_∞β h) where ζ_∞≈ 7.3 and β=3ζ_∞-3. The result for the p-model can be found in <cit.>. In Fig. <ref>, in addition to the D(h) from these known Eulerian cases, we also utilize Eq. (<ref>) to numerically obtain the D(h) for transverse exponents (assuming ζ_p^tr≈ 2.1 for p ≥ 10, as shown in Fig. <ref>). Note that, since the D(h) for ζ_p^tr is obtained numerically, the inversion formula in Eq. <ref> can only provide the concave hull <cit.>—which is what we plot in Fig. <ref>. The saturation value of the exponents is reflected in the corresponding D(h) curve at h=0, as D(0) = 3 - ζ_∞ (≈ 0.9 for ζ_∞^tr≈ 2.1). Note that h<0 is not allowed in the multifractal framework <cit.>; the p-model and She-Leveque results correspond, respectively, to h_ min = -(1/3)log_2 (0.7) ≈ 0.172 <cit.> and h_ min = h^* = 1/9 <cit.>, which preclude saturation.
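The two infima above are one-dimensional minimizations over h and are easy to evaluate numerically. The sketch below is our own; the grid bounds and helper names are arbitrary choices, and natural logarithms are assumed in the She-Leveque constants. It applies both transforms to the She-Leveque spectrum quoted above (the only D(h) given here in closed form), recovering ζ_3 ≈ 1 and ζ_2^L ≈ 1 and showing that, because this spectrum has no support below h^* = 1/9, neither set of exponents saturates.

```python
import numpy as np

GAMMA = 1.5
H_STAR = 1.0 / 9.0
C2 = 3.0 / np.log(GAMMA)
C1 = C2 * (1.0 + np.log(np.log(GAMMA)) - np.log(GAMMA))

def D_she_leveque(h):
    """She-Leveque multifractal spectrum quoted in the text (natural logs assumed)."""
    x = h - H_STAR
    return 1.0 + C1 * x - C2 * x * np.log(x)

h = np.linspace(H_STAR + 1e-6, 0.95, 20000)   # allowed Holder exponents, h >= h*
D = D_she_leveque(h)

def zeta_E(p):
    """Eulerian exponent: zeta_p = inf_h [p h + 3 - D(h)]."""
    return np.min(p * h + 3.0 - D)

def zeta_L(p):
    """Lagrangian exponent: zeta_p^L = inf_h [(p h + 3 - D(h)) / (1 - h)]."""
    return np.min((p * h + 3.0 - D) / (1.0 - h))

print(f"zeta_3   = {zeta_E(3):.3f}  (exact value: 1)")
print(f"zeta_2^L = {zeta_L(2):.3f}  (intermittency-free value: 1)")
for p in (4, 6, 8, 10, 20):
    print(f"p={p:2d}: zeta_p = {zeta_E(p):.2f},  zeta_p^L/zeta_2^L = {zeta_L(p)/zeta_L(2):.2f}")
# Because this D(h) has no support below h* = 1/9 > 0, neither set of exponents
# saturates; for large p, zeta_p -> p*h* + 2. Saturation requires a spectrum
# extending down to h = 0 with D(0) = 3 - zeta_inf.
```

Feeding in a D(h) that extends down to h = 0, such as the concave hull obtained from the measured transverse exponents, is what produces the saturating Lagrangian prediction discussed below.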
The Sreenivasan-Yakhot result <cit.> predicts saturation for longitudinal exponents (at ζ_∞≈ 7.3, giving D(0) = 3 - 7.3 = -4.3; not shown in Fig. <ref>). *Lagrangian exponents from the Eulerian transverse multifractal spectrum: As we have seen, none of the multifractal predictions for Lagrangian exponents using Eulerian longitudinal exponents agree with the data, except for p ≲ 4, where they are even close to the K41 result. In contrast, the prediction corresponding to the Eulerian transverse exponents (green dashed line) closely follows the measured results, particularly capturing the saturation at high orders. Note that the predicted saturation value, ζ_∞^L ≈ 2.1, is the same for both transverse Eulerian and Lagrangian exponents. The actual Lagrangian data, however, saturate at a slightly smaller value. We believe this minor difference (of only 5%) stems from the fact that even at R_λ=1300, the temporal inertial-range is still underdeveloped, and the intermittency-free result of ζ_2^L=1 is not unambiguously realized. Recall that all Lagrangian exponents shown in Fig. <ref> are extracted as ratios ζ_p^L/ζ_2^L. Thus, this minor discrepancy in the saturation values could be explained by small departures from the expectation of ζ_2^L=1. Given this and also the differences among different data sets, the close correspondence between the Eulerian transverse exponents and Lagrangian exponents is quite remarkable. It is worth noting that Lagrangian exponents saturate for slightly smaller p than for Eulerian transverse exponents. This can be explained from Eqs. (<ref>)-(<ref>) as a kinematic effect. For Eulerian exponents, ζ_3 = 1 is an exact result, corresponding to h≈ 1/3, D(h) ≈ 3, which conforms to the intermittency-free K41 result <cit.>. This, in turn, gives ζ_2^L = 1 as the corresponding intermittency-free Lagrangian result, also for h≈1/3, D(h) ≈ 3. This argument can be extended to higher orders to show that Lagrangian exponents of order p correspond to Eulerian transverse exponents of order 3p/2. It simply follows that the saturation of Lagrangian exponents occurs for smaller p. A similar phenomenological correspondence can also be provided for other Lagrangian statistics; for instance, the second moment of acceleration (the temporal gradient of velocity) corresponds to the third moment of the spatial velocity gradient <cit.>. Discussion: Two significant results emerge from this work: (a) scaling exponents saturate for both Eulerian transverse and Lagrangian structure functions; and (b) the saturation of Lagrangian exponents is characterized solely by the Eulerian transverse exponents (and not the longitudinal, as previously believed). Given that the transverse exponents are smaller for large p, this seems reasonable from the steepest-descent argument <cit.>. The saturation of scaling exponents is an extreme form of anomalous behavior, but is not uncommon; it holds for the forced Burgers equation <cit.>, Kraichnan's model of the passive scalar <cit.>, and also DNS results of passive scalars advected by 3D turbulence <cit.>. However, its prevalence in hydrodynamic turbulence itself has only become apparent recently <cit.>. The theory of <cit.> predicts that Eulerian longitudinal exponents saturate as well, although at moment orders so high that the prediction cannot yet be validated.
In contrast, the saturation for transverse exponents occurs for p ≥ 10, as it does for the Lagrangian exponents; this feature makes it possible to relate them through the multifractal spectrum (clearly not possible with the Eulerian longitudinal case). We believe that the saturation of exponents is an important property whose significance has been discussed in <cit.>. Our results also bring forth some important questions. One of them is the extension of the multifractal framework from the inertial range to the dissipative range, i.e., describing the scaling of velocity gradients. Such an extension relies on the phenomenological assumption that the local Reynolds number, describing the dissipative cutoff, is unity, i.e., δ u_r r/ν = 1 <cit.>. As highlighted in recent works <cit.>, this assumption is valid for longitudinal velocity increments, but not for transverse increments. This is essentially because of how vorticity and strain-rate interact in turbulence. It can be expected on this basis that the extension of multifractals to the dissipation range works for longitudinal velocity gradients, but not for transverse velocity gradients. Since Lagrangian intermittency seems to be a direct result of Eulerian transverse intermittency, it also follows that the extension to acceleration statistics would be an issue. Our recent studies <cit.> indeed confirm this. In addition, acceleration components are strongly correlated in turbulence <cit.>; this is not accounted for in multifractals, which are oblivious to Navier-Stokes dynamics. A second question concerns the meaning of universality, given that the longitudinal and transverse exponents behave differently. One strategy could be to consider a joint multifractal spectrum for longitudinal and transverse increments. It might be possible to set appropriate conditions on both to enable the inertial-range universality and the transition from the inertial range to the dissipation range. Essentially, addressing the discrepancy between longitudinal and transverse intermittency presents a critical and pressing problem in turbulence theory. *Acknowledgments: We gratefully acknowledge discussions with Victor Yakhot and sustained collaboration with P.K. Yeung. We also gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for providing computing time on the supercomputers JUQUEEN and JUWELS at Jülich Supercomputing Centre (JSC), where the simulations reported in this paper were primarily performed. Computations were also supported partially by the supercomputing resources under the Blue Water project at the National Center for Supercomputing Applications at the University of Illinois (Urbana-Champaign). [Kolmogorov(1941a)] A. N. Kolmogorov, "The local structure of turbulence in an incompressible fluid for very large Reynolds numbers," Dokl. Akad. Nauk. SSSR 30, 299–303 (1941). [Monin and Yaglom(1975)] A. S. Monin and A. M. Yaglom, Statistical Fluid Mechanics, Vol. 2 (MIT Press, 1975). [Frisch(1995)] U.
[3] U. Frisch, Turbulence: The Legacy of Kolmogorov (Cambridge University Press, Cambridge, 1995).
[4] A. N. Kolmogorov, "A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number," J. Fluid Mech. 13, 82–85 (1962).
[5] K. R. Sreenivasan and R. A. Antonia, "The phenomenology of small-scale turbulence," Annu. Rev. Fluid Mech. 29, 435–77 (1997).
[6] C. W. Van Atta and J. Park, "Statistical self-similarity and inertial subrange turbulence," in Statistical Models and Turbulence: Proceedings of a Symposium held at the University of California, San Diego (La Jolla), July 15–21, 1971 (Springer, 2005), pp. 402–426.
[7] K. P. Iyer, K. R. Sreenivasan, and P. K. Yeung, "Scaling exponents saturate in three-dimensional isotropic turbulence," Phys. Rev. Fluids 5, 054605 (2020).
[8] J. C. Wyngaard, "Atmospheric turbulence," Annu. Rev. Fluid Mech. 24, 205–234 (1992).
[9] B. L. Sawford, "Turbulent relative dispersion," Annu. Rev. Fluid Mech. 33, 289–317 (2001).
[10] G. Falkovich, K. Gawędzki, and M. Vergassola, "Particles and fields in fluid turbulence," Rev. Mod. Phys. 73, 913–975 (2001).
[11] Note: the absolute value is taken for Lagrangian increments since the odd moments are otherwise zero.
[12] B. L. Sawford, P. K. Yeung, M. S. Borgas, P. Vedula, A. La Porta, A. M. Crawford, and E. Bodenschatz, "Conditional and unconditional acceleration statistics in turbulence," Phys. Fluids 15, 3478–3489 (2003).
[13] N. Mordant, E. Lévêque, and J.-F. Pinton, "Experimental and numerical study of the Lagrangian dynamics of high Reynolds turbulence," New J. Phys. 6, 116 (2004).
[14] H. Xu, M. Bourgoin, N. T. Ouellette, and E. Bodenschatz (International Collaboration for Turbulence Research), "High order Lagrangian velocity statistics in turbulence," Phys. Rev. Lett. 96, 024503 (2006).
[15] B. L. Sawford and P. K. Yeung, "Direct numerical simulation studies of Lagrangian intermittency in turbulence," Phys. Fluids 27, 065109 (2015).
[16] M. S. Borgas, "The multifractal Lagrangian nature of turbulence," Philos. Trans. R. Soc. A 342, 379–411 (1993).
[17] L. Biferale, G. Boffetta, A. Celani, B. J. Devenish, A. Lanotte, and F. Toschi, "Multifractal statistics of Lagrangian velocity and acceleration in turbulence," Phys. Rev. Lett. 93, 064502 (2004).
[18] A. Arnéodo et al., "Universal intermittent properties of particle trajectories in highly turbulent flows," Phys. Rev. Lett. 100, 254504 (2008).
[19] B. Dhruva, Y. Tsuji, and K. R. Sreenivasan, "Transverse structure functions in high-Reynolds-number turbulence," Phys. Rev. E 56, R4928–R4930 (1997).
[20] S. Chen, K. R. Sreenivasan, M. Nelkin, and N. Cao, "Refined similarity hypothesis for transverse structure functions in fluid turbulence," Phys. Rev. Lett. 79, 2253–2256 (1997).
[21] D. Buaria and K. R. Sreenivasan, "Scaling of acceleration statistics in high Reynolds number turbulence," Phys. Rev. Lett. 128, 234502 (2022).
[22] D. Buaria and K. R. Sreenivasan, "Dissipation range of the energy spectrum in high Reynolds number turbulence," Phys. Rev. Fluids 5, 092601(R) (2020).
[23] D. Buaria, E. Bodenschatz, and A. Pumir, "Vortex stretching and enstrophy production in high Reynolds number turbulence," Phys. Rev. Fluids 5, 104602 (2020).
[24] D. Buaria and A. Pumir, "Nonlocal amplification of intense vorticity in turbulent flows," Phys. Rev. Research 3, 042020 (2021).
[25] D. Buaria, A. Pumir, and E. Bodenschatz, "Generation of intense dissipation in high Reynolds number turbulence," Philos. Trans. R. Soc. A 380, 20210088 (2022).
[26] D. Buaria and K. R. Sreenivasan, "Intermittency of turbulent velocity and scalar fields using three-dimensional local averaging," Phys. Rev. Fluids 7, L072601 (2022).
[27] T. Ishihara, T. Gotoh, and Y. Kaneda, "Study of high-Reynolds number isotropic turbulence by direct numerical simulations," Annu. Rev. Fluid Mech. 41, 165–80 (2009).
[28] R. S. Rogallo, "Numerical experiments in homogeneous turbulence," NASA Technical Memo (1981).
[29] D. Buaria, A. Pumir, E. Bodenschatz, and P. K. Yeung, "Extreme velocity gradients in turbulent flows," New J. Phys. 21, 043004 (2019).
[30] D. Buaria, B. L. Sawford, and P. K. Yeung, "Characteristics of backward and forward two-particle relative dispersion in turbulence at different Reynolds numbers," Phys. Fluids 27, 105101 (2015).
[31] D. Buaria, P. K. Yeung, and B. L. Sawford, "A Lagrangian study of turbulent mixing: forward and backward dispersion of molecular trajectories in isotropic turbulence," J. Fluid Mech. 799, 352–382 (2016).
[32] D. Buaria and P. K. Yeung, "A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations," Comput. Phys. Commun. 221, 246–258 (2017).
[33] K. R. Sreenivasan and V. Yakhot, "Dynamics of three-dimensional turbulence from Navier-Stokes equations," Phys. Rev. Fluids 6, 104604 (2021).
[34] C. Meneveau and K. R. Sreenivasan, "Simple multifractal cascade model for fully developed turbulence," Phys. Rev. Lett. 59, 1424 (1987).
[35] Z.-S. She and E. Leveque, "Universal scaling laws in fully developed turbulence," Phys. Rev. Lett. 72, 336–339 (1994).
[36] V. S. L'vov, E. Podivilov, and I. Procaccia, "Invariants for correlations of velocity differences in turbulent fields," Phys. Rev. Lett. 79, 2050–2052 (1997).
[37] M. Nelkin, "Multifractal scaling of velocity derivatives in turbulence," Phys. Rev. A 42, 7226–7229 (1990).
[38] S. Grossmann, D. Lohse, and A. Reeh, "Different intermittency for longitudinal and transversal turbulent fluctuations," Phys. Fluids 9, 3817–3825 (1997).
[39] X. Shen and Z. Warhaft, "Longitudinal and transverse structure functions in sheared and unsheared wind-tunnel turbulence," Phys. Fluids 14, 370–381 (2002).
[40] T. Gotoh, D. Fukayama, and T. Nakano, "Velocity field statistics in homogeneous steady turbulence obtained using a high-resolution direct numerical simulation," Phys. Fluids 14, 1065–1081 (2002).
[41] R. Grauer, H. Homann, and J.-F. Pinton, "Longitudinal and transverse structure functions in high-Reynolds-number turbulence," New J. Phys. 14, 063016 (2012).
[42] D. Buaria, Lagrangian Investigations of Turbulent Dispersion and Mixing Using Petascale Computing, Ph.D. thesis, Georgia Institute of Technology (2016).
[43] B. L. Sawford and P. K. Yeung, "Kolmogorov similarity scaling for one-particle Lagrangian statistics," Phys. Fluids 23 (2011).
[44] D. Buaria, "Comment on 'Universality and Intermittency of Pair Dispersion in Turbulence'," Phys. Rev. Lett. 130, 029401 (2023).
[45] R. Benzi, S. Ciliberto, R. Tripiccione, C. Baudet, F. Massaioli, and S. Succi, "Extended self-similarity in turbulent flows," Phys. Rev. E 48, R29–R32 (1993).
[46] A. N. Kolmogorov, "Dissipation of energy in locally isotropic turbulence," Dokl. Akad. Nauk. SSSR 434, 16–18 (1941).
[47] R. Benzi, G. Paladin, G. Parisi, and A. Vulpiani, "On the multifractal nature of fully developed turbulence and chaotic systems," J. Phys. A 17, 3521 (1984).
[48] J. Bec and K. Khanin, "Burgers turbulence," Phys. Rep. 447, 1–66 (2007).
[49] R. H. Kraichnan, "Anomalous scaling of a randomly advected passive scalar," Phys. Rev. Lett. 72, 1016–1019 (1994).
[50] K. P. Iyer, J. Schumacher, K. R. Sreenivasan, and P. K. Yeung, "Steep cliffs and saturated exponents in three-dimensional scalar turbulence," Phys. Rev. Lett. 121, 264501 (2018).
[51] D. Buaria, M. P. Clay, K. R. Sreenivasan, and P. K. Yeung, "Turbulence is an ineffective mixer when Schmidt numbers are large," Phys. Rev. Lett. 126, 074501 (2021).
[52] G. Paladin and A. Vulpiani, "Degrees of freedom of turbulence," Phys. Rev. A 35, 1971–1973 (1987).
[53] D. Buaria and A. Pumir, "Vorticity-strain rate dynamics and the smallest scales of turbulence," Phys. Rev. Lett. 128, 094501 (2022).
[54] D. Buaria and K. R. Sreenivasan, "Lagrangian acceleration in fully developed turbulence and its Eulerian decompositions," Phys. Rev. Fluids 8, L032601 (2023).
[55] A. Tsinober, P. Vedula, and P. K. Yeung, "Random Taylor hypothesis and the behavior of local and convective accelerations in isotropic turbulence," Phys. Fluids 13, 1974–1984 (2001).
http://arxiv.org/abs/2307.03950v1
20230708105034
Mod 2 instanton homology and 4-manifolds with boundary
[ "Kim A. Frøyshov" ]
math.GT
[ "math.GT", "math.DG" ]
Mod 2 instanton homology and 4-manifolds with boundary Kim A. Frøyshov ====================================================== Using instanton homology with coefficients in /2 we construct a homomorphism from the homology cobordism group to the integers which is not a rational linear combination of the instanton h–invariant and the Heegaard Floer correction term d. If an oriented homology 3–sphere Y bounds a smooth, compact, negative definite 4–manifold without 2–torsion in its homology then (Y)≥0, with strict inequality if the intersection form is non-standard. empty plain § INTRODUCTION This paper will introduce an integer invariant (Y) of oriented integral homology 3–spheres Y. This invariant is defined in terms of instanton cohomology with coefficients in /2 and may be regarded as a mod 2 analogue of the h–invariant <cit.>, which was defined with rational coefficients. Both invariants grew out of efforts to extend Donaldson's diagonalization theorem <cit.> to 4–manifolds with boundary. We will use the instanton (co)homology originally introduced by Floer <cit.>, an exposition of which can be found in <cit.>. With coefficients in /2, instanton cohomology I(Y;/2) comes equipped with some extra structure, namely two “cup products” u_2 and u_3 of degrees 2 and 3, respectively, and homomorphisms I^4(Y;/2)_0⟶/2_0'⟶ I^1(Y;/2) counting index 1 trajectories running into and from the trivial flat 2 connection, respectively. This extra structure enters in the definition of the invariant q_2, which is given in Section <ref>. Reversing the rôles of the cup products u_2,u_3 in the definition yields another invariant q_3. However, the present paper will focus on . It would be interesting to try to express the invariants h,q_2,q_3 in terms of the equivariant instanton homology groups recently introduced by Miller Eismeier <cit.>. We now describe some properties and applications of . For any oriented homology 3–spheres Y_0 and Y_1 one has (Y_0#Y_1)=(Y_0)+(Y_1). The proof of additivity is not quite straightforward and occupies more than half the paper. thm[Monotonicity] Let W be a smooth compact oriented 4-manifold with boundary W=(-Y_0)∪ Y_1, where Y_0 and Y_1 are oriented homology 3–spheres. Suppose the intersection form of W is negative definite and H^2(W;) contains no element of order 4. Then (Y_0)≤(Y_1). If the manifold W in the theorem actually satisfies b_2(W)=0 then one can apply the theorem to -W as well so as to obtain (Y_0)=(Y_1). This shows that descends to a group homomorphism →, where is the integral homology cobordism group. We observe that the properties of described so far also hold for the instanton h–invariant, the negative of its monopole analogue <cit.>, and the Heegaard Floer correction term d. Note that the latter three invariants are monotone with respect to any negative definite cobordism, without any assumption on the torsion in the cohomology. thm[Lower bounds] Let X be a smooth compact oriented 4-manifold whose boundary is a homology sphere Y. Suppose the intersection form of X is negative definite and H^2(X;) contains no 2-torsion. Let J_X:=H^2(X;)/torsion, and let w be an element of J_X which is not divisible by 2. Let k be the minimal square norm (with respect to the intersection form) of any element of w+2J_X. Let n be the number of elements of w+2J_X of square norm k. If k≥2 and n/2 is odd then equation* (Y)≥k-1. By an integral lattice we mean a free abelian group of finite rank equipped with a symmetric bilinear integer-valued form. 
Such a lattice is called odd if it contains an element of odd square; otherwise it is called even. cor Let X be as in Theorem <ref>. Let J_X⊂ J_X be the orthogonal complement of the sublattice of J_X spanned by all vectors of square -1, so that J_X is an orthogonal sum J_X=m-1⊕ J_X for some non-negative integer m. description (i)If J_X≠0, i.e. if J_X is not diagonal, then (Y)≥1. (ii)If J_X is odd then (Y)≥2. To deduce (i) from the theorem, take C:=v+2J_X where v is any non-trivial element of J_X of minimal square norm. To prove (ii), choose a v with minimal odd square norm. thm Let Y be the result of (-1) surgery on a knot K in S^3. If changing n^- negative crossings in a diagram for K produces a positive knot then 0≤(Y)≤ n^-. For k≥2 the Brieskorn sphere (2,2k-1,4k-3) is the boundary of a plumbing manifold with intersection form -_4k (see Section <ref>), and it is also the result of (-1) surgery on the (2,2k-1) torus knot. In these examples the upper bound on given by Theorem <ref> turns out to coincide with the lower bound provided by Theorem <ref>, and one obtains the following. For k≥2 one has ((2,2k-1,4k-3))=k-1. On the other hand, by <cit.> one has h((2,2k-1,4k-3))=⌊ k/2⌋, and in these examples the correction term d satisfies d=h/2, as follows from <cit.>. This shows: The invariant is not a rational linear combination of the h–invariant and the correction term d.□ In particular, h,:→ are linearly independent homomorphisms, and the same is true for d,. It follows from this that has a ^2 summand. However, much more is true: Dai, Hom, Stoffregen, and Truong <cit.> proved that has a ^∞ summand. Their proof uses involutive Heegaard Floer homology. The monotonicity of the invariants h,d, leads to the following result. Let Y by an oriented homology 3-sphere. If min(h(Y),d(Y))<0<(Y) then Y does not bound any definite 4-manifold without elements of order 4 in its second cohomology. An explicit example to which the theorem applies is 2(2,5,9)#-3(2,3,5). A related result was obtained by Nozaki, Sato, and Taniguchi <cit.>. Using a filtered version of instanton homology they proved that certain linear combinations of Brieskorn homology 3–spheres do not bound any definite 4–manifold. If an oriented homology 3-sphere Y satisfies h(Y)≤0<(Y) then I^5(Y;) contains 2–torsion, hence Y is not homology cobordant to any Brieskorn sphere (p,q,r). We conclude this introduction with two sample applications of the invariant . Let X be a smooth compact oriented connected 4-manifold whose boundary is the Poincaré sphere (2,3,5). Suppose the intersection form of X is negative definite. Let J_X be as in Corollary <ref>. (i) If J_X is even then J_X=0 or -E_8. (ii) If J_X is odd then H^2(X;) contains an element of order 4. Earlier versions of this result were obtained using instanton homology in <cit.> (assuming X is simply-connected) and in <cit.> (assuming X has no 2–torsion in its homology). There are up to isomorphism two even, positive definite, unimodular forms of rank 16, namely 2E_8 and _16. If Z denotes the negative definite E_8–manifold then the boundary connected sum Z#_Z has intersection form -2E_8. It is then natural to ask whether (2,3,5)#(2,3,5) also bounds -_16. There appears to be no obstruction to this coming from the correction term. Let X be a smooth compact oriented 4-manifold whose boundary is (2,3,5)#(2,3,5). Suppose the intersection form of X is negative definite and H^2(X;) contains no 2–torsion. If J_X is even then J_X=0, -E_8, or -2E_8. 
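The hypotheses of the lower-bound theorem above involve only elementary counting in the lattice J: the minimal square norm k on the coset w+2J and the number n of minimizers. The brute-force sketch below is ours, not part of the paper; the function name and the diagonal example are purely illustrative, and the scan is reliable only when the coefficient box is large enough for the given Gram matrix.

import itertools
import numpy as np

def minimal_coset_vectors(G, w, box=3):
    # For a negative definite Gram matrix G, compute k = min |x^2| over x in w + 2J
    # and the number n of vectors attaining it, scanning coefficients in [-box, box].
    m = len(w)
    best, count = None, 0
    for c in itertools.product(range(-box, box + 1), repeat=m):
        x = np.array(w) + 2 * np.array(c)
        q = -int(x @ G @ x)            # |x^2|, since G is negative definite
        if best is None or q < best:
            best, count = q, 1
        elif q == best:
            count += 1
    return best, count

# Example: the diagonal lattice <-1>^3 with w = (1, 1, 1).
G = -np.eye(3, dtype=int)
k, n = minimal_coset_vectors(G, [1, 1, 1])
print(k, n)   # prints 3 8: here n/2 = 4 is even, so the theorem gives no bound

For the standard diagonal form one always gets either k = 1 or n/2 even, so the cases in which the theorem does give a bound require non-diagonal forms such as the plumbing lattices discussed above.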
Further results on the definite forms bounded by a given homology 3–sphere were obtained by Scaduto <cit.>. Some of the results of this paper were announced in various talks several years ago. The author apologizes for the long delay in publishing the results. § THE BASE-POINT FIBRATION Let X be a connected smooth n–manifold, possibly with boundary, and P→ X a principal 3 bundle. Fix p>n and let A be a p1 connection in P. This means that A differs from a smooth connection by a 1–form which lies locally in L^p_1. Let _A be the group of p2 automorphisms (or gauge transformations) of P that preserve A. The connection A is called * irreducible if _A={1}, otherwise reducible; * Abelian if _A≈1; * twisted reducible if _A≈/2. Note that a non-flat reducible connection in P is either Abelian or twisted reducible. Recall that automorphisms of P can be regarded as sections of the bundle P3×3 of Lie groups, where 3 acts on itself by conjugation. An automorphism is called even if it lifts to a section of P3×2. A connection A in P is called even-irreducible if its stabilizer _A contains no non-trivial even automorpisms, otherwise A is called even-reducible. A non-flat connection is even-reducible if and only if it is Abelian. Now suppose X is compact and let be the space of all L^p_1 connections in P. The affine Banach space is acted upon by the Banach Lie group consisting of all L^p_2 automorphisms of P. Let ^*⊂ be subset of irreducible connections and define =/. The irreducible part ^*⊂ is a Banach manifold, and it admits smooth partitions of unity provided p>n is an even integer, which we assume from now on. Instead of ^* we often write ^*(P), or ^*(X) if the bundle P is trivial. Similarly for , etc. Let ^* be the space of all even-irreducible L^p_1 connections in P. Let be the group of even p2 automorphisms of P. As explained in <cit.>, there is an exact sequence 1→→→ H^1(X;/2)→0. The quotient ^*=^*/ is a Banach manifold. Let X be a topological space. (i) A class v∈ H^2(X;/2) is called admissible if v has a non-trivial pairing with a class in H_2(X;), or equivalently, if there exist a closed oriented 2–manifold and a continuous map f:→ X such that f^*v≠0. If and f can be chosen such that, in addition, f^*a=0 for every a∈ H^1(X;/2), then v is called strongly admissible. (ii) An 3 bundle E→ X is called (strongly) admissible if the Stiefel-Whitney class w_2(E) is (strongly) admissible. For example, a finite sum v=∑_ia_i∪ b_i with a_i,b_i∈ H^1(X;/2) is never strongly admissible. Let X be a compact, oriented, connected smooth 4–manifold with base-point x∈ X. Let P→ X be an 3 bundle. (i) If P is admissible then the 3 base-point fibration over ^*(P) lifts to a 2 bundle. (ii) If P is strongly admissible then the 3 base-point fibration over ^*(P) lifts to a 2 bundle. We spell out the proof of (ii), the proof of (i) being similar (or easier). Let be a closed oriented surface and f:→ X a continuous map such that f^*P is non-trivial and eqn:fa0 holds. We can clearly arrange that is connected. Because X≥2 it follows from <cit.> that f can be uniformly approximated by (smooth) immersions f_0. Moreover, if the approximation is sufficiently good then f_0 will be homotopic to f. Therefore, we may assume f is an immersion. Since base-point fibrations associated to different base-points in X are isomorphic we may also assume that x lies in the image of f, say x=f(z). We adapt the proof of <cit.>, see also <cit.>. Let →^*:=^*(P) be the oriented Euclidean 3–plane bundle associated to the base-point fibration. 
We must find an Hermitian 2-plane bundle such that is isomorphic to the bundle ^0_ of trace-free skew-Hermitian endomorphisms of . Let E→ X be the standard 3–plane bundle associated to P. Choose an Hermitian 2–plane bundle W→ together with an isomorphism ϕ:^0_W≈→ f^*E, and fix a connection A_,det in (W). Any (orthogonal) connection A in E induces a connection in f^*E which in turn induces a connection A_ in W with central part A_,det. Choose a spin structure on and let S^*± be the corresponding spin bundles over . For any connection A in E let _,A:S^+⊗ W→ S^-⊗ W be the Dirac operator coupled to A_. If A is an L^p_1 connection, p>4, and A_0 is a smooth connection in E then A-A_0 is continuous, hence _,A-_,A_0 defines a bounded operator L^2→ L^2 and therefore a compact operator L^2_1→ L^2. Let :=(_,W) be the determinant line bundle over (E) associated to the family of Fredholm operators _,A:L^2_1→ L^2. Then automorphism (-1) of W acts on with weight equal to the numerical index of _,A. According to Atiyah-Singer's theorem <cit.> this index is (_,A)={ch(W)Â()}·[]=c_1(W)·[]. But the mod 2 reduction of c_1(W) equals f^*(w_2(E)), which is non-zero by assumption, so the index is odd. The assumption eqn:fa0 means that every automorphism of E pulls back to an even automorphism of f^*E. Moreover, every even automorphism of f^*E≈^0_W lifts to an automorphism of W of determinant 1, the lift being well-defined up to an overall sign since is connected. Because the automorphism (-1) of W acts trivially on ⊗ W_z this yields an action of (E) on ⊗ W_z. The quotient :=(⊗ W_z)/(E) is a complex 2-plane bundle over ^*(E). We claim that there is an Hermitian metric on such that on every fibre _A there is an Hermitian metric for which the projection _A⊗ W_z→_[A] is an isometry. To see this, let S⊂(E) be any local slice for the action of (E), so that S projects diffeomorphically onto an open subset U⊂^*(E). Choose any Hermitian metric on |_S and let g_U be the induced Hermitian metric on _U≈(⊗ W_z)|_S. Now cover ^*(E) by such open sets U and patch together the corresponding metrics g_U to obtain the desired metric on . Given any Hermitian metric on a fibre _A there are linear isometries ^0__A⊗ W_z≈→^0_W_z≈→ E_x, where the first isometry is canonical and independent of the chosen metric on _A and the second one is given by ϕ. This yields an isomorphism ^0_≈→.□ § MODULI SPACES Let P→ Y be a principal 3 bundle, where Y is a closed oriented 3–manifold. The Chern-Simons functional :(P)→/ is determined up to an additive constant by the property that if A is any connection in the pull-back of P to the band [0,1]× Y then (A_1)-(A_0)=∫_[t_0,t_1]× Y F_A∧ F_A, where A_t denotes the restriction of A to the slice {t}× Y, and ·∧· is formed by combining the wedge product on forms with minus the Killing form on the Lie algebra of 3. If P=Y×3 then we normalize so that its value on the product connection θ is zero. If v is any automorphism of P then for any connection B in P one has (v(B))-(B)=-1/2(v), where the degree (v) is defined to be the intersection number of v with the image of the constant section 1. Equation eqn:csdeg, up to an overall sign, was stated without proof in <cit.>. A proof of eqn:csdeg can be obtained by first observing that the left-hand side of the equation is independent of B, and both sides define homomorphisms from the automorphism group of P into . Replacing v by v^2 it then only remains to verify the equation for even gauge transformations, which is easy. 
If v lifts to a section v of P3×2 then (v)=2( v), where ( v) is the intersection number of v with the image of the constant section 1. In particular, every even automorphism of P has even degree. The critical points of the Chern-Simons functional are the flat connections in P. In practice, we will add a small holonomy perturbation to as in <cit.>, but this will usually not be reflected in our notation. Let (P) denote the space of all critical points of modulo even automorphisms of P. The even-reducible part of (P) is denoted by ^*(P). If Y is an (integral) homology sphere then P is necessarily trivial and we write (Y)=(P). Now let X be an oriented Riemannian 4–manifold with tubular ends [0,∞)× Y_i, i=0,…,r, such that the complement of :=⋃_i [0,∞)× Y_i is precompact. We review the standard set-up of moduli spaces of anti-self-dual connections in a principal 3 bundle Q→ X, see <cit.>. Given a flat connection ρ in Q|_, we define the moduli space M(X,Q;ρ) as follows. Choose a smooth connection A_0 in Q which agrees with ρ outside a compact subset of X. We use the connection A_0 to define Sobolev norms on forms with values in the adoint bundle _Q of Lie algebras associated to Q. Fix an even integer p>4. Let =(Q) be the space of connections in Q of the form A_0+a with a∈ pw1, where w is a small, positive exponential weight as in <cit.>. There is a smooth action on by the Banach Lie group consisting of all p2 gauge transformation u of Q such that ∇_A_0u· u∈ pw1. Let :=/ and let M(X,Q;ρ) be the subset of consisting of gauge equivalence classes of connections A satisfying F^+_A=0. In practice, we will often add a small holonomy perturbation to the ASD equation, but this will usually be suppressed from notation. We observe that the value of the Chern-Simons integral (Q,ρ):=-1/8π^2∫_X F_A∧ F_A is the same for all A∈. (If X is closed then the right hand side of Equation eqn:ka-int equals the value of -p_1(Q) on the fundamental class of X. This normalization will be convenient in Section <ref>.) If u is an automorphism of Q|_ then from Equations eqn:cs-int-band and eqn:csdeg we deduce that (Q,u(ρ))-(Q,ρ)=2∑_i(u_i), where u_i is the restriction of uto the slice {0}× Y_i. Similarly, for the expected dimensions we have M(X,Q;u(ρ))-M(X,Q;ρ)=4∑_i(u_i). On the other hand, if u extends to a smooth automorphism of all of Q then ∑(u_i)=0, and the converse holds at least if u is even. Given the reference connection A_0, we can identify the restriction of the bundle Q to an end [0,∞)× Y_i with the pull-back of a bundle P_i→ Y_i. Let _i∈(P_i) be the element obtained by restricting ρ to any slice {t}× Y_i where t>0. We will usually assume that each _i is non-degenerate. The above remarks show that the moduli space M(X,Q;ρ) can be specified by the r–tuple =(_1,…,_r) together with one extra piece of data: Either the Chern-Simons value =(Q,ρ) or the expected dimension d of M(X,Q;ρ). We denote such a moduli space by M_(X,Q;) or M_(d)(X,Q;). Note that for given there is exactly one moduli space M_(d)(X,Q;) with 0≤ d≤7; this moduli space will just be denoted by M(X,Q;). For any anti-self-dual connection A over X, the energy _A(Z) of A over a measurable subset Z⊂ X is defined by _A(Z):=-∫_Z F_A∧ F_A =∫_Z|F_A|^2. If X= and Z=I× Y for some interval I then we write _A(I) instead of _A(I× Y). § SPACES OF LINEARLY DEPENDENT VECTORS This section provides background for the definition of the cup product u_2 as well as results which will be used in the proof of Proposition <ref>. 
For any finite-dimensional real vector space V set L(V):={(v,w)∈ V⊕ Vv,w are linearly dependent in V}. Then L(V) is closed in V⊕ V and L^*(V):=L(V)∖{(0,0)} is a smooth submanifold of V⊕ V of codimension n-1, where n is the dimension of V. As a short-hand notation we will often write v∧ w=0 to express that v,w are linearly dependent. If B is any smooth Banach manifold and π:E→ B a smooth real vector bundle of finite rank let L^*(E)→ B be the associated smooth fibre bundle whose fibre over a point x∈ B is L^*(E_x), where E_x=π(x). Similarly, let L(E)→ B be the topological fibre bundle with fibre L(E_x) over x. Let ℓ→ S^1 be the non-trivial real line bundle such that for z∈ S^1 the fibre of ℓ over z^2 is the line z in . Let E:=E× S^1 and ℓ:=B×ℓ be the pull-backs of the bundles E and ℓ, respectively, to B× S^1. We identify R^2=, so that (a,b)=a+bi for real numbers a,b. Let s=(s_1,s_2) be a nowhere vanishing smooth section of E⊕ E. Let be the section of E⊗ℓ such that for any p∈ B and z=(x_1,x_2)∈ S^1 one has (p,z^2)=(x_1s_1(p)+x_2s_2(p))⊗ z. (i) The projection B× S^1→ B maps the zero-set of bijectively onto the locus in B where s_1,s_2 are linearly dependent. (ii) A zero (p,w) of is regular if and only if s is transverse to L^*(E) at p. The proof of (i) is left as an exercise. To prove (ii) we may assume E is trivial, so that s_j is represented by a smooth map f_j:B→ V for some finite-dimensional real vector space V. We observe that for any u_1,u_2∈ V and z=(x_1,x_2)∈ S^1 one has (u_1,u_2)=(x_1u_1+x_2u_2)⊗ z+(x_1u_2-x_2u_1)⊗ iz as elements of V⊕ V=V⊗_. It follows that the tangent space of L^*(V) at a point (v_1,v_2) which satisfies x_1v_1+x_2v_2=0 is given by T_(v_1,v_2)L^*(V)=V⊗ iz+(x_1v_2-x_2v_1)⊗ z. Now suppose (p,w) is a zero of and s(p)=(v_1,v_2), z^2=w. Then eqn:tlv holds. Let L_j:T_pB→ V be the derivative of f_j at p. Then (p,w) is a regular zero of precisely when V is spanned by the vector x_1v_2-x_2v_1 together with the image of the map x_1L_2+x_2L_2. From eqn:u1u2 we see that the latter condition is also equivalent to s being transverse to L^*(V) at p.□ We record here a description of the sections of E⊗ℓ which will be used in the proof of Proposition <ref> below. Let _a( E) denote the space of all sections s∈( E) such that s(p,-z)=-s(p,z) for all (p,z)∈ B× S^1. Then there is a canonical real linear isomorphism ( E⊗ℓ)→_a( E), ↦ characterized by the fact that (p,z^2)=(p,z)⊗ z for all (p,z)∈ B× S^1.□ If B is finite-dimensional, the bundle E has rank 3, and s is a generic smooth section of E⊕ E then s(L(E)) represents the Poincaré dual of the second Stiefel-Whitney class w_2(E) in the following sense. Given any class a∈ H_2(B;/2), represented by a generic smooth map f:→ B where is a closed surface, then a,w_2(E)≡#(s∘ f)(L(E))2. § “GENERIC” SECTIONS Let B be a smooth Banach manifold and π:E→ B a smooth real vector bundle of finite rank. If B is infinite-dimensional then we do not define a topology on the space (E) of (smooth) sections of E, so it makes no sense to speak about residual subsets of (E). Instead, we will say a subset Z⊂(E) is “residual” (in quotation marks) if there is a finite-dimensional subspace ⊂(E) such that for every finite-dimensional subspace '⊂(E) containing and every section s of E there is a residual subset ⊂' such that s+⊂ Z. Note that “residual” subsets are non-empty, and any finite intersection of “residual” subsets is again “residual”. 
We will say a given property holds for a “generic” section of E if it holds for every section belonging to a “residual” subset of (E). We indicate one way of constructing such subspaces . Suppose B supports smooth bump functions, i.e. for any point x∈ B and any neighbourhood U of x there exists a smooth function c:B→ such that c(x)≠0 and c=0 outside U. Given a compact subset K of B, one can easily construct a finite-dimensional subspace ⊂(E) such that, for every x∈ K, the evaluation map → E_x, s↦ s(x) is surjective. Therefore, if we are given a collection of smoooth maps f_k:M_k→ B, k=1,2,…, where each M_k is a finite-dimensional manifold and the image of each f_k is contained in K then, for a “generic” section s of E, the map s∘ f_k:M_k→ E is transverse to the zero-section in E for each k. § INSTANTON COHOMOLOGY AND CUP PRODUCTS In this section we will work with 3 connections modulo even gauge transformation (see Section <ref>), although this will not be reflected in our notation. In particular, we write ^* instead of ^*. This notational convention applies only to this section. (In Subsection <ref>, which only deals with homology spheres, the convention is irrelevant.) §.§ Instanton cohomology Let Y be a closed oriented connected 3-manifold and P→ Y an 3 bundle. If Y is not an homology sphere then we assume P is admissible. For any ,β∈(P) let M(,β) denote the moduli space of instantons in the bundle × P→ with flat limits at -∞ and β at ∞ and with expected dimension in the interval [0,7]. Let (,β)=M(,β)/, where acts by translation. If ,β are irreducible then the relative index (,β)∈/8 is defined by (,β)= M(,β)8. For any commutative ring R with unit we denote by I(P;R) the relatively /8 graded instanton cohomology with coefficients in R as defined in <cit.>. Recall that this is the cohomology of a cochain complex (C(P;R),d) where C(P;R) is the free R–module generated by ^*(P) and the differential d is defined by d=∑_β#(,β)·β. Here, # means the number of points counted with sign, and the sum is taken over all β∈^*(P) satisfying (,β)=1. If P is admissible then ^*(P)=(P). If instead Y is an homology sphere then (P)=(Y) contains exactly one reducible point θ, represented by the trivial connection. The presence of the trivial connection provides C(P;R)=C(Y;R) with an absolute /8 grading defined by ()= M(θ,)8. The trivial connection also gives rise to homomorphisms C^4(Y;R)→ R'→ C^1(Y;R) defined on generators by =#(,θ), 1=∑_β#(θ,β)·β, where we sum over all β∈^*(Y) of index 1. These homomorphisms satisfy d=0 and d'=0 and therefore define I^4(Y;R)_0→ R_0'→ I^1(Y;R). We conclude this subsection with some notation for energy. If A is any ASD connection in the bundle Q:=× P and I is any interval then we write _A(I) instead of _A(I× Y). Moreover, if ,β∈(Y) and the moduli space M(,β) is expressed as M(,Q;ρ) in the notation of Section <ref> then we define (,β):=1/4(Q,ρ), which equals the total energy of any element of M(,β). (Note, however, that M(,β) may be empty.) §.§ Cup products We continue the discussion of the previous subsection, assuming P is admissible unless Y is an homology sphere. In most of this paper the coefficient ring R will be /2, and we write I(P):=I(P;/2). For j=2,3 we will define a degree j endomorphism u_j:I^*(P)→ I^*+j(P). Insofar as the Floer cohomology is some kind of Morse cohomology of ^*(P), one may think of u_j as cup product with the jth Stiefel-Whitney class of the base-point fibration over ^*(P). 
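The degree count behind this heuristic is elementary, and we record it here for convenience (this remark is ours and uses only the dimension formula for L^*(V) from the previous section): for a rank-3 bundle, the zero locus of one generic section is Poincaré dual to w_3 and has codimension 3, while the locus where two generic sections become linearly dependent is dual to w_2 and has codimension 2, matching the degrees of u_3 and u_2.

\[
\dim L^*(V) = n + 1, \qquad
\operatorname{codim}\!\left(L^*(V) \subset V \oplus V\right) = 2n - (n+1) = n - 1 = 2 \quad (n = 3),
\qquad
\operatorname{codim}\{\, s = 0 \,\} = n = 3 .
\]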
The map u_j will be induced by an endomorphism v_j:C^*(P)→ C^*+j(P) which we now define. For any t∈ set t:=[t-1,t+1]× Y. Let P_0=[-1,1]× P denote the pull-back of the bundle P to 0. For any ,β∈(P) and any irreducible point ∈ M(,β) let [t]:=|_Y[t]∈^*(P_0) denote the restriction of to the band Y[t]. (The fact that [t] is irreducible follows from Proposition prop:unique-continuation-cylinder.) Choose a base-point y_0∈ Y, and let →^*(P_0) be the natural real vector bundle of rank 3 associated to the base-point (0,y_0)∈0. To define v_3, choose a “generic” smooth section s_1 of . For any ,β∈^*(P) with (β)-()≡38 the matrix coefficient v_3,β is defined to be equation v_3,β:=#{∈M(,β)s_1([0])=0}, where # means the number of points counted modulo 2. To define v_2, let s_2,s_3 be a pair of smooth sections of which define a “generic” section of ⊕. For any ,β∈^*(P) with (β)-()≡28 the matrix coefficient v_2,β is defined to be equation v_2,β:= #{∈M(,β)s_2,s_3 are linearly dependent at [0]}. Note that, for dimensional reasons, s_2 and s_3 cannot simultaneously vanish at [0] for any ∈ M(,β). prop For j=2,3 one has dv_j=v_jd as homomorphisms C^*(P)→ C^*+j+1(P). To prove this for j=2, let ,β∈^*(P) with (β)-()≡38. The number of ends of the 1-manifold {∈ M(,β)s_2,s_3 are linearly dependent at [0]}, counted modulo 2, is (dv_2+v_2d),β. Since the number of ends must be even, this proves the assertion for j=2. The case j=3 is similar. □ The homomorphism u_j:I^*(P)→ I^*+j(P) induced by v_j is independent of the sections s_i. For u_3 this will follow from Lemma <ref> below, and a similar argument works for u_2. We consider again the bundle P_0=[-1,1]× P over Y[0]=[-1,1]× Y. Let U be an open subset of ^*(P_0) such that for all ,β∈^*(P) with (,β)≤3 and every ∈ M(,β) one has that [0]∈ U. A section s of |_U is said to satisfy Property 3 if for all ,β as above the map M(,β)→, ↦ s([0]) is transverse to the zero-section in . Let U⊂^*(P_0) be as in Definition <ref> and suppose s,s' are sections of |_U satisfying Property 3. Let v_3,v'_3 be the corresponding cup products defined as in eqn:v3def. Then there is an endomorphism H:C(P)→ C(P) such that v_3+v'_3=dH+Hd. For a “generic” section of the map f_β:M(,β)×[0,1]→, ↦(1-t)s([0])+ts'([0])+t(1-t)([0]) is transverse to the zero-section whenever (,β)≤3. Fix such a and let Z_β denote the zero-set of f_β. If (,β)=2 then Z_β is a finite set. Let H be the homomorphism with matrix coefficients H,β=#Z_β. If (,β)=3 then Z_β is a compact 1–manifold-with-boundary. Counted modulo 2, the number of boundary points of Z_β is (v_3+v'_3),β, whereas the number of ends is (dH+Hd),β. These two numbers must agree, proving the lemma.□ Let W be a smooth, compact, oriented, connected 4–manifold with two boundary components, say W=-Y_0∪ Y_1. Let Q→ W be an 3 bundle, and let P_i be the restriction of Q to Y_i. Suppose one of the following two conditions holds. (i) At least one of the bundles P_0,P_1 is admissible. (ii) Both Y_0 and Y_1 are homology spheres, the bundle Q is trivial, and H_1(W;)=0 and b_+^2(W)=0. Then the homomorphism T:I(P_0)→ I(P_1) induced by (W,Q) satisfies Tu_j=u_jT for j=2,3. Moreover, if (ii) holds then T=:I^4(Y_0)→/2.□ If P→ Y is an admissible 3 bundle then u_3=0 on I(P). By Proposition <ref> there is an Hermitian 2–plane bundle →^* such that ≈^0_. For a “generic” section s of , we have s([0])≠0 whenever lies in a moduli space M(,β) of dimension at most 3. Given such a section s, let U be the open subset of ^* where s≠0. 
Then |_U splits as an orthogonal sum |_U=⊕ L of two complex line bundles. Hence |_U has a nowhere vanishing trace-free skew-Hermitian endomorphism ( [ i 0; 0 -i ]). This yields a non-vanishing section s' of |_U. Let s be the restricion to U of a “generic” section of , and let v_3,v'_3 be the cup products defined by s,s', respectively. Then v'_3=0, so by Lemma <ref> we have v_3=dH+Hd. By definition, v_3 induces the cup product u_3 in cohomology, so u_3=0.□ Let Y be an oriented homology 3–sphere and Y' the result of (±1) surgery on a knot in Y. Let n be a non-negative integer. (i) If (u_3)^n=0 on I(Y) then (u_3)^n+1=0 on I(Y'). (ii) If (u_2)^n=0 on I(Y) and has genus 1 then (u_2)^n+1=0 on I(Y'). If R is a commutative ring and A⟶ B⟶ C an exact sequence of modules over the polynomial ring R[u] such that u^m=0 on A and u^n=0 on C for non-negative integers m,n then u^m+n=0 on B. (Here, u^0 acts as the identity map.) Now suppose Y' is (-1) surgery on . (If instead Y' is (+1) surgery on then the proof is similar with the roles of Y,Y' reversed.) Let Y” be 0 surgery on and I(Y”) the instanton cohomology of the non-trivial 3 bundle over Y”. We apply the above observation to the long exact surgery sequence (see <cit.>) ⋯→ I(Y”)→ I(Y)→ I(Y')→ I(Y”)→⋯ Statement (i) now follows from Proposition <ref>. To prove (ii), recall that if P_T^3 is a non-trivial 3 bundle over the 3–torus then I(P_T^3) is non-zero in two degrees differing by 4 modulo 8 and zero in all other degrees. Therefore, u_2=0 on I(P_T^3). If has genus 1 then by arguing as in the proof of <cit.> we find that u_2=0 on I(Y”), from which (ii) follows.□ As a special case of Proposition <ref> we have the following corollary. If Y is (±1) surgery on a knot in S^3 then u_3=0 on I(Y). Let P→ Y be an 3 bundle. We assume P is admissible if Y is not a homology sphere. Then the endomorphisms u_2 and u_3 on I(P) are nilpotent. In other words, there is a positive integer n such that u_2^n=0, u_3^n=0 on I(P). We use the same link reduction schemes as in the proofs of <cit.>. In the present case there is no need to consider any reduced groups, as the cup products u_j are defined on all of I(Y).□ We include here a result for oriented homology 3–spheres Y obtained by adapting the proof of Proposition <ref> for j=2 to 2–dimensional moduli spaces M(,θ). This result will be used in Proposition <ref> below. For any ∈^*(Y) we introduce the temporary notation M_:={∈ M(,θ)s_2∧ s_3=0 at [0], and _([0,∞))≥}, where is a small positive constant. If M(,θ)<6 then M_ is a manifold-with-boundary, and M_ has a description analogous to that of M_, just replacing the inequality _([0,∞))≥ by an equality. We define homomorphisms :C^2(Y)→/2, ^-:C^3(Y)→/2 on generators by :=#( M_), ^-β:=# M_β. v_2+^-d=. Let ∈^*(Y), ()=2. Then M_ is a 1–manifold-with-boundary. The number of boundary points, counted modulo 2, is by definition, and this must agree with the number of ends of M_, which is ( v_2+^-d).□ §.§ Commutators of cup products Let Y be an oriented homology 3–sphere. We introduce a degree 4 endomorphism ϕ:C^*(Y)→ C^*+4(Y) which will be used to describe the commutator of v_2 og v_3. defn For any ,β∈^*(Y) let 23(,β) be the subspace of × consisting of those points (,t) satisfying the following conditions: itemize * s_1([-t])=0, * s_2([t]) and s_3([t]) are linearly dependent. If (β)-()≡48 then 23(,β) consists of a finite number of points (see part (I) of the proof of Proposition <ref> below), and we set ϕ,β:=#23(,β). 
prop If Y is an oriented integral homology 3-sphere then for “generic” sections s_1,s_2,s_3 one has equation v_2v_3+v_3v_2+'=dϕ+ϕd. Hence, on I(Y) one has equation u_2u_3+u_3u_2=_0'_0. The proof will be given in Subsection <ref>. Let v_3,v_3':C^*(Y)→ C^*+3(Y) be the cup products defined by “generic” sections s,s' of . At least in degrees different from 4, the commutator of v_3 and v_3' is given by a formula analogous to eqn:v2v3chhom. This formula involves the homomorphism ψ:C^p(Y)→ C^p+5(Y), p≠4 with matrix coefficients ψ,β=#{(,t)∈× s([-t])=0=s'([t])}. The condition p≠4 is imposed to make sure that factorizations through the trivial connection do not occur in the moduli spaces M(,β). For q≢3,48 one has dψ+ψ d=v_3v'_3+v'_3v_3 as maps C^q(Y)→ C^q+6(Y). If the sections s,s' are sufficiently close (in a certain sense) then v_3=v_3' (see Lemma <ref> below) and the following hold. If the sections s,s' are sufficiently close then there exist * an extension of ψ to a cochain map C^*(Y)→ C^*+5(Y) defined in all degrees, and * a homomorphism Ξ:C^*(Y)→ C^*+4(Y) such that ψ=v_2v_3+dΞ+Ξ d. The proof will be given in Subsection <ref>. § DEFINITION OF THE INVARIANT Let Y be any oriented homology 3-sphere. defn We define a non-negative integer ζ_2(Y) as follows. If _0=0 on (u_3)⊂ I(Y) set ζ_2(Y):=0. Otherwise, let ζ_2(Y) be the largest positive integer n for which there exists an x∈(u_3) such that _0u_2^kx= 0 for 0≤ k<n-1, 1 for k=n-1. Here, u_2^k denotes the k'th power of the endomorphism u_2. Note that if x is as in Definition <ref> then using the relation eqn:u2u3 one finds that u_3u_2^kx=0 for 0≤ k≤ n-1. defnSet (Y):=ζ_2(Y)-ζ_2(-Y). An alternative description of will be given in Proposition <ref> below. If ('_0)⊂(u_3) in I^1(Y) then ζ_2(-Y)=0. Otherwise, ζ_2(-Y) is the largest positive integer n for which the inclusion (u_2^k'_0)⊂(u_3)+∑_j=0^k-1(u_2^j'_0) in I(-Y) holds for 0≤ k<n-1 but not for k=n-1. Of course, in eqn:imu2delincl it suffices to sum over those j that are congruent to k mod 4, since I(-Y) is mod 8 periodic. Recall that I^q(Y) and I^5-q(-Y) are dual vector spaces for any q∈/8. Furthermore, the maps _0:I^4(Y)→/2, u_3:I^q(Y)→ I^q+j(Y) are dual to '_0:/2→ I^1(-Y), u_3:I^5-q-j(-Y)→ I^5-q(-Y), respectively. In general, the kernel of a linear map between finite-dimensional vector spaces is equal to the annihilator of the image of the dual map. Applying this to _0u_2^j:I^4-2j(Y)→/2 we see that the inclusion eqn:imu2delincl holds if and only if (_0u_2^k)⊃(u_3)∩⋂_j=0^k-1(_0u_2^j) in I(Y). This proves the lemma.□ prop Either ζ_2(Y)=0 or ζ_2(-Y)=0. Suppose ζ_2(Y)>0, so there is an x∈ I^4(Y) such that u_3x=0 and _0x=1. Then Proposition <ref> yields '_0(1)=u_3u_2x, hence ζ(-Y)=0 by Lemma <ref>.□ We now reformulate the definition of ζ_2 in terms of the mapping cone of v_3. This alternative definition will display a clear analogy with the instanton h-invariant and will be essential for handling the algebra involved in the proof of additivity of . For q∈/8 set MC^q(Y):=C^q-2(Y)⊕ C^q(Y), and define D:MC^q(Y)→ MC^q+1(Y), (x,y)↦(dx,v_3x+dy). Then D∘ D=0, and we define MI(Y) to be the cohomology of the cochain complex (MC(Y),D). The short exact sequence of cochain complexes 0→ C^*(Y)→ MC^*(Y)τ→ C^*-2(Y)→0, where (y)=(0,y) and τ(x,y)=x, gives rise to a long exact sequence equation ⋯→I^q-3(Y)u_3→I^q(Y)_* →MI^q(Y)τ_*→I^q-2(Y)→⋯. We introduce some extra structure on *j(Y). Firstly, the homomorphisms gather* :=∘τ:MC^6(Y)→/2, ':=∘':/2→MC^1(Y) induce homomorphisms MI^6(Y)_0⟶/2'_0⟶ MI^1(Y). 
We extend trivially to all of MC(Y), and similarly for _0. Furthermore, we define a homomorphism V:MC^*(Y)→ MC^*+2(Y), (x,y)↦(v_2x,ϕ x+v_2y). A simple calculation yields equation DV+VD=', which is analogous to the relation <cit.> in rational instanton homology. It follows that V induces homomorphisms gather* MI^q(Y)→MI^q+2(Y), q≢6,78, MI^6(Y)∩(_0)→MI^0(Y), each of which will be denoted by U. If _0=0 on MI^6(Y) then ζ_2(Y)=0. Otherwise, ζ_2(Y) is the largest positive integer n for which there exists a z∈ MI(Y) such that _0 U^kz=cases 0 for 0≤ k<n-1, 1 for k=n-1. This follows immediately from the definitions.□ § DEFINITE 4-MANIFOLDS The goal of this section is to prove Theorem <ref>. Let X be an oriented, connected Riemannian 4–manifold with a cylindrical end [0,∞)× Y, where Y is an integral homology sphere. Suppose b_1(X)=0=b^+(X). Let E→ X be an oriented Euclidean 3–plane bundle and w_2(E) its second Stiefel-Whitney class. We will count reducibles in ASD moduli spaces for E with trivial asymptotic limit. Let w∈ H^2(X,;/2) be the unique lift of w_2(E). Abusing notation, we denote by w_2(E)^2∈/4 the value of the Pontryagin square w^2∈ H^4(X,;/4) on the fundamental class in H_4(X;;/4). Then for ∈^*(Y) the expected dimension of a moduli space for E with asymptotic limit satisfies M_(X,E;)≡()-2w_2(E)^28. If ρ is a trivial connection in E|_ then (E,ρ) is an integer reducing to -w_2(E)^2 modulo 4. Hence, M_k:=M_k(X,E;θ) is defined for integers k satisfying k≡-w_2(E)^24. Moreover, M_k is empty for k<0, and M_0 (when defined) consists of flat connections. The expected dimension is M_k=2k-3. §.§ Reducibles In this subsection we restrict to k>0. After perturbing the Riemannian metric on X in a small ball we can arrange that M_k contains no twisted reducibles (see <cit.>). The set M_k of reducible (i.e. Abelian) points in M_k has a well known description in terms of the cohomology of X, which we now recall. Let P:={c∈ H^2(X;) [c]_2=w_2(E), c^2=-k}, where [c]_2 denotes the image of c in H^2(X;/2). Let P:= P/±1 be the quotient of P by the involution c↦-c. There is a canonical bijection M_k→ P. If [A]∈ M_k then A respects a unique splitting E=⊕ L, where is a trivial rank 1 subbundle of E. A choice of orientation of defines a complex structure on L. Mapping [A] to the point in P represented by c_1(L) yields the desired bijection. For further details see <cit.> and <cit.>.□ Assuming P is non-empty we now express the number |P| of elements of P in terms of the intersection form of X and the torsion subgroup of H^2(X;). For any v∈ H^2(X;) let v̅ denote the image of v in H^2(X;)/. Choose a∈ P and let Q_a:={r∈ H^2(X;)/ r≡a̅ mod 2, r^2=-k}. Define Q_a:= Q_a/±1. |P|=|2|·|Q_a|. Note that 2 has even order precisely when H^2(X;) contains an element of order 4. Because k>0 we have that (-1) acts without fixed-points on both P and Q_a. Therefore, | P|=2|P|, | Q_a|=2|Q_a|. The short exact sequence 0→2→→/2→0 gives rise to a long exact sequence ⋯→ H^2(X;)2→ H^2(X;)→ H^2(X;/2)→ H^3(X;)→⋯. From this sequence we see that there is a well defined map P→ Q_a, c↦c̅ which descends to an injective map f: P/2→ Q_a. In fact, f is bijective. To see that f is surjective, let r∈ Q_a. Then r=a̅+2x̅=a+2x for some x∈ H^2(X;), and a+2x∈ P. This shows that | P|=|2|·| Q_a|. Combining this with eqn:2PQ we obtain the proposition.□ §.§ 2–torsion invariants of 4–manifolds The proof of Theorem <ref> will involve certain 2–torsion Donaldson invariants which we now define. 
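Before turning to these relative invariants, we note that the definition of ζ_2 above is, once representatives for u_2, u_3 and the map to /2 are known, finite linear algebra over /2. The toy computation below is ours: the matrices are made up (they are not Floer data for any Y), the /8 grading is ignored, and it is meant only to illustrate the combinatorics of the definition.

import itertools
import numpy as np

def zeta2(U2, U3, d0, max_k=10):
    # Brute-force the definition (grading ignored): the largest n such that some x
    # with U3 x = 0 has d0(U2^k x) = 0 for k < n-1 and d0(U2^(n-1) x) = 1.
    dim = U2.shape[0]
    best = 0
    for bits in itertools.product((0, 1), repeat=dim):
        x = np.array(bits)
        if np.any((U3 @ x) % 2):          # keep only x in the kernel of u_3
            continue
        vals, y = [], x.copy()
        for _ in range(max_k):
            vals.append(int(d0 @ y) % 2)  # value of the /2-valued map on u_2^k x
            y = (U2 @ y) % 2
        if 1 in vals:
            best = max(best, vals.index(1) + 1)
    return best

# Toy example: u_3 = 0, u_2 a nilpotent shift e1 -> e2 -> e3 -> 0, d0 dual to e3.
U3 = np.zeros((3, 3), dtype=int)
U2 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
d0 = np.array([0, 0, 1])
print(zeta2(U2, U3, d0))   # prints 3, realized by x = e1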
Let d_0 be the smallest expected dimension of any moduli space M_k=M_k(X,E;θ) that contains a reducible, where k is a non-negative integer. For any pair (r,s) of non-negative integers satisfying 2r+3s≤ d_0+2 we will define an element rs= rs(X,E)∈ I(Y) which will be independent of the Riemannian metric on X and also independent of the choice of small holonomy perturbations. To define rs, choose disjoint compact codimension 0 submanifolds Z_1,…,Z_r+s of X and base-points z_j∈ Z_j. It is convenient to assume that each of these submanifolds contains a band [t_j,t_j+1]× Y for some t_j≥1. (We assume that the perturbed ASD equation is of gradient flow type in the region [1,∞)× Y.) Then Proposition <ref> guarantees that every perturbed ASD connection in E with irreducible limit will restrict to an irreducible connection over each Z_j. Choose “generic” sections {_ij}_i=1,2,3 of the canonical 3–plane bundle _j→^*(Z_j,E_j), where E_j:=E|_Z_j. For any ∈^*(Y) let d=d() be the integer such that 0≤ d-2r-3s≤7, d≡()-2w_2(E)^28. Let M_r,s(X,E;) be the set of all ∈ M_(d)(X,E;) such that * _2,j,_3,j are linearly dependent at |_Z_j for j=1,…,r, and * _1,j(|_Z_j)=0 for j=r+1,…,r+s. Let q_r,s:=∑_#M_r,s(X,E;)·∈ C(Y), where the sum is taken over all generators in C(Y) of index 2w_2(E)^2+2r+3s. Then q_r,s is a cocycle, and we define rs(X,E):=[q_r,s]∈ I(Y). Standard arguments show that rs is independent of the choice of submanifolds Z_j and sections _ij. Let k be an integer greater than one. If M_ℓ is empty for ℓ<k then k-20=#M_k. Deleting from M_k a small neighbourhood of each reducible point we obtain a manifold-with-boundary W with one boundary component P_η for each reducible η, each such component being diffeomorphic to k-2. Let Ŵ:=W∩ M_k-2,0(X,E;θ) be the set of all ∈ W such that _2,j and _3,j are linearly dependent at |_Z_j for j=1,…,k-2. Then Ŵ is a 1–manifold-with-boundary. For dimensional reasons and because of the condition that M_ℓ be empty for ℓ<k, bubbling cannot occur in sequences in Ŵ. Therefore, the only source of non-compactness in Ŵ is factorization over the end of X, so the number of ends of Ŵ equals k-20 modulo 2. As for the boundary points of Ŵ, observe that for every x∈ X the restriction of the 3–plane bundle _θ,x→ M^*_k to P_η is isomorphic to the direct sum ⊕ L of a trivial real line bundle and the tautological complex line bundle. It follows easily from this that P_η∩Ŵ has an odd number of points for every reducible η, hence |Ŵ|≡|M_k|2. Since the number of boundary points of Ŵ must agree with the number of ends when counted modulo 2, this proves the proposition.□ In the proof of the following proposition and at many places later we will make use of a certain kind of cut-off function. This should be a smooth function b:→ such that b(t)= 0 for t≤-1, 1 for t≥1. Suppose 2r+3s≤ d_0+2, so that rs is defined. (i) rs=u_2r-1s if r≥1. (ii) rs=u_3 rs-1 if s≥1. We only spell out the proof of (ii), the proof of (i) being similar. Let M_r,s-1(X,E;) be defined as above, but using only the submanifolds Z_1,…,Z_r+s-1 and the corresponding sections _ij. Choose a path :[-1,∞)→ X such that (-1)=z_r+1 and (t)=(t,y_0) for t≥0, where y_0∈ Y is a base-point. For any ∈^*(Y) and x∈ X let _,x→ M_r,s-1(X,E;) be the canonical 3–plane bundle associated to the base-point x. For any =[A]∈ M_r,s-1(X,E;) and t≥-1 let _,t:(_,(t))_→(_,(-1))_ be the isomorphism defined by the holonomy of A along . Here, (_,x)_ denotes the fibre of the bundle _,x at the point . 
Given a “generic” section s of →^*(Y[0]) we define a section s_ of the bundle _,(-1)×[-1,∞)→ M_r,s-1(X,E;)×[-1,∞) by s_(,t):=(1-b(t-2))·_1,r+s(|_Z_r+s) +b(t-2)·_,t(s([t])), where b is as in eqn:b-prop1. Let j:=2w_2(E)^2+2r+3s∈/8. If ()=j-1 then the zero set s_(0) is a finite set. Summing over such we define h_r,s:=∑_(#s_(0))·∈ I^j(Y). Counting ends and boundary points of the 1–manifolds s_β(0) for (β)=j we see that dh_r,s+v_3q_r,s-1=q_r,s. Passing to cohomology, we obtain (ii).□ If E is strongly admissible then D_r,s(X,E)=0 for s>0. Let f:→ X be as in Definition <ref> with v=w_2(E). For t≥0 let X t be the result of deleting from X the open subset (t,∞)× Y. Choose t>0 so large that X t contains f(). Then E|_X t is strongly admissible. Choose the submanifolds Z_1,…,Z_r+s such that Z_r+s=X t. By Proposition <ref> the (frame bundle of) _j→^*(E_r+s) lifts to a 2 bundle. For j=1,…,r+s-1 choose “generic” sections {_ij}_i=1,2,3 of _j. Arguing as in the proof of Proposition <ref> we see that there is an open subset U⊂^*(Z_r+s,E_r+s) and a section of _r+s such that if is any element of a 3–dimensional moduli space M_r,s-1(X,E;) then |_Z_r+s∈ U and (|_Z_r+s)≠0. Taking _1,r+s:= we have that all 0–dimensional moduli spaces M_r,s(X,E;) are empty. Reasoning as in the proof of Lemma <ref> we conclude that D_r,s=0.□ §.§ Lower bound on Recall Definition <ref> above. Given a space, X, a non-zero class w∈ H^2(X;)/torsion is called strongly admissible if some (hence every) lift of w to H^2(X;) maps to a strongly admissible class in H^2(X;/2). Let V be a smooth compact oriented connected 4-manifold whose boundary is a homology sphere Y. Suppose the intersection form of V is negative definite and at least one of the following two conditions holds: (i) H^2(V;) contains no 2–torsion. (ii) H^2(V;) contains no element of order 4, and w^2≢04. Furthermore, either w is strongly admissible or u_3=0 on I(Y) (or both). Let J:=H^2(V;)/torsion, and let w be an element of J which is not divisible by 2. Let k be the minimal square norm (with respect to the intersection form) of any element of w+2J. Let n be the number of elements of w+2J of square norm k. If k≥2 and n/2 is odd then equation (Y)≥k-1. Note that if we leave out case (ii) then the theorem says the same as Theorem <ref>. After performing surgery on a collection of loops in V representing a basis for H_1(V;)/ we may assume that b_1(V)=0. From the exact sequence eqn:2long-exact-seq we see that the 2–torsion subgroup of H^2(V;) is isomorphic to H^1(V;/2). Let X:=V∪(0,∞)× Y be the result of adding a half-infinite cylinder to V, and choose a Riemannian metric on X which is of cylindrical form over the end. We identify the (co)homology of X with that of V. Choose a complex line bundle L→ X whose Chern class represents w. Choose a Euclidean metric on the 3–plane bundle E:=⊕ L. Since we assume that H^2(X;) contains no element of order 4, it follows from Proposition <ref> that M_ℓ contains an odd number of reducibles for ℓ=k but no reducibles for 0<ℓ<k. We now show that if w^2≡0 (4), so that M_0 is defined, then M_0 is free of reducibles. Suppose A is a connection in E representing a reducible point in M_0. Then A preserves some orthogonal splitting E=⊕ L', where → X is a real line bundle. Because Condition (i) of the proposition must hold, the bundle is trivial. Choose a complex structure on L'. Since L' admits a flat connection, its Chern class c_1(L') is a torsion class in H^2(X;). 
But c_1(L) and c_1(L') map to the same element of H^2(X;/2), namely w_2(E), hence c_1(L)=c_1(L')+2a for some a∈ H^2(X;). This contradicts our assumption that w∈ J is not divisible by 2. Thus, M_0 is free of reducibles as claimed. By Proposition <ref> we have D_k-2,0≠0, and Proposition <ref> says that D_k-2,0=u_2^k-2D_0,0. Now suppose w is strongly admissible (which is trivially the case if Condition (i) holds). Then the bundle E is strongly admissible, so by Propositions <ref> and <ref> we have u_3D_0,0=D_0,1=0. This proves eqn:q2ineq.□ § OPERATIONS DEFINED BY COBORDISMS §.§ Cutting down moduli spaces Let Y_0,Y_1,Y_2 be oriented (integral) homology 3–spheres and W a smooth compact connected oriented 4–manifold such that H_i(W;)=0 for i=1,2 and ∂W=(-Y_0)∪(-Y_1)∪ Y_2. Then we call W a (4–dimensional) pair-of-pants cobordism from Y_0∪ Y_1 to Y_2, or a pair-of-pants cobordism from Y_1 to (-Y_0)∪ Y_2. We will consider various operations on Floer cochain complexes induced by pair-of-pants cobordisms. To define these we first introduce some notation. Let X be an oriented connected Riemannian 4–manifold with incoming tubular ends (-∞,0]× Y_j, j=0,…,r and outgoing tubular ends [0,∞)× Y_j, j=r+1,…,r', where each Y_j is a homology sphere. For t≥0 let X t be the result of deleting from X the open pieces (-∞,-t)× Y_j, j=0,…,r and (t,∞)× Y_j, j=r+1,…,r'. We assume X0 is compact. For i=0,…,r' let y_i∈ Y_i be a base-point and set e_i:= -1, i=0,…,r, 1, i=r+1,…,r'. For any integers j,k in the interval [0,r'] such that j<k let _jk:→ X be a smooth path satisfying _jk(t)∈ X1 for |t|≤1 and _jk(t)= (-e_jt,y_j), t≤-1, (e_kt,y_k), t≥1. Loosely speaking, the path _jk enters along the jth end and leaves along the kth end of X. Let =(_1,…,_r'), where _j∈(Y_j) and at least one _j is irreducible. For the remainder of this subsection we write M:=M(X,E;), where E→ X is the product 3 bundle. The unique continuation result of Proposition prop:unique-continuation-cylinder ensures that if _j is irreducible then the restriction of any element of M to a band on the jth end of X will be irreducible. Let → M× X be the universal (real) 3–plane bundle (see <cit.>). For any t≥0 let t denote the restriction of to M× X t. Given a base-point x_0∈ X let _X,x_0;→ M be the canonical 3–plane bundle, which can be identified with the restriction of to M×{x_0}. If :J→ X is a smooth path in X defined on some interval J then a section of the pull-back bundle (𝕀×)^* over M× J is called holonomy invariant if for all =[A]∈ M and real numbers s<t one has that (,s) is mapped to (,t) by the isomorphism _(,(s))→_(,(t)) defined by holonomy of A along the path |_[s,t]. Suppose Z⊂ X is a compact codimension 0 submanifold-with-boundary such that A|_Z is irreducible for every [A]∈ M. Given a base-point z_0∈ Z, let _Z,z_0→^*(E|_Z) be the base-point fibration, and let R_Z:M→^*(E|_Z), ↦|_Z. Then the pull-back bundle R_Z^*_Z,z_0 is canonically isomorphic to _X,z_0;, and we will usually identify the two bundles without further comment. Choose (smooth) sections z_1,z_2,z_3 of 2 and for any x∈ X2 let M∩ w_3(x):= {∈ M z_1(,x)=0}, M∩ w_2(x):= {∈ M z_2,z_3 are linearly dependent at (,x)}. For j=0,…,r' let _j→^*(Y_j[0]) be the canonical 3–plane bundle associated to a base-point (0,y_j). For j<k, any j', and i=1,2,3 choose * a section ijk of _j and a section ijk of _k, * a section ijk of 2, * a section s_ij' of _j'. Let b_-1,b_0,b_1 be a partition of unity subordinate to the open cover {(-∞,-1),(-2,2),(1,∞)}.
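As an aside, the cut-off function b of eqn:b-prop1 and the partition of unity b_-1,b_0,b_1 just fixed can be realized quite explicitly. The following Python sketch is only an illustration of one standard choice; the particular formulas, the helper name f and the sample points are ours and are not used anywhere in the arguments.

```python
import math

def f(x):
    # standard smooth function vanishing to infinite order at 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

def b(t):
    # smooth cut-off with b(t)=0 for t<=-1 and b(t)=1 for t>=1, as in eqn:b-prop1
    return f(t + 1) / (f(t + 1) + f(1 - t))

# a partition of unity b_{-1}, b_0, b_1 subordinate to the cover {(-inf,-1),(-2,2),(1,inf)}
def b_plus(t):   # supported in (1, inf); equal to 1 for t >= 2
    return b(2 * t - 3)

def b_minus(t):  # supported in (-inf, -1); equal to 1 for t <= -2
    return b(-2 * t - 3)

def b_zero(t):   # supported in (-2, 2)
    return 1.0 - b_plus(t) - b_minus(t)

for t in (-3, -1.5, 0, 1.5, 3):
    print(t, b(t), b_minus(t) + b_zero(t) + b_plus(t))  # the three functions always sum to 1
```

This particular choice of b also satisfies b(0)=1/2 and b'>0 on (-1,1), which are exactly the extra conditions imposed on b later in the text.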
If j<k and both _j,_k are irreducible we introduce, for i=1,2,3, a section of the bundle (𝕀×_jk)^* associated, loosely speaking, to a base-point moving along the path _jk. Precisely, we define s_ijk(,t):=b_-1(t) ijk(|_Y_j[-e_jt]) +b_0(t) ijk(|_X2,_jk(t)) +b_1(t) ijk(|_Y_k[e_kt]). Using these sections, we define cut-down moduli spaces M∩ w_3(_jk):= {(,t)∈ M× s_1jk(,t)=0}, M∩ w_2(_jk):= {(,t)∈ M× s_2jk, s_3jk are linearly dependent at (,t)}. We now consider the case of a base-point moving along the jth end. For t≥0 let _j(t):=(e_jt,y_j). If _j is irreducible let M∩ w_2(_j):={(,t)∈ M×[0,∞) s_2j,s_3j are linearly dependent at |_Y_j[e_jt]}. We omit the definition of M∩ w_3(_j) since it will not be needed in the remainder of this paper (although something close to it was used in the proof of Proposition <ref>). We can also combine the ways moduli spaces are cut down in the above definitions. Namely, for ℓ,ℓ'∈{2,3} let M∩ w_ℓ(x)∩ w_ℓ'(_jk):= {(,t)∈ M∩ w_ℓ'(_jk) ∈ M∩ w_ℓ(x)}, M∩ w_ℓ(_jk)∩ w_ℓ'(_j'k'):= {(,t,t')∈ M×× (,t)∈ M∩ w_ℓ(_jk), (,t')∈ M∩ w_ℓ'(_j'k')}, M∩ w_ℓ(_jk)∩ w_2(_j'):= {(,t,t')∈ M××[0,∞) (,t)∈ M∩ w_ℓ(_jk), (,t')∈ M∩ w_2(_j')}. If one of the _js is trivial, say _h=θ, and M<8 (to prevent bubbling) then one can also cut down M by, loosely speaking, evaluating w_2 or w_3 over the “link of θ at infinity” over the hth end of X. We now make this precise in the case of w_2 and an outgoing end [0,∞)× Y_h. The definitions for w_3 or incoming ends are similar. To simplify notation write Y:=Y_h. We introduce a function τ^+=τ^+_h on M related to the energy distribution of elements over the hth end. Choose >0 so small that for any β∈(Y) the Chern-Simons value (β)∈/ has no real lift in the interval (0,]. (Recall that we assume (θ)=0.) Given ∈ M, if there exists a t>0 such that _([t-2,∞)× Y)= then t is unique, and we write t^+():=t. This defines t^+ implicitly as a smooth function on an open subset of M. We modify t^+ to get a smooth function τ^+:M→[1,∞) by τ^+():= 1+b(t^+()-2)·(t^+()-1) if t^+() is defined, 1 else, where the cut-off function b is as in eqn:b-prop1. Note that τ^+()<3 if t^+()<3 and τ^+()=t^+() if t^+()≥3. The restriction of to the band Y[τ^+()] will be denoted by R^+()∈(Y[0]). In the above situation there is a real number T_0 such that if is any element of M satisfying τ^+()>T_0-1 then R^+() is irreducible. Suppose the lemma is false. Then we can find a sequence _n in M such that τ^+(_n)→∞ and R^+(_n) is reducible for every n. Let A_n be a smooth connection representing _n, and let t_n=τ^+(_n). By assumption, there is no bubbling in M, so we can find gauge transformations u_n defined over [0,∞)× Y and a smooth connection A' over such that, for every constant c>0, the sequence u_n(A_n)|_[t_n-c,t_n+c] converges in C^∞ to A'|_[-c,c]. The assumption on means that no energy can be lost over the end [0,∞)× Y in the limit, hence _A'([-2,∞)× Y)=. In particular, A' is not trivial. But there are no non-trivial reducible finite-energy instantons over (as long as the perturbation of the Chern-Simons functional is so small that there are no non-trivial reducible critical points). Therefore, A' must be irreducible. From the unique continuation result of Proposition <ref> it follows that A'|_{0}× Y is also irreducible, so A_n is irreducible for large n. This contradiction proves the lemma. □Let T_0 be as in the lemma. For any element of M for which R^+() is irreducible, let s'_ih() denote the holonomy invariant section of (𝕀×_h)^* such that s'_ih(,τ^+())=s_ih(R^+()). 
Let x_h:=(0,y_h) and define a section of _X,x_h; by s_ih():=(1-b(τ^+()-T_0))· z_i(|_X2,x_h) +b(τ^+()-T_0)· s'_ih(R^+()), where again b is as in eqn:b-prop1. Let M∩ w_2(τ^+):={∈ Ms_2h,s_3h linearly dependent at }. If j<k and both _j,_k are irreducible let M∩ w_ℓ(_jk)∩ w_2(τ^+):= {(,t)∈ M∩ w_ℓ(_jk)∈ M∩ w_2(τ^+)}. If M is regular, then the various cut down moduli spaces defined above will be transversely cut out when the sections involved are “generic”. §.§ Operations, I We now specialize to the case when X has two incoming ends (-∞,0]× Y_j, j=0,1 and one outgoing end [0,∞)× Y_2, and H_i(X;)=0, i=1,2. Such a cobordism gives rise to a homomorphism A:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q(Y_2) for any p,q∈/8, with matrix coefficients A(_0⊗_1),_2:=#M(X;) for generators _0∈ C^p(Y_0), _1∈ C^q(Y_1), and _2∈ C^p+q(Y_2), where =(_0,_1,_2). We can construct more homomorphisms using the sections s_ijk chosen above. For any path _jk as above and k=2,3 let T_i,j,k:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+i-1(Y_2) be defined on generators by T_i,j,k(_0⊗_1),_2:= #[M(X;)∩ w_i(_jk)]. For the cases used in this paper we introduce the simpler notation B:=T_3,0,1, E:=T_3,0,2, A':=T_2,1,2. We will also consider homomorphisms defined using two base-points, each moving along a path in X. At this point we only define B':C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+3(Y_2) by B'(_0⊗_1),_2:= #[M(X;)∩ w_3(_01)∩ w_2(_12)]. In the next proposition, the differential in the cochain complex C(Y_i) will be denoted by d (for i=0,1,2), and d=d⊗1+1⊗ d will denote the differential in C(Y_0)⊗ C(Y_1). Let v_3:=v_3⊗1+1⊗ v_3, regarded as a degree 3 cochain map from C(Y_0)⊗ C(Y_1) to itself. (i) dA+A d=0. (ii) dB+B d=A v_3. (iii) dE+E d=A(v_3⊗1)+v_3A. (iv) dA'+A' d=A(1⊗ v_2)+v_2A. (v) dB'+B' d=B(1⊗ v_2)+v_2B +A' v_3+A(1⊗ϕ)+A_θ(1⊗). The only non-trivial part here is (v), where one encounters factorization through the trivial connection over the end (-∞,0]× Y_1. This can be handled as in the proof of Proposition <ref> given in Subsection <ref>, to which we refer for details.□ The homomorphism :MC^*(Y_0)⊗ MC^*(Y_1) → C^*(Y_2), (x_0,y_0)⊗(x_1,y_1) ↦ B(x_0,x_1)+A(x_0⊗ y_1+y_0⊗ x_1) is a cochain map of degree -2. Let D=D⊗1+1⊗ D be the differential in the complex MC(Y_1)⊗ MC(Y_2). Then D[(x_0,y_0)⊗(x_1,y_1)] = [(dx_0,v_3x_0+dy_0)⊗(x_1,y_1)+(x_0,y_0)⊗(dx_1,v_3x_1+dy_1)] =B(dx_0⊗ x_1+x_0⊗ dx_1) +A[dx_0⊗ y_1+(v_3x_0+dy_0)⊗ x_1+ x_0⊗(v_3x_1+dy_1)+y_0⊗ dx_1] =B d(x_0⊗ x_1) +A[ v_3(x_0⊗ x_1)+ d(x_0⊗ y_1+y_0⊗ x_1)] =d[(x_0,y_0)⊗(x_1,y_1)], where the last equality follows from Proposition <ref>.□ The homomorphism MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) obtained from Proposition <ref> will also be denoted by . In order to simplify notation we will often write , instead of _0,_0 if no confusion can arise. For all a∈ MI(Y_0), b∈ MI(Y_1), the following hold. (i) If a=0 then (Ua,b)=u_2(a,b). (ii) If b=0 then (a,Ub)=u_2(a,b). We spell out the proof of (ii). Reversing the roles of Y_0,Y_1 yields a proof of (i). Let ',:MC^*(Y_0)⊗ MC^*(Y_1)→ C^*(Y_2) be given by '[(x_0,y_0)⊗(x_1,y_1)] := B'(x_0,x_1)+A'(x_0⊗ y_1+y_0⊗ x_1), [(x_0,y_0)⊗(x_1,y_1)] :=( x_1)A_θ(x_0). Let D be as in the proof of Proposition <ref>. We show that d'+' D=v_2+(1× V)+, from which (ii) follows. Observe that the first four lines in the calculation of D in Proposition <ref> carry over to ' D. 
That proposition then gives ' D [(x_0,y_0)⊗(x_1,y_1)] =(B' d+A' v_3)(x_0⊗ x_1) +A' d(x_0⊗ y_1+y_0⊗ x_1) =dB'(x_0⊗ x_1)+B(x_0⊗ v_2x_1)+v_2B(x_0⊗ x_1) +A(x_0⊗ϕ x_1)+( x_1)A_θ(x_0) +[dA'+A(1⊗ v_2)+v_2A](x_0⊗ y_1+y_0⊗ x_1) =[d'+v_2+(1× V)+][(x_0,y_0)⊗(x_1,y_1)].□ Our next goal is to compute u_2. To this end we introduce some variants Ȧ,Ḃ,A^+,B^+ of the operators A,B. Each of these variants is a homomorphism C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+d(Y_2) for d=2,4,1,3, respectively, defined for all p,q, and the matrix coefficients are Ȧ(_0⊗_1),_2 := #[M(X;)∩ w_2(x_2)], Ḃ(_0⊗_1),_2 := #[M(X;)∩ w_2(x_2)∩ w_3(_01)], A^+(_0⊗_1),_2 := #[M(X;)∩ w_2(_2)], B^+(_0⊗_1),_2 := #[M(X;)∩ w_3(_01)∩ w_2(_2)], where =(_0,_1,_2) as before, x_2=_2(0)∈ X, and _i,_ij are as in Subsection <ref>. (i) dȦ+Ȧ d=0. (ii) dḂ+Ḃ d=Ȧ v_3. (iii) dA^++A^+ d=v_2A+Ȧ. (iv) dB^++B^+ d=A^+ v_3+v_2B+Ḃ. Standard.□ The homomorphism :MC^*(Y_0)⊗ MC^*(Y_1) → C^*(Y_2), (x_0,y_0)⊗(x_1,y_1) ↦Ḃ(x_0,x_1) +Ȧ(x_0⊗ y_1+y_0⊗ x_1) is a (degree preserving) cochain map. The same as for Proposition <ref>, using Proposition <ref> (i), (ii).□ The homomorphism MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) obtained from Proposition <ref> will also be denoted by . As maps MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) one has =u_2. This is analogous to the proof of Proposition <ref>. Let ^+:MC^*(Y_0)⊗ MC^*(Y_1)→ C^*(Y_2) be given by ^+[(x_0,y_0)⊗(x_1,y_1)] := B^+(x_0,x_1)+A^+(x_0⊗ y_1+y_0⊗ x_1). We show that d^++^+ d=v_2+. From Proposition <ref> we get ^+ D(x_0,y_0)⊗(x_1,y_1) =(B^+ d+A^+ v_3)(x_0⊗ x_1)+A^+ d(x_0⊗ y_1+ y_0⊗ x_1) =(dB^++v_2B+Ḃ)(x_0⊗ x_1) +(dA^++v_2A+Ȧ)(x_0⊗ x_1) =(d^++v_2+)(x_0,y_0)⊗(x_1,y_1).□ We also need to bring in moduli spaces over X with trivial limit over the end _+× Y_2. These give rise to homomorphisms A^θ,B^θ,Ȧ^θ,Ḃ^θ:C^p(Y_0)⊗ C^d-p(Y_1)→/2 where d=5,3,3,1, respectively. They are defined on generators by A^θ(_0⊗_1) :=#M(_0,_1,θ), B^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_3(_01)], Ȧ^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_2(x_0), Ḃ^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_2(x_0)∩ w_3(_01). (i) A+ A^θ d=0. (ii) B+B^θ d=A^θ v_3. (iii) Ȧ+Ȧ^θ d=0. (iv) Ḃ+Ḃ^θ d= Ȧ^θ v_3+⊗. Here, (⊗)(x_0⊗ x_1)=( x_0)( x_1). The proof is standard. (i) =0. (ii) u_2=⊗. Statement (i) is proved just as Proposition <ref>, replacing Proposition <ref> by Proposition <ref>. We now prove (ii). For g_i=(x_i,y_i)∈ MC(C_i), i=0,1 let ^θ(g_0⊗ g_1):=Ḃ^θ(x_0⊗ x_1) +Ȧ^θ(x_0⊗ y_1+y_0⊗ x_1). Arguing as in the proof of Proposition <ref> and using Proposition <ref> we obtain ^θ D(g_0⊗ g_1) =(Ḃ^θ d+Ȧ v_3)(x_0⊗ x_1) +Ȧ^θ d(x_0⊗ y_1+y_0⊗ x_1) =Ḃ(x_0⊗ x_1)+ x_0· x_1 +Ȧ(x_0⊗ y_1+y_0⊗ x_1) =(+⊗)(g_0⊗ g_1). If g_0,g_1 are cocycles then by Proposition <ref> we have v_2(g_0⊗ g_1)=(g_0⊗ g_1) = g_0· g_1.□ For p≠4 let F:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+4(Y_2) be defined by F(_0⊗_1),_2:= #[M(X;)∩ w_3(_01)∩ w_3(_02)]. For p=4 the map F may not be well-defined due to possible factorizations through the trivial connection over the end _-× Y_0. The definition of F involves two different sections of the bundle _0→^*(Y_0[0]), namely s_k:= 10k, k=1,2. From now on we assume s_1,s_2 are so close that they define the same cup product v_3:C^*(Y_0)→ C^*+3(Y_0). If the sections s_1,s_2 are sufficiently close then the map F in eqn:Fdef can be extended to all bidegrees (p,q) such that dF+F d=B(v_3⊗1)+v_3B+E v_3+A(ψ⊗1), where ψ is as in Proposition <ref>. 
The main difficulty in extending the map F to degree p=4, related to factorization through the trivial connection over the end (-∞,0]× Y_0, is the same as in extending the map ψ to degree 4, and the main difficulty in proving eqn:Fthm is the same as in proving that ψ is a cochain map (Proposition <ref>). As we prefer to explain the ideas involved in the simplest possible setting, we will not spell out the proof of Proposition <ref> but instead refer to Subsection <ref> for details. Sometimes we will fix the variable _1 in the expressions defining A,B,E,F. Thus, for any y∈ C^r(Y) we define a homomorphism A_y:C^*(Y_0)→ C^*-r(Y_2), x↦ A(x⊗ y), and we define B_y,E_y,F_y similarly. Looking at moduli spaces over X with trivial limit over the end _-× Y_1 we obtain homomorphisms A_θ :C^*(Y_0)→ C^*(Y_2), E_θ :C^*(Y_0)→ C^*+2(Y_2). with matrix coefficients A_θ(_0),_2 :=#M(X;_0,θ,_2), E_θ(_0),_2 :=#[M(X;_0,θ,_2)∩ w_3(_02)]. We consider a variant of Floer's complex introduced by Donaldson <cit.>. For any oriented homology 3–sphere Y let *(Y) be the complex with cochain groups p(Y) =C^p(Y), p≠0, 0(Y) =C^0(Y)⊕/2 and differential d̅=d+'. Now take Y:=Y_1. For y=(z,t)∈0(Y_1) let A_y:=A_z+tA_θ, E_y:=E_z+tE_θ. For any x∈ C(Y_1) and y∈*(Y_1) we have [d,A_y]+A_d̅y =0, [d,E_y]+E_d̅y =[A_y,v_3], [d,B_x]+B_dx =A_xv_3+A_v_3x, [d,F_x]+F_dx =[B_x,v_3]+E_xv_3+E_v_3x+A_xψ. Here, [d,A_y]=dA_y+A_yd, and similarly for the other commutators. For y∈ C(Y_1) this follows from Propositions <ref> and <ref>, whereas the case y=(0,1)∈0(Y_1) is easy.□ Suppose x∈ C^-2(Y_1) and y=(z,t)∈0(Y_1) satisfy dx=0, v_3x=d̅y. Then the homomorphism :MC^*(Y_0)→ MC^*(Y_2) given by the matrix ( [ A_y+B_x A_x; E_y+F_x+A_xΞ A_y+B_x+E_x+A_xv_2 ]) is a cochain map. Writing =([ P Q; R S ]) we have d+ d=( [ dP+Pd+Qv_3 dQ+Qd; dR+Rd+v_3P+Sv_3 dS+Sd+v_3Q ]). The fact that this matrix vanishes is easily deduced from Propositions <ref> and <ref> and Lemma <ref>. We write out the calculation only for the bottom left entry. [d,E_y +F_x+A_xΞ] =E_v_3x+[v_3,A_y]+[v_3,B_x]+E_v_3x+E_xv_3+A_xψ+A_x[d,Ξ] =v_3(A_y+B_x)+(A_y+B_x+E_x+A_xv_2)v_3, hence [d,R]=v_3P+Sv_3 as claimed.□ As maps MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) one has u_3=0. For j=0,1 let (x_j,y_j) be a cocycle in MC(Y_j), i.e. dx_j=0, v_3x_j=dy_j. Let the map of Lemma <ref> be defined with x=x_1, y=y_1, and let (x_2,y_2):=(x_0,y_0). Then ((x_0,y_0)⊗(x_1,y_1))=B_x_1(x_0)+A_y_1(x_0)+A_x_1(y_0)=x_2. Since (x_2,y_2) is a cocycle, we have v_3x_2=dy_2, proving the proposition. □ If (Y_j)≥1 for j=0,1 then (Y_2)≥(Y_0)+(Y_1). For j=0,1 let n_j:=(Y_j) and choose z_j∈ MI(Y_j) such that U^kz_j=cases 0 for 0≤ k<n_j-1, 1 for k=n_j-1. Let x:=(z_0⊗ z_1)∈ I(Y_2). Then u_3x=0 by Proposition <ref>. For 0≤ k_j≤ n_j-1, repeated application of Proposition <ref> yields u_2^k_0+k_1x=(U^k_0z_0⊗ U^k_1z_1), hence u_2^k_0+k_1x=0 by Proposition <ref>. Therefore, u_2^mx=0, 0≤ m≤ n_1+n_2-2. On the other hand, u_2^n_1+n_2-1x = u_2u_2^n_0-1u_2^n_1-1x = u_2(U^n_0-1z_0⊗ U^n_1-1z_1) =( U^n_0-1z_0)( U^n_1-1z_1) =1. Therefore, (Y_2)≥ n_0+n_1 as claimed.□ We will give a second application of Lemma <ref>, but first we need some preparation. Let A^θ_θ:C^5(Y_0)→/2 be defined on generators by A^θ_θ():=#M(,θ,θ). For y=(z,t)∈ q(Y_1) define A^θ_y:C^5-q(Y_0)→/2 and B^θ_z:C^3-q(Y_0)→/2 by A^θ_y(x):=A(x⊗ z)+tA^θ_θ(x), B^θ_z(x):=B^θ(x⊗ z). (i) A_θ+A^θ_θ d+A^θ_'(1)=. (ii) A_y+A^θ_y d+A^θ_d̅y=t. (iii) B_z+B^θ_z d+B^θ_dz=A^θ_zv_3+A^θ_v_3z. If (Y_0)≥1 and (Y_1)=0 then (Y_2)≥1. Since (Y_0)≥1 we can find (x_0,y_0)∈ MC^6(Y_0) such that dx_0=0, v_3x_0=dy_0, x_0=1. 
Since (Y_1)=0, Lemma <ref> says that there exist x_1∈ C^-2(Y_1) and y_1=(z_1,1)∈ 0(Y_1) such that dx_1=0, v_3x_1=d̅y_1. Let be as in Lemma <ref>. Then (x_0,y_0) is a cocycle in MC(Y_2), and by Lemma <ref> we have (x_0,y_0) =(A_y_1+B_x_1)x_0+ A_x_1y_0 =(_d̅y_1++_x_1v_3+_v_3x_1)x_0+_x_1dy_0 =1. Therefore, (Y_2)≥1.□ §.§ Operations, II We now consider the case when X has one incoming end (-∞,0]× Y_0 and two outgoing ends [0,∞)× Y_1 and [0,∞)× Y_2, where Y_2==(2,3,5) is the Poincaré homology sphere oriented as the boundary of the negative definite E_8–manifold. We again assume that H_i(X;)=0, i=1,2. We will define homomorphisms P,P',Q:C^*(Y_0)→ C^*+d(Y_1) where d=2,3,4, respectively, making use of cut-down moduli spaces introduced at the end of Subsection <ref> with h=2, so that τ^+=τ^+_2. We define P,P',Q on generators by P_0,_1 :=#[M(X;_0,_1,θ)∩ w_2(τ^+)], P'_0,_1 := #[M(X;_0,_1,θ)∩ w_2(_01)∩ w_2(τ^+)], Q_0,_1 := #[M(X;_0,_1,θ)∩ w_3(_01)∩ w_2(τ^+)]. As maps C(Y_0)→ C(Y_1) the following hold. (i) [d,P]=0. (ii) [d,P']=[v_2,P]. (iii) [d,Q]=[v_3,P]+'. (iv) P+Pd=. Here, is as defined at the end of Subsection <ref>. In (iii), argue as in the proof of Proposition <ref> to handle factorization through the trivial connection over X.□ Note that statements (i), (iii) are equivalent to the fact that the homomorphism Ψ= ([ P 0; Q P ]) :MC^*(Y_0)→ MC^*+2(Y_1) satisfies [D,Ψ]='. The homomorphism I^*(Y_0)→ I^*+2(Y_1) induced by P will also be denoted by P. As maps I(Y_0)→ I(Y_1) the following hold. (i) [u_2,P]=0. (ii) [u_3,P]='. (iii) P= u_2. Combine Propositions <ref> and <ref>.□ If (Y_0)≥2 then (Y_1)≥(Y_0)-1. Let n:=(Y_0) and choose x∈ I(Y_0) such that u_3x=0 and u_2^kx= 0 for 0≤ k<n-1, and 1 for k=n-1. By Proposition <ref> we have u_3Px=0 and u_2^kPx= Pu_2^kx= u_2^k+1x= 0 for 0≤ k<n-2, and 1 for k=n-2. This shows that (Y_1)≥ n-1.□ §.§ Additivity of Throughout this subsection, Y,Y_0,Y_1 will denote oriented homology 3–spheres. As before, will denote the Poincaré homology sphere. If (Y_j)≥1 for j=0,1 then (Y_0# Y_1)≥(Y_0)+(Y_1). Recall that there is a standard cobordism W from (-Y_0)∪(-Y_1) to Y_0# Y_1. By attaching half-infinite tubular ends to W we obtain a manifold X to which we can apply the results of Subsection <ref>. The proposition now follows from Proposition <ref>.□ If (Y_0)≥1 and (Y_1#(-Y_0))=0 then (Y_1)≥1. This follows from Proposition <ref>.□ If (Y#)≥2 then (Y)≥(Y#)-1. This follows from Proposition <ref> with Y_0=Y# and Y_1=Y. □ In the following, we write Y_0∼ Y_1 to indicate that Y_0 and Y_1 are homology cobordant. If Y_0# Y_1∼ then (Y_0)+(Y_1)=1. Let n_j:=(Y_j). Case 1: n_0n_1=0. Without loss of generality we may assume that n_1=0. By Proposition <ref> we have n_0≥1. If n_0≥2 then, since Y_0∼#(-Y_1), Proposition <ref> would give -n_1=(-Y_1)≥(#(-Y_1))-1≥1, a contradiction. Hence, n_0=1, so the lemma holds in this case. Case 2: n_0n_1>0. We show that this cannot occur. If n_j>0 then Proposition <ref> yields 1=()≥ n_0+n_1≥2, a contradiction. Similarly, if n_j<0 then the same proposition yields -1=(-)≥2. Case 3: n_0n_1<0. Then we may assume that n_0>0. Applying Proposition <ref> we obtain n_0=(#(-Y_1))≥1-n_1≥2. Proposition <ref> now gives -n_1≥ n_0-1. Altogether, this shows that n_0+n_1=1.□ (Y#)=(Y)+1. Apply the lemma with Y_0=Y# and Y_1=-Y.□ For any oriented integral homology 3–spheres Y_0,Y_1 one has (Y_0# Y_1)=(Y_0)+(Y_1). Let n_j:=(Y_j) and Z_j:=Y_j#(-n_j). By Corollary <ref> we have (Z_j)=0, so by Proposition <ref>, 0=(Z_0# Z_1)=(Y_0# Y_1#(-n_0-n_1))=(Y_0# Y_1)-n_0-n_1. □ § FURTHER PROPERTIES OF . EXAMPLES
§.§ Proof of Theorem <ref> Let W' be the result of connecting the two boundary components of W by a 1–handle. Then W and W' have the same second cohomology group and the same intersection form. Let Z be the negative definite E_8–manifold (i.e. the result of plumbing on the E_8 graph), so that the boundary of Z is the Poincaré sphere . We will apply Theorem <ref> to the boundary-connected sum V:=W'#_∂ Z. Let S,S'⊂ Z be embedded oriented 2–spheres corresponding to adjacent nodes on the E_8 graph. These spheres both have self-intersection number -2, and S· S'=1. Let v=P.D.([S])∈ H^2(V,∂V)≈ H^2(V) be the Poincaré dual of the homology class in V represented by S. Then v·[S']=1, hence v is strongly admissible. The class w∈ J_V represented by v satisfies w^2=-2, and ± w are the only classes in w+2J_V with square norm 2. Theorem <ref> and Proposition <ref> now yield (Y)+1=(Y#)≥1, hence (Y)≥0 as claimed.□ §.§ Proof of Theorem <ref> Theorem <ref> is an immediate consequence of the following two propositions. Let K,K' be knots in S^3 such that K' is obtained from K by changing a positive crossing. Let Y,Y' be (-1) surgeries on K,K', respectively. Then 0≤(Y')-(Y)≤1. We observe that Y' is obtained from Y by (-1) surgery on a linking circle of the crossing which bounds a surface in Y of genus 1. The surgery cobordism W from Y to Y' satisfies H_1(W;)=0 and b^+_2(W)=0, hence (Y')≥(Y) by Theorem <ref>. Since Y bounds a simply-connected negative definite 4–manifold (the trace of the surgery on K) we have (Y)≥0 by the same theorem. Let Y” be the result of 0–surgery on this linking circle. By Floer's surgery theorem <cit.> there is a long exact sequence ⋯→ I(Y”)→ I(Y)→ I(Y')→ I(Y”)→⋯ in which the map I(Y)→ I(Y') is denoted ϕ and the map I(Y')→ I(Y”) is denoted ψ; here ϕ is induced by the cobordism W. Let n:=(Y') and suppose n≥2, the proposition already being proved for n=0,1. Then there is a b∈ I(Y') such that u_2^jb= 0 for 0≤ j<n-1, and 1 for j=n-1. By Proposition <ref> we have ψ u_2b=u_2ψ b=0, hence u_2b=ϕ a for some a∈ I(Y). For j≥0 we have u_2^j a= u_2^jϕ a= u_2^j+1 b. Combining this with Corollary <ref> we obtain (Y)≥ n-1=(Y')-1 and the proposition is proved.□ If Y is (-1) surgery on a positive knot K in S^3 then (Y)=0. This follows from Theorem <ref> because Y bounds simply-connected 4–manifolds V_± where V_+ is positive definite and V_- is negative definite. As V_- one can take the trace of the (-1) surgery on K. On the other hand, since K can be unknotted by changing a collection of positive crossings, the observation in the beginning of the proof of Proposition <ref> yields V_+.□ §.§ Proof of Proposition <ref> Let Y_k:=(2,2k-1,4k-3). Then Y_k bounds the simply-connected 4–manifold V_k obtained by plumbing according to the weighted graph in Figure 1, where the total number of nodes is 4k. Let e_1,…,e_4k be an orthonormal basis for ℝ^4k. The intersection form of V_k is isomorphic to the lattice _4k:= {∑_i x_ie_i : 2x_i∈ℤ, x_i-x_j∈ℤ, ∑_i x_i∈2ℤ}, with the nodes of the plumbing graph corresponding to the following elements of _4k: 1/2∑_i=1^4ke_i, e_2+e_3, (-1)^j(e_{j-1}-e_j), j=3,…,4k. Let w∈ J_k=H^2(V_k;) be the element corresponding to 1/2∑_i=1^4ke_i. Since ± w are the only elements of minimal square norm in w+2J_k it follows from Theorem <ref> that (Y_k)≥ k-1. On the other hand, Y_k is also the result of (-1) surgery on the torus knot T_2,2k-1. Since T_2,2k-1 can be unknotted by changing k-1 crossings we deduce from Theorem <ref> that (Y_k)≤ k-1.
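The lattice-theoretic input just used — that ±w are the only elements of minimal square norm in w+2J_k — can also be confirmed by brute force for small k. The Python sketch below is our own sanity check, not part of the argument: it works with the positive definite form (i.e. with -Λ_4k, which only changes signs), enumerates the coset over a box that certainly contains all minimal vectors, and is practical only for small k; all helper names are ours.

```python
import itertools

def in_lattice(x, tol=1e-9):
    # Lambda_{4k}: all coordinates are half-integers (2*x_i integral), pairwise
    # differences are integral (so the doubled coordinates share one parity),
    # and the coordinate sum is an even integer.
    doubled = [round(2 * xi) for xi in x]
    if any(abs(2 * xi - d) > tol for xi, d in zip(x, doubled)):
        return False
    if len({d % 2 for d in doubled}) > 1:
        return False
    s = sum(x)
    return abs(s - round(s)) < tol and round(s) % 2 == 0

def minimal_coset_vectors(k):
    # Enumerate v in w + 2*Lambda_{4k} over the box {-3/2,-1/2,1/2,3/2}^{4k}.
    # Every coset vector has half-odd-integer coordinates, so any vector
    # outside this box already has square norm > k = |w|^2; the box suffices.
    n = 4 * k
    w = [0.5] * n
    best, vecs = None, []
    for v in itertools.product((-1.5, -0.5, 0.5, 1.5), repeat=n):
        if not in_lattice([(vi - wi) / 2 for vi, wi in zip(v, w)]):
            continue
        q = sum(vi * vi for vi in v)
        if best is None or q < best - 1e-9:
            best, vecs = q, [v]
        elif abs(q - best) < 1e-9:
            vecs.append(v)
    return best, vecs

# k = 1: prints 1.0 and exactly the two vectors +/- (1/2, 1/2, 1/2, 1/2).
print(minimal_coset_vectors(1))
```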
This proves the proposition.□ §.§ Proof of Theorem <ref> Since we will use different coefficient rings R, the homomorphism :C^4(Y;R)→ R defined in Subsection <ref> will now be denoted by _R. By definition, the condition h(Y)>0 means that there exists a cocycle w∈ C^4(Y;) such that _ w≠0. Note that replacing the coefficient group by yields an equivalent condition. On the other hand, the condition (Y)>0 means that there exists a cocycle z∈ C^4(Y;/2) such that _/2z≠0 and such that the cohomology class of z is annihilated by u_3. If in addition z lifts to an integral cocycle z∈ C^4(Y;) then _ z must be odd, in particular non-zero, hence h(Y)>0. Now suppose (Y)>0 and h(Y)≤0. The above discussion shows that the homomorphism I^4(Y;)→ I^4(Y;/2) is not surjective, hence the Bockstein homomorphism I^4(Y;/2)→ I^5(Y;) is non-zero. This proves the theorem.□ §.§ Proofs of Theorems <ref> and <ref> Proof of Theorem <ref>: Part (i) was proved in <cit.> using Seiberg-Witten theory. To prove (ii), let =(2,3,5). Then ()=1 by Proposition <ref>. If H^2(X;) contains no 2–torsion then (ii) follows from Corollary <ref>. Under the weaker assumption that H^2(X;) contains no element of order 4, we can appeal to Theorem <ref> since u_3=0 on I().□ Proof of Theorem <ref>: Let be the monopole h–invariant defined in <cit.>. (One could equally well use the correction term d.) Then ()=-1, and additivity of yields (#)=-2. If ξ is any characteristic vector for J_X then by <cit.> one has -(Y)≥1/8(b_2(X)+ξ·ξ). Let J_X=m-1⊕ J_X as in Corollary <ref>. By assumption, J_X is even, so J_X has characteristic vectors ξ with ξ·ξ=-m. Therefore, J_X=b_2(X)-m≤16. By the classification of even unimodular definite forms of rank ≤16 (see <cit.>) one has J_X=0, -E_8, -2E_8, or -_16. It only remains to rule out J_X=-_16. Recalling that is the result of (-1) surgery on the negative trefoil knot and applying Proposition <ref> twice we find that u_2^2=0 on I^*(#), hence (#)≤2. On the other hand, if J_X=-_16 then applying Theorem <ref> as in the proof of Proposition <ref> we would obtain (#)≥3, a contradiction. This proves the theorem.□ § TWO POINTS MOVING ON A CYLINDER, I The main goal of this section is to prove Proposition <ref>. The first two subsections will introduce some concepts used in the proof, which appears in the final subsection. §.§ Energy and holonomy Let Y be an oriented (integral) homology 3–sphere with base-point y_0. Let →^*(Y[0]) be the canonical oriented Euclidean 3–plane bundle, where Y[0]=[-1,1]× Y as in eqn:ybt-def. Let ,β∈(Y), not both reducible. Over M(,β)× there is a canonical 3–plane bundle β obtained by pulling back the universal bundle over M(,β)×× Y by the map (,t)↦(,t,y_0). There is a canonical isomorphism β→ R^* where R:M(,β)×→^*(0), (,t)↦[t], so we can identify the fibre of β at (,t) with the fibre _[t] of at [t]. Recall from Subsection <ref> that a section of β is called holonomy invariant if for all =[A]∈ and real numbers s<t one has that (,s) is mapped to (,t) by the isomorphism equation* _[s]→_[t]. defined by holonomy of A along the path [s,t]×{y_0}. Let be the set of elements of ^*(0) that can be represented by flat connections. Choose three sections ρ_1,ρ_2,ρ_3 of which form a positive orthonormal basis at every point in some neighbourhood of . Choose >0 so small that the following three conditions hold: description (i)If A is any instanton over (-∞,2]× Y satisfying A(-∞,2]< such that the flat limit of A is irreducible then ρ_1,ρ_2,ρ_3 are orthonormal at A[0]. 
(ii)If A is any instanton over [-2,∞)× Y satisfying A[-2,∞)< such that the flat limit β of A is irreducible then ρ_1,ρ_2,ρ_3 are orthonormal at A[0]. (iii)For each pair ,β∈(Y) the difference ()-(β)∈/ has no real lift in the half-open interval (0,2]. Here, _A refers to the energy of A as defined in eqn:def-energy. Let ,β be distinct elements of (Y). If [A]∈ M(,β) then _A()>2, since the left hand side is a positive real lift of ()-(β). We can therefore define smooth functions τ^-,τ^+:M(,β)→ implicitly by _A((-∞,τ^-(A)+2])= =_A([τ^+(A)-2,∞)). We will consider the average and difference τ_a:=1/2(τ^++τ^-), τ_d:=τ^+-τ^-. Clearly, τ_d>0. There are translationary invariant smooth restriction maps R^±:M(,β)→^*(0), ↦[τ^±()] which, by the unique continuation result of Proposition prop:unique-continuation-cylinder, descend to injective maps Ř^±:(,β)→^*(0). If is irreducible then for any =[A]∈ M(,β) the vectors equation ρ_i(R^-()), i=1,2,3 form an orthonormal basis for _R^-(), by choice of . Let ρ^-_i be the holonomy invariant section of β whose value at (,τ^-()) is ρ_i(R^-()). Similarly, if β is irreducible, then the vectors ρ_i(R^+()) form an orthonormal basis for _R^+(). Let ρ^+_i be the holonomy invariant section of β whose value at (,τ^+()) is ρ_i(R^+()). If ,β are both irreducible let h=(h_ij):M(,β)→3 be the map whose value at [A] is the holonomy of A along [τ^-(A),τ^+(A)]×{y_0} with respect to the bases described above, so that ρ^-_j(,t)=∑_ih_ij()ρ^+_i(,t). §.§ Factorization through the trivial connection Now assume ()=4, (β)=1. We will introduce real valued functions ^± on M(,β) which measure the extent to which a given element factors through the trivial connection over Y. Set M_,θ:=R^-(M(,θ)), which is a finite subset of ^*(0). Let M_ be the union of all subsets R^-(M(,β'))⊂^*(0) where β'∈^*(Y) and M(,β')≤4. Note that M_ is compact. Choose an open neighbourhood U_ of M_,θ in ^*(0) such that itemize * the closure of U_ is disjoint from M_, * U_ is the disjoint union of open sets U_,i, i=1,…,r, each of which contains exactly one point from M_,θ. Choose a closed neighbourhood U'_ of M_,θ contained in U_ and a smooth function equation e_:→[0,∞) such that e_=1 on U'_ and e_=0 outside U_. Define the translationary invariant function λ^-:M(,β)→[0,∞), ↦ e_(R^-())·τ_d(). The function ^+ is defined in a symmetrical fashion (corresponding to reversing the orientation of Y). Let M_β be the union of all subsets R^+(M(',β))⊂^*(0) where '∈^*(Y) and M(',β)≤4. Choose an open neighbourhood V_β of M_θ,β:=R^+(M(θ,β) in ^*(0) such that the closure of V_β is disjoint from M_β, and such that V_β is the disjoint union of open sets V_β,j, j=1,…,s, each of which contains exactly one point from M_θ,β. Choose a closed neighbourhood V'_β of M_θ,β contained in V_β and a smooth function e_β:→[0,∞) such that e_β=1 on V'_β and e_β=0 outside V_β. Set λ^+:M(,β)→[0,∞), ↦ e_β(R^+())·τ_d(). lemma There is a constant C<∞ such that for any ∈ M(,β) satisfying ^-()+^+()>C one has ^-()=^+(). Suppose the lemma does not hold. Then one can find a sequence _n in M(,β) such that ^-(_n)+^+(_n)→∞ and ^-(_n)≠^+(_n). After passing to a subsequence we may assume that the sequence _n chain-converges. If the chain-limit lay in (,β), or if the chain-limit involved factorization through an irreducible critical point, then ^±(_n) would be bounded. 
Therefore, the chain-limit must lie in (,θ)×(θ,β) and, consequently, ^-(_n)=τ_d(_n)=^+(_n) for n≫0, a contradiction.□ In the course of the proof we also obtained the following: lemma For a chain-convergent sequence _n in M(,β) the following are equivalent: description (i) λ^-(_n)→∞. (ii) λ^+(_n)→∞. (iii) The chain-limit of _n lies in (,θ)×(θ,β).□ Since ^+ will not appear again in the text, we set :=^- to simplify notation. For any real number T set _=T:={∈()=T}. Given ∈ M(,β), one has R^-()∈ U_ if ()>0 (by definition of ), and R^+()∈ V_β if ()≫0 (by Lemma <ref>). Therefore, if ()≫0 then there is a map d:M(,β)_=T→(,θ)×(θ,β) characterized by the fact that if d()=(_1,_2) then R^-() and Ř^-(_1) lie in the same set U_,i, and R^+() and Ř^+(_2) lie in the same set V_β,j. Gluing theory (see <cit.>) provides the following result: lemma There is a T_0>0 such that for any T≥ T_0 the map d× h×τ_a: _=T→((,θ)×(θ,β))×3× is a diffeomorphism.□ §.§ Proof of Proposition <ref> Let ,β∈^*(Y) with (β)-()≡58. To compute the matrix coefficient (v_2v_3+v_3v_2),β we distinguish between two cases. If ()≢48 the calculation will consist in counting modulo 2 the number of ends of the 1-manifold 23(,β). If ()≡48 then M(,β) may contain sequences factoring through the trivial connection over Y. To deal with this we consider the subspace of M(,β)× consisting of points (,t) with ()≤ T for some large T. By carefully cutting down this subspace to a 1-manifold and then counting the number of ends and boundary points modulo 2 we obtain eqn:v2v3chhom. For s∈ we define the translation map _s:→, (t,y)↦(t+s,y). Part (I) Suppose ()≢48. Then no sequence in M(,β) can have a chain-limit involving factorization through the trivial connection. We will determine the ends of the smooth 1-manifold 23(,β). Let (_n,t_n) be a sequence in 23(,β). After passing to a subsequence we may assume that the following hold: description (i) The sequence ^*_-t_n(_n) converges over compact subsets of to some ^-∈ M(^-,β^-). (By this we mean that there are connections A_n,A̅ representing _n,^- respectively, such that A_n→A̅ in C^∞ over compact subsets of .) (ii) The sequence ^*_t_n(_n) converges over compact subsets of to some ^+∈ M(^+,β^+). (iii) The sequence t_n converges in [-∞,∞] to some point t_∞. Here, [-∞,∞] denotes the compactification of the real line obtained by adding two points ±∞. Suppose (_n,t_n) does not converge in 23(,β). Case 1: t_∞ is finite. Then M(^-,β^-) has dimension 4 and either ^-= or β^-=β. The corresponding number of ends of 23(,β), counted modulo 2, is (dϕ+ϕ d),β. Case 2: t_∞=∞. Let n^± be the dimension of M(^±,β^±). Because s_1(^-[0])=0, s_2(^+[0])∧ s_3(^+[0])=0 we must have n^-≥3 and n^+≥2. On the other hand, n^-+n^+≤ M(,β)=5, so n^-=3, n^+=2. It follows that =^-, β^-=^+, β^+=β. The corresponding number of ends of 23(,β) is v_2v_3,β modulo 2. Case 3: t_∞=-∞. Arguing as in Case 2 one finds that the number of such ends of 23(,β) is v_3v_2,β modulo 2. Since the total number of ends of 23(,β) must be zero modulo 2, we obtain the equation eqn:v2v3chhom in the case ()≢48. Part (II) Now suppose ()≡48. We will again make use of a cut-off function b as in eqn:b-prop1 in Subsection <ref>, but we now impose two further conditions, namely b(0)=1/2, b'(t)>0 for -1<t<1. Set c:×→, (,t)↦ b(t-τ_a()). Choose generic 3×3 matrices A^+=(a^+_ij) and A^-=(a^-_ij) and for j=1,2,3 define a section ρ_j of the bundle R^* over M(,β)× by ρ_j:=(1-c)∑_ia^-_ijρ^-_i+c∑_ia^+_ijρ^+_i. Define a function g:M(,β)×→[0,1] by g(,t):=b(()-1)· b(τ^+()-t)· b(t-τ^-()). 
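Before cutting down with these data, it may help to record how the weight g behaves: since b now satisfies the two extra conditions just imposed, g(,t)=1 exactly when ()≥2 and τ^-()+1≤ t≤τ^+()-1, and g(,t)=0 as soon as ()≤0 or t leaves a unit neighbourhood of [τ^-(),τ^+()]. The toy evaluation below is ours only; λ and τ^± are of course functions on the moduli space, and here they are simply fed in as numbers.

```python
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def b(t):
    # one choice of cut-off satisfying eqn:b-prop1 and the extra conditions above
    return f(t + 1) / (f(t + 1) + f(1 - t))

def g(lam, t, tau_minus, tau_plus):
    # g(omega,t) = b(lambda(omega)-1) * b(tau^+(omega)-t) * b(t-tau^-(omega))
    return b(lam - 1.0) * b(tau_plus - t) * b(t - tau_minus)

# g is 1 only when lambda is large (factorization through theta is "imminent")
# and t lies well inside [tau^-, tau^+]; otherwise the generic sections s_j
# dominate in the interpolation defined next.
for lam in (0.0, 0.5, 5.0):
    for t in (-12.0, 0.0, 12.0):
        print(lam, t, round(g(lam, t, -10.0, 10.0), 3))
```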
For j=1,2,3 we now define a section s_j of R^* by s_j(,t):=(1-g(,t))· s_j([t])+g(,t)·ρ_j(,t). defn Let 23(,β) be the subspace of × consisting of those points (,t) that satisfy the following conditions: itemize * s_1(,-t)=0, * s_2(,t) and s_3(,t) are linearly dependent. To understand the ends of 23(,β) we will need to know that certain subspaces of M(,θ) and M(θ,β), respectively, are “generically” empty. These subspaces are defined as follows. For ∈ M(,θ) and j=1,2,3 let s_j():=(1-b(-τ^-()))· s_j([0])+b(-τ^-()) ∑_ia^-_ijρ^-_i(,0), and for ∈ M(θ,β) let s_j():=(1-b(τ^+()))· s_j([0])+b(τ^+()) ∑_ia^+_ijρ^+_i(,0). Set M_2(,θ) :={∈ M(,θ) s_2()∧ s_3()=0}, M_3(,θ) :={∈ M(,θ) s_1()=0}. Replacing (,θ) by (θ,β) in the last two definitions we obtain subspaces M_k(θ,β) of M(θ,β). For k=2,3, each of the spaces M_k(,θ) and M_k(θ,β) has expected dimension 1-k and is therefore empty for “generic” choices of sections s_j and matrices A^±. There is a constant C_0<∞ such that for all (,t)∈23(,β) one has |t|≤min(-τ^-(),τ^+())+C_0. We must prove that both quantities |t|+τ^-() and |t|-τ^+() are uniformly bounded above for (,t)∈23(,β). The proof is essentially the same in both cases, so we will only spell it out in the first case. Suppose, for contradiction, that (_n,t_n) is a sequence in 23(,β) with |t_n|+τ^-(_n)→∞. After passing to a subsequence we may assume that the sign of t_n is constant, so |t_n|=-et_n for some constant e=±1. Then [et_n]→ by exponential decay (see <cit.>), and s_j(,et_n)=s_j(_n[et_n]) for n≫0. If e=1 then this gives 0=s_2(_n[t_n])∧ s_3(_n[t_n])→ s_2()∧ s_3(), as n→∞, whereas if e=-1 we get 0=s_1(_n[-t_n])→ s_1(). However, for “generic” sections s_j, both s_2()∧ s_3() and s_1() are non-zero. This contradiction proves the lemma. □ For any constant C_1<∞ there is constant L>0 such that for all (,t)∈23(,β) satisfying ()≥ L one has |t|≤min(-τ^-(),τ^+())-C_1. Suppose to the contrary that there is a constant C_1<∞ and a sequence (_n,t_n) in 23(,β) such that (_n)→∞ and |t_n|>min(-τ^-(_n),τ^+(_n))-C_1. After passing to a subsequence we may assume that at least one of the following two conditions holds: (i) |t_n|>-τ^-(_n)-C_1 for all n, (ii) |t_n|>τ^+(_n)-C_1 for all n. The argument is essentially the same in both cases, so suppose (i) holds. By Lemma <ref> we also have |t_n|≤-τ^-(_n)+C_0, hence the sequence τ^-(_n)+|t_n| is bounded. Since (_n)→∞ we have τ_d(_n)→∞, so τ^+(_n)+|t_n|=τ_d(_n)+(τ^-(_n)+|t_n|)→∞. After passing to a subsequence we may assume that * the sequence _n chain-converges; * the sequence τ^-(_n)+|t_n| converges to a real number; * |t_n|=-et_n for some constant e=±1. From Lemma <ref> we deduce that '_n:=^*_et_n_n converges over compact subsets of to some ∈ M(,θ). For large n we have c(_n,et_n)=0 and g(_n,et_n)=b(et_n-τ^-(_n))=b(-τ^-('_n))→ b(-τ^-()). For j=1,2,3 we now get s_j(_n,et_n)→ s_j(). But then lies in M_2(,θ) (if e=1) or in M_3(,θ) (if e=-1), contradicting the fact that the latter two spaces are empty.□ Choose L≥2 such that for all (,t)∈23(,β) with ()≥ L one has |t|≤min(-τ^-(),τ^+())-1, which implies that s_j(,t)=ρ_j(,t). Set 23(,β):={(,t)∈23(,β)()≥ L}. We will show that 23(,β) is transversely cut and therefore a one-manifold with boundary, and determine the number of boundary points and ends modulo 2. We will see that the number of ends is given by the same formula as in Part (I), whereas the boundary points contribute the new term ' of eqn:v2v3chhom. Ends of 23(,β): Let (_n,t_n) be a sequence in 23(,β). 
After passing to a subsequence we may assume that (i),(ii), (iii) of Part (I) as well as the following hold: description (iv) The sequence _n is chain-convergent. (v) The sequence τ_a(_n) converges in [-∞,∞]. (vi) Either (_n)>0 for all n, or (_n)=0 for all n. Suppose (_n,t_n) does not converge in 23(,β). Case 1: (_n)=0 for all n. Then g(_n,t_n)=0 and therefore s_j(_n,t_n)=s_j(_n[t_n]). This case is similar to Part (I) and the corresponding number of ends of 23(,β), counted modulo 2, is (v_2v_3+v_3v_2+dϕ+ϕ d),β, where ϕ is defined as before. Case 2: (_n)>0 for all n. We show this is impossible. By definition of the chain-limit of _n must lie in (,β), so τ_d(_n) is bounded. By Lemma <ref>, the sequence τ^-(_n) is bounded above whereas τ^+(_n) is bounded below, hence both sequences must be bounded. Applying Lemma <ref> again we see that t_n is bounded. Therefore, both sequences τ_a(_n) and t_n converge in , so (_n,t_n) converges in M(,β)× and hence in 23(,β), which we assumed was not the case. Boundary points of 23(,β): Let M=M(3,) be the space of all 3×3 real matrices, and let U⊂ M be the open subset consisting of those matrices B satisfying B_1≠0, B_2∧ B_3≠0, where B_j denotes the jth column of B. Then M∖ U is the union of three submanifolds of codimension at least two, hence U is a connected subspace and a dense subset of M. Let F:3××× U× U →^3×^3×^3, (H,v,w,B^+,B^-) ↦(F_1,F_2,F_3), where F_1 =(1-b(v))HB^-_1+b(v)B^+_1, F_j =(1-b(w))HB^-_j+b(w)B^+_j, j=2,3. Then F is a submersion, so F(0,0,0) is empty. Moreover, the set Z:=F({0}× L(^3), consisting of those points in the domain of F for which F_1=0, F_2∧ F_3=0, is a codimension 5 submanifold and a closed subset of 3×^2× U^2. The projection π:Z→ U^2 is a proper map whose mod 2 degree is _2(π)=1. The equations eqn:FFF imply -1<v,w<1, hence π is proper. To compute its degree, let e_1,e_2,e_3 be the standard basis for ^3 and let B^± be given by B^-_1=B^-_2=e_1, B^-_3=e_2, B^+_1=-e_1, B^+_2=e_1, B^+_3=-e_2. We show that the preimage Z':=π(B^+,B^-) consists of precisely one point. Suppose (H,v,w)∈ Z'. Because 0≤ b≤1, the equation F_1=0 implies b(v)=1/2 and hence v=0, He_1=e_1, F_2=e_1. Because He_2⊥ e_1, the vectors F_2,F_3 are linearly dependent if and only if F_3=0, which yields w=0, He_2=e_2. Thus, Z'={(I,0,0)}, where I is the identity matrix. Using the fact that f(I,0,0)=(0,e_1,0) and that the tangent space to L^*(^3) at (e_1,0) is ^3×{0}+ e_1 it is easy to see that the map F( · , · , · ,B^+,B^-):3××→^9 is transverse to {0}× L^*(^3) at (I,0,0), or equivalently, that (B^+,B^-) is a regular value of π. This proves the claim.□ By Lemma <ref> we can identify ∂23(,β)= (,θ)×(θ,β)×π(A^+,A^-), where (H,v,w) corresponds to (h(),-t-τ_a(),t-τ_a()) for (,t)∈∂23(,β). Hence, for generic matrices A^± the number of boundary points of 23(,β), counted modulo 2, is ',β. This completes the proof of Proposition <ref>. □ § TWO POINTS MOVING ON A CYLINDER, II Let Y be an oriented homology 3–sphere. In this section we will prove Proposition <ref>, which concerns a certain cochain map ψ:C^*(Y)→ C^*+5(Y) appearing in the proof of additivity of . We will continue using the notation introduced in Section <ref>. §.§ The cochain map ψ We begin by recalling the definition of ψ in degrees different from 4 mod 8 given in Subsection <ref>. Let s_1,s_2 be "generic" sections of the canonical 3–plane bundle →^*(Y[0]). (Later we will impose further conditions on s_1,s_2.) For any ,β∈^*(Y) set 33(,β):={(,t)∈× s_1([-t])=0=s_2([t])}. 
If (,β)=5 and ()≢48 then arguing as in Part (I) of the proof of Proposition <ref> one finds that 33(,β) is a finite set. We define the matrix coefficient ψ,β by ψ,β:=#33(,β). Recall that any "generic" section of defines a cup product C^*(Y)→ C^*+3(Y) by the formula eqn:v3def. Let v_3 and v'_3 be the cup products defined by s_1 and s_2, respectively. prop For q≢3,48 one has dψ+ψ d=v_3v'_3+v'_3v_3 as maps C^q(Y)→ C^q+6(Y). Let ,∈^*(Y) with (,)=6 and ()≢3,48. Note that no sequence in M(,) can have a chain-limit involving factorization through the trivial connection. Now let (_n,t_n) be a sequence in 33(,). After passing to a subsequence we may assume that description (i) The sequence ^*_t_n_n converges over compact subsets of to some point ^+∈ M(^+,^+). (ii) The sequence ^*_-t_n_n converges over compact subsets of to some point ^-∈ M(^-,^-). (iii) The sequence t_n converges in [-∞,∞] to some point t_∞. Clearly, s_1(^+[0]=0=s_2(^-[0]), hence (^±,^±)≥3. Case 1: t_∞ finite. Then (^+,^+)=5 and either ^+= or ^+=. The corresponding number of ends of 33(,), counted modulo 2, is (dψ+ψ d),. Case 2: t_∞=∞. Then (^±,^±)=3, so ^-=, ^-=^+, and ^+=. The corresponding number of ends of 33(,) is v_3v'_3, modulo 2. Case 3: t_∞=-∞. As in Case 2 one finds that the number of such ends is v'_3v_3, modulo 2. Since the total number of ends of 33(,) must be zero modulo 2, we obtain the proposition.□ We now show that v_3=v'_3 if the sections s_1,s_2 are close enough in a certain sense. To make this precise, we introduce the following terminology: We will say a section s of has Property 4 if for all ,β∈^*(Y) with (,β)≤4 the map s_β:M(,β)→, ↦ s([0]) is transverse to the zero-section in . Suppose s∈() has Property 4, and let be any finite-dimensional linear subspace of (). Then for any sufficiently small ∈ the following hold: description (i)The section s':=s+ has Property 4. (ii)The sections s and s' define the same cup product C^*(Y)→ C^*+3(Y). Let (,β)=3. Combining the transversality assumption with a compactness argument one finds that the zero-set Z of s_β is a finite set. Now observe that the map equation M(,β)×→, (,)↦(s+)([0]) is smooth, since has finite dimension. Therefore, given any neighbourhood U of Z in M(,β) then the zero-set of (s+)_β is contained in U for all sufficiently small . The lemma now follows by applying the implicit function theorem to the map eqn:sfrpmap.□ From now on we assume that s_1,s_2 are sufficiently close in the sense of the lemma, so that in particular v_3=v'_3. Since we are taking coefficients in /2, we deduce from Proposition <ref> that dψ=ψ d in degrees different from 3 and 4 modulo 8. We now extend the definition of ψ to degree 4. Let ,β∈^*(Y) with ()=4 and (β)=1. To define ψ,β we use the set-up of Subsections <ref> and <ref> and define ρ_j, s_j for j=1,2 as in Subsection <ref>, where A^± should now be generic 3×2 real matrices. In particular, we require that A^± should have non-zero columns and that the angle between the columns of A^+ should be different from the angle between the columns of A^-. For any 3×2 real matrix B with non-zero columns B_j set ν(B):= B_1,B_2/B_1B_2, using the standard scalar product and norm on ^3. Then the above assumption on the angles means that ν(A^+)≠ν(A^-). Now define 33(,β):={(,t)∈× s_1(,-t)=0, s_2(,t)=0}. prop 33(,β) is a finite set. It is easy to see that Lemmas <ref> and <ref> hold with 33(,β) in place of 23(,β). Arguing as in the proof of Proposition <ref> one finds that for any L>0 there are only finitely many points (,t)∈33(,β) with ()≤ L. 
Choose L≥2 such that for all (,t)∈33(,β) with ()≥ L one has |t|≤min(-τ^-(),τ^+())-1, which implies that s_j(,t)=ρ_j(,t). We claim that there are no such (,t). For suppose (,t) is such an element and set (H,v_1,v_2):=(h(),-t-τ_a(),t-τ_a())∈3××. Then for j=1,2 one has (1-b(v_j))HA^-_j+b(v_j)A^+_j=0. However, there is no solution (H,v_1,v_2) to these equations, since we assume the columns A^±_j are non-zero and ν(A^+)≠ν(A^-).□ We define ψ in degree 4 by ψ,β:=#33(,β). prop If the endomorphism ψ is defined in terms of “generic” sections s_1,s_2 that are sufficiently close then dψ=ψ d as maps C^*(Y)→ C^*+6(Y). Although we could deduce this from Proposition <ref> below, we prefer to give a direct proof, partly because the techniques involved are also needed in the proof of Proposition <ref>. It only remains to prove this in degrees 3 and 4 modulo 8. There is a complete symmetry between these two cases because of Lemma <ref>, so we will spell out the proof only in degree 4. Let ,∈^*(Y) with ()=4, ()=2. We will show that (dψ+ψ d),=0 by counting the ends of a certain 1–dimensional submanifold 33(,) of M(,)×. For any '∈(Y) we define a smooth function :M(',)→ as follows. For each β∈^1_Y let K_β be the union of all subsets R^+(M(”,))⊂^*(Y[0]) where β≠”∈(Y) and (”,)≤(β,), where ( · , · ) is as in eqn:cs-al-beta. Then K_β is compact. Choose a closed neighbourhood W_β in ^*(Y[0]) of the finite set R^+(M(β,)) such that W_β is disjoint from K_β, and a smooth function f_β:^*(Y[0])→[0,1] such that the following two conditions hold: * W_β and W_β' are disjoint if β≠β'; * f_β=1 on a neighbourhood of R^+(M(β,)), and f_β=0 outside W_β. Set f:=1-∑_β f_β. Let be the set of all β∈^1_Y such that (',)>(β,)>0. For ∈ M(',) and β∈ we define τ^+_β()∈ implicitly by _([τ^+_β()-2,∞))=(β,)+, where the constant is as in Subsection <ref>, and set ():=f(R^+())·τ^+()+ ∑_β f_β(R^+())·τ^+_β(). The function behaves under translation in the same way as τ^±. Namely, for any real number s one has (^*_s())=()-s. For any ∈ M(',) let () denote the restriction of to the band (). For i=1,2,3 let i be the holonomy invariant section of 'β whose value at (,()) is ρ_i(()). lemma Let _n be a chain-convergent sequence in M(',). If the last term of the chain-limit of _n lies in (β,) for some β∈^*(Y) of index 1 then (τ^+-)(_n)→∞, otherwise the sequence (τ^+-)(_n) is bounded. Because of the translationary invariance of τ^+- we may assume that τ^+(_n)=0. Then _n converges over compact subsets of to some element ∈ M(”,) representing the last term in the chain-limit of _n. In fact, because no energy can be lost at ∞ by the choice of , there are, for any real number r, connections A_n,A representing _n,, respectively, such that A_n-A_L^p,w_1((r,∞)× Y)→0, as follows from the exponential decay results of <cit.>. Here, p,w are as in the definition of the space of connections in Section <ref>. Suppose first that β:=” is irreducible of index 1. Then (_n)=τ^+_β(_n) for n≫0 and (τ^+-τ^+_β)(_n)=-τ^+_β(_n)→∞, proving the first assertion of the lemma. Now suppose the sequence (τ^+-)(_n) is not bounded. After passing to a subsequence we may assume that there exists a β∈ such that for each n one has R^+(_n)∈ W_β. Suppose, for contradiction, that ”≠β. Since W_β is closed we must have R^+()∈ W_β as well, hence (”,)>(β,). From eqn:anai we deduce that τ^+_β(_n)→τ^+_β(), so (-τ^+)(_n)=τ^+_β(_n) is bounded. This contradiction shows that ”=β.□ If _n is a sequence in M(',) which converges over compacta to ∈ M(”,), where ”∈(Y) and (”)≠1, then (_n)→(). 
Let β∈^1_Y with (β,)>0. If (”,)≤(β,) then R^+()∉W_β. Since W_β is closed, we have R^+(_n)∉W_β for n≫0. This means that β contributes neither to () nor to (_n) for n≫0. If on the other hand (”,)>(β,) then τ^+_β(_n)→τ^+_β(). From this the lemma follows.□ Let and be the real-valued functions on M(,) defined by :=1/2(+τ^-), :=1/2(-τ^-). Let :M(,)→[0,∞), ↦ e_(R^-())·(), where e_ is as in eqn:eal. As the following lemma shows, the quantity () measures the extent to which factors through the trivial connection θ over Y. lemma Let _n be a chain-convergent sequence in M(,). If the first term of the chain-limit of _n lies in (,θ) then (_n)→∞, otherwise the sequence (_n) is bounded. Because of the translationary invariance of we may assume τ^-(_n)=0 for all n, so that the sequence _n converges over compact subsets of to some ∈ M(,β), where β∈(Y). Then represents the first term of the chain-limit of _n. Part I. Suppose first that β=θ. We will show that (_n)→∞. There are two sequences 1,2 of real numbers such that itemize * ^*_1(_n) converges over compact subsets of to an element of M(,θ). * ^*_2(_n) converges over compact subsets of to an element of M(θ,β'), where β' is an element of ^*(Y) which is either equal to or has index 1. * 2-1→∞. Define the sequence r_n of real numbers implictly by __n((-∞,r_n])=(,θ)+. Then r_n<τ^+(_n) and r_n<τ^+_β(_n) for all β∈_, hence r_n<(_n). For large n one therefore has (_n)=(_n)-τ^-(_n)>r_n-τ^-(_n). But 1-τ^-(_n), 2-r_n are both bounded sequences and 2-1→∞, hence (_n)>r_n-τ^-(_n)→∞. Part II. Now suppose β is irreducible. We will show that the sequence (_n) is bounded. Case 1: β=. Then _n converges to in M(,), hence (_n) is bounded. Case 2: (,β)≤4. For large n one would then have R^-(_n)∉U_, hence e_(R^-(_n))=0 and therefore (_n)=0. Case 3: (,β)=5, i.e. (β)=1. For large n one would then have R^+(_n)∈ W_β and therefore (_n)=e_(_n[0])·τ^+_β(_n) → e_([0])·τ^+(), so that (_n) is bounded in this case, too.□ Given '∈(Y), a real number d, and a real 3×2 matrix A'=(a'_ij) of maximal rank we define two sections ζ_1,ζ_2 of ' by ζ_j(,t):=b^+ j+(1-b^+)∑_i=1^3a'_ijρ^+_i, where b^+:=b(τ^+--d). Here, and in the remainder of this section, b:→ is a smooth function satisfying eqn:b-prop1 and eqn:b-prop2. We will show that for '= and generic matrix A' the sections ζ_1,ζ_2 are linearly independent at any point (,t)∈ M(,)× with ()≫0. We begin by spelling out sufficient conditions on A' under which this holds. For any β∈^1_Y the finite set (θ,β)×(β,) is in 1-1 correspondence with the set of points (,')∈ M(θ,β)× M(β,) satisfying τ^+()=0=τ^+('). (In other words, this is one way of fixing translation.) For each such pair (,'), represented by a pair (A,A') of connections, say, the holonomy of A along the path [0,∞)×{y_0} composed with the holonomy of A' along (-∞,0]×{y_0} defines an isomorphism _,':_[0]→_'[0]. For any real number r and j=1,2 let η_j(r)=r·_,'(ρ_j([0]))+ (1-r)∑_i=1^3a'_ijρ_i('[0]). Then the set C:={r∈[0,1]η_1(r)∧η_2(r)=0} has expected dimension 1-2=-1 and is empty for generic matrices A'. Since (Y) is finite we conclude that for generic A', the set C is empty for any β∈^1_Y and any (,')∈ M(θ,β)× M(β,) satisfying ttom. From now on we assume A' is chosen so that this holds. lemma Let A' be as described above. If d>0 is sufficiently large then the sections ζ_1,ζ_2 are linearly independent at every point in M(θ,)×. 
If the lemma were false then we could find a sequence d_n of real numbers converging to ∞ and for each n an element _n∈ M(θ,) such that ζ_1,ζ_2, defined with d_n in place of d, are linearly dependent at (_n,t) for some (hence any) t. Because A' has maximal rank and the assumptions on ensure that ρ_1,ρ_2,ρ_3 are linearly independent at R^+(_n), we must have b^+(_n)>0, i.e. (τ^+-)(_n)>d_n-1, which shows that (τ^+-)(_n)→∞. After passing to a subsequence we can assume that the sequence _n is chain-convergent and that b^+(_n) converges to some r∈[0,1]. By Lemma <ref> the chain-limit lies in (θ,β)×(β,) for some β∈^1_Y. Then the sequences ^*_τ^+(_n)(_n), ^*_τ^+_β(_n)(_n) converge over compact subsets of to some ∈ M(θ,β) and '∈ M(β,), respectively, and ttom holds. But then η_1(r) and η_2(r) are linearly dependent, contradicting the assumption on A'.□ From now on we assume that d is chosen so that the conclusion of Lemma <ref> holds. lemma There is a constant T_1<∞ such that the sections ζ_1,ζ_2 are linearly independent at every point (,t)∈ M(,)× with ()>T_1. Recall that if ζ_1,ζ_2 are linearly independent at (,t) for some real number t then the same holds at (,t') for all t'. Now suppose the lemma were false. Then we could find a sequence in M(,) such that ()→∞ and ζ_1(,t),ζ_2(,t) are linearly dependent for every n. We may also arrange that τ^+(_n)=0. After passing to a subsequence we may assume that is chain-convergent. From Lemma <ref> we see that there are two possibilities for the chain-limit. Case 1: The chain-limit of _n lies in (,θ)×(θ,β)×(β,) for some β∈^1_Y. Then (_n)=τ^+_β(_n) for n≫0. Let ∈ M(θ,β) be a representative for the middle term of the chain-limit. By Lemma <ref> we have (τ^+-)(_n)→∞, so for t_n:=() one has ζ_j(_n,t_n)→ρ_j(R^+()), contradicting the fact that the ρ_j are linearly independent at R^+(). Case 2: The chain-limit of _n lies in (,θ)×(θ,). Then _n converges over compact subsets of to some ∈ M(θ,) satisfying τ^+()=0. According to Lemma <ref> we have (_n)→(), so ζ_j(_n,t)→ζ_j(,t) for any t. Hence, ζ_1,ζ_2 must be linearly dependent at (,t). But d was chosen so that the conclusion of Lemma <ref> holds, so we have a contradiction.□ At any point (,t)∈ M(',)× where ζ_1,ζ_2 are linearly independent let ξ_1(,t),ξ_2(,t) be the orthonormal pair of vectors in _[t] obtained by applying the Gram-Schmidt process to ζ_1(,t) and ζ_2(,t), and let ξ_3=ξ_1×ξ_2 be the fibrewise cross-product of ξ_1 and ξ_2. Then {ξ_j(,t)}_j=1,2,3 is a positive orthonormal basis for _[t]. We now have the necessary ingredients to define the cut-down moduli space 33(,). Set c:M(,)×→[0,1], (,t)↦ b(t-()) and for j=1,2,3 define a section _j of the bundle _ over M(,)× by _j:=(1-c)∑_ia^-_ijρ^-_i+c∑_ia^+_ijξ_i. Choose a constant T_1 for which the conclusion of Lemma <ref> holds and define a function g:M(,)×→[0,1] by g(,t):=b(()-T_1)· b(()-t)· b(t-τ^-()). For j=1,2,3 we now define a section s_j of _ by s_j(,t):=(1-g(,t))· s_j([t])+g(,t)·_j(,t). Now set 33(,):={(,t)∈× s_1(,-t)=0, s_2(,t)=0}. In the study of the ends of 33(,) we will encounter certain subspaces of M(θ,) which we now define. For ∈ M(θ,) and j=1,2 set s_j():=(1-b(()))· s_j([0]) +b(())∑_i=1^3a^+_ijξ_i(,0) and define M_3;j(θ,):={∈ M(θ,) s_j()=0}. This space has expected dimension 2-3=-1 and is empty for “generic” choices of sections s_j and matrix A^+. There is a constant C_0<∞ such that for all (,t)∈33(,) one has |t|≤min(-τ^-(),())+C_0. 
That |t|+τ^-() is uniformly bounded above for (,t)∈33(,) is proved in the same way as the corresponding part of Lemma <ref>. To prove the same for |t|-(), suppose there were a sequence (_n,t_n)∈33(,) with |t_n|-(_n)→∞. After passing to a subsequence we may assume the following. * The sequence _n is chain-convergent; * There is a constant e=±1 such that |t_n|=et_n for all n; * The sequence et_n-τ^+(_n) converges in [-∞,∞] to some point t. Let j:=1/2(3+e). Then for n≫0 we have 0= s_j(_n,et_n)=s_j(_n[et_n]). According to Lemma <ref> one of the following two cases must occur. Case 1: The sequence (τ^+-)(_n) is bounded. Then et_n-τ^+(_n)→∞, so _n[et_n]→. By continuity of s_j we must have s_j()=0, which however will not hold for a “generic” section s_j. Case 2: (τ^+-)(_n)→∞. From Lemma <ref> we deduce that ^*_τ^+(_n)(_n) converges over compact subsets of to some ∈ M(β,), where β∈^1_Y. Then (_n)=τ^+_β(_n) for n≫0. Furthermore, ^*_τ^+_β(_n) converges over compacta to an element of some moduli space M(',β), where β≠'∈(Y). Case 2a: t=±∞. Then the exponential decay results of <cit.> imply that _n[et_n] converges to (if t=-∞) or to (if t=∞). This is ruled out in the same way as Case 1. Case 2b: t finite. Then ^*_et_n(_n) converges over compacta to ':=^*_t()∈ M(β,), and _n[et_n]→'[0]. But then s_j('[0])=0, which will not hold for a “generic” section s_j of the bundle , since M(β,) has dimension 1 whereas has rank 3.□ For any constant C_1<∞ there is constant L>0 such that for all (,t)∈33(,) satisfying ()≥ L one has |t|≤min(-τ^-(),())-C_1. If not, then there would be a constant C_1<∞ and a sequence (_n,t_n)∈33(,) with (_n)→∞ such that either (i) |t_n|>-τ^-(_n)-C_1 for all n, or (ii) |t_n|>(_n)-C_1 for all n. Case (i) is rule out as in the proof of Lemma <ref>. Now suppose (ii) holds. Because (_n)→∞ we have (_n)→∞. From Lemma <ref> we deduce that |t_n|-(_n) is bounded, so |t_n|-τ^-(_n)→∞. This implies that c(_n,t_n)=1 for n≫0. After passing to a subsequence we may assume that the sequence _n chain-converges and |t_n|=-et_n for some constant e=±1. Case 1: (τ^+-)(_n) is bounded. By Lemmas <ref> and <ref> the chain-limit of _n must lie in (,θ)×(θ,), so after passing to a subsequence we may assume that '_n:=^*_et_n(_n) converges over compacta to some ∈ M(θ,). Using Lemma <ref> we obtain g(_n,et_n)=b((_n)-et_n)=b(('_n))→ b(()). Let j:=1/2(3+e). Then 0= s_j(_n,et_n)→ s_j(). But then lies in M_3;j(θ,), which is empty by choice of the matrix A^+. Case 2: (τ^+-)(_n)→∞. Then the chain-limit of _n lies in (,θ)×(θ,β)×(β,) for some β∈^1_Y. For large n we now have (_n)=τ^+_β(_n) and ξ_j(_n,et_n)= j(_n,et_n), j=1,2. After passing to a subsequence we may assume that '_n:=^*_et_n(_n) converges over compacta to some ∈ M(θ,β). For large n we have g(_n,et_n)=b(τ^+_β(_n)-et_n)=b(τ^+_β('_n))→ b(τ^+()). Let j:=1/2(3+e). Then 0= s_j(_n,et_n)→(1-b(τ^+()))· s_j([0]) +b(τ^+())∑_ia^+_ijρ^+_i(,0). Thus, lies in M_3(θ,β), which is empty by choice of A^+. □ There is a constant L<∞ such that for all (,t)∈33(,) one has ()<L. For any (,t)∈33(,) with ()>T_1 let h()∈3 be the matrix whose coefficients h_ij() are given by ρ^-_j(,t)=∑_ih_ij()ξ_i(,t). By Lemma <ref> there is an L≥ T_1+1 such that for all (,t)∈33(,) with ()≥ L one has |t|≤min(-τ^-(),())-1, which implies that s_j(,t)=_j(,t). Given such a (,t), the triple (H,v_1,v_2):=(h(),-t-τ_a(),t-τ_a())∈3×× satisfies the equation (1-b(v_j))HA^-_j+b(v_j)A^+_j=0. for j=1,2. 
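For the reader's convenience we spell out why such a system forces ν(A^+)=ν(A^-), and hence cannot hold under the standing assumptions on A^±; this elaboration is ours, and only its conclusion is used.

```latex
% Suppose (1-b(v_j))HA^-_j + b(v_j)A^+_j = 0 for j=1,2, with H orthogonal.
% Since 0 \le b \le 1 and the columns A^\pm_j are non-zero, neither
% coefficient can vanish, so
\[
   HA^-_j \;=\; -c_j A^+_j, \qquad c_j := \frac{b(v_j)}{1-b(v_j)} > 0 \quad (j=1,2).
\]
% As H preserves inner products and norms, it follows that
\[
   \nu(A^-) \;=\; \frac{\langle HA^-_1, HA^-_2\rangle}{\lVert HA^-_1\rVert\,\lVert HA^-_2\rVert}
            \;=\; \frac{c_1 c_2\,\langle A^+_1, A^+_2\rangle}{c_1 c_2\,\lVert A^+_1\rVert\,\lVert A^+_2\rVert}
            \;=\; \nu(A^+),
\]
% contradicting the requirement \nu(A^+) \neq \nu(A^-) made when the matrices
% A^\pm were chosen.
```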
However, as observed in the proof of Proposition <ref>, these equations have no solution for generic matrices A^±.□ We will now prove Proposition <ref> in degree 4 by counting the number of ends of 33(,) modulo 2. Ends of 33(,): Let (_n,t_n) be a sequence in 33(,). After passing to a subsequence we may assume that the following hold: (i) The sequences ^*_-t_n(_n) and ^*_t_n(_n) converge over compact subsets of . (ii) The sequence ^*_τ^-(_n)(_n) converges over compacta to some ∈ M(,β), where β∈(Y). (iii) The sequences t_n and τ^-(_n) converge in [-∞,∞]. Suppose (_n,t_n) does not converge in 33(,). Case 1: β=. We show this cannot happen. First observe that the sequence (_n) converges in . Since Lemma <ref> provides an upper bound on τ^-(_n) and a lower bound on (_n) it follows that both sequences must be bounded. Applying the same lemma again we see that |t_n| is bounded. But then assumptions (ii) and (iii) imply that (_n,t_n) converges in 33(,), which we assumed was not the case. Case 2: β irreducible, M(,β)≤4. Then (_n)=0 for n≫0. As in the proof of Proposition <ref> we find that the corresponding number of ends of 33(,) is ψ d,. Case 3: β irreducible, M(,β)=5. Then (_n)=τ^+_β(_n) for n≫0, and (_n)→τ_d(). As in Case 1 we see that the sequences τ^-(_n) and t_n must be bounded, hence they both converge in by assumption (iii). From (ii) we deduce that _n converges over compacta to some '∈ M(,β) (related to by a translation). By Lemma <ref> we have ξ_j(_n,t)= j(_n,t) for n≫0 and any t, so _j(_n,t)→_j(',t). Setting t':=lim t_n we conclude that (',t')∈33(,β). The corresponding number of ends of 33(,) is dψ,.□ §.§ Calculation of ψ There are constants ^±∈/2 independent of Y and satisfying ^++^-=1 such that if ψ is defined in terms of “generic” sections s_1,s_2 that are sufficiently close and e is the sign of ν(A^+)-ν(A^-) then there is a homomorphism Ξ:C^*(Y)→ C^*+4(Y) such that ψ=v_3v_2+^e'+dΞ+Ξ d. To be precise, if s'∈() satisfies Property 4 and ⊂() is any sufficiently large finite-dimensional linear subspace then for any sufficiently small generic (_0,_1)∈× the conclusion of the proposition holds with s_j=s'+_j. The above proposition completes the proof of Proposition <ref> except for the order of v_2,v_3, which is insignificant in view of Proposition <ref>. (The order could be reversed by a small change in the proof given below.) Let ,β∈^*(Y) with (,β)=5. Part (I) Suppose ()≢48. For -3≤ y≤3 we define a section χ_y of by 6χ_y:=(3-y)s_1+(3+y)s_2. In particular, χ_-3=s_1, χ_3=s_2. Let :={z∈:|(z)|≤3, |z|≥1} and let ':=/±1 be the surface-with-boundary obtained by identifying each z∈ with -z. The image of a point z∈ in ' will be denoted by [z]. Let ξ̅∈(), and let ξ̂ be a section of the bundle × S^1 over ^*(Y[0])× S^1 satisfying ξ̂(,-z)=-ξ̂(,z), so that ξ̂∈_a() in the notation of Section <ref>. We then define a section ξ of the bundle × over ^*(Y[0])× as follows. Let b_1(z):=b(|z|-2). For ∈^*(Y[0]) and z=(x,y)∈ let ξ(,z):=(1-b_1(z))·(ξ̅()+ξ̂(,z/|z|)) +b_1(z)χ_y(). Let f:→ be the smooth function given by f(z):=b_1(z)(z). Note that f(z)=(z) for |z|≥3, and f(z)=0 for |z|=1. Moreover, f(-z)=-f(z). (i) Let =(,β) be the subspace of M(,β)×' consisting of those points (,[z]) such that ξ([f(z)],z)=0, ξ([f(-z)],-z)=0. (ii) Let =(,β) be the subspace of M(,β)× S^1×[0,∞) consisting of those points (,z^2,r) such that z∈ S^1 and ξ̂([-r],z)=0, ξ̅([r])=0. If ξ̅ is “generic” and ξ̂ is given by a “generic” section of ⊗ (see Lemma <ref>) then will be a smooth 1–manifold-with-boundary.
Now choose a section s'∈() satisfying Property 4. If is a sufficiently large finite-dimensional linear subspace of () and (_0,_1) a generic element of × then taking s_j=s'+_j, j=1,2 the space will be a smooth 1–manifold-with-boundary. (The reason that transversality can be achieved over the boundary component of M(,β)×' given by |z|=1 is essentially that if V is any real vector space then every element of V× V can be written as (a+b,a-b) for suitable a,b∈ V.) If in addition _0,_1 are sufficiently small then for -3≤ y≤3 the section χ_y will satisfy Property 4 and define the same cup product v_3:C^*(Y)→ C^*+3(Y) as s', by Lemma <ref>. The part of the boundary of given by |z|=1 can be identified with the boundary of (defined by r=0). To see this, let (,z)∈ M(,β)× and set _0:=[0]. Then (,[z])∈ if and only if ξ̅(_0)+ξ̂(_0,z)=0=ξ̅(_0)-ξ̂(_0,z), which in turn is equivalent to (,z^2,0)∈. This allows us to define a topological 1–manifold-with-boundary =(,β) as a quotient of the disjoint union ∐ by identifying each boundary point of with the corresponding boundary point of . The proposition will be proved by counting the ends and boundary points of modulo 2. Before doing this, we pause to define the homomorphism Ξ. Let ',β'∈^*(Y) with (',β')=4. Replacing (,β) by (',β') in Definition <ref> yields zero-dimensional manifolds _j(',β'), j=1,2. The argument that we will give below to determine the ends of _j(,β) can also be applied to show that _j(',β') is compact. Granted this, we define Ξ:=Ξ_1+Ξ_2, where Ξ_j has matrix coefficient Ξ_j',β':=#_j(',β'). Ends of (,β): Let (_n,[z_n]) be a sequence in (,β), where z_n=(x_n,y_n)∈^2. After passing to a subsequence we may assume that (i) The sequence ^*_-x_n(_n) converges over compact subsets of to some ^-∈ M(^-,β^-). (ii) The sequence ^*_x_n(_n) converges over compact subsets of to some ^+∈ M(^+,β^+). (iii) The sequence (x_n,y_n) converges in [-∞,∞]×[-3,3] to some point (x,y). Suppose (_n,[z_n]) does not converge in (,β). Case 1: x finite. Then (^+,β^+)=4 and either ^+= or β^+=β. The corresponding number of ends of (,β) is (dΞ_1+Ξ_1d),β modulo 2. Case 2: x=±∞. Then for n≫0 one has 0=ξ([± x_n],± z_n)→χ_± y(^±[0]). Hence χ_± y(^±[0])=0. Since χ_± y satisfy Property 4 we must have (^±,β^±)≥3, so 5=(,β)≥(^-,β^-)+(^+,β^+)≥6. This contradiction shows that there are no ends in the case x=±∞. Ends of (,β): We argue as in part (I) of the proof of Proposition <ref>. Let (_n,z_n^2,r_n) be a sequence in (,β). After passing to a subsequence we may assume that r_n converges in [0,∞] to some point r. Then the number of ends modulo 2 corresponding to r<∞ is (dΞ_2+Ξ_2d),β. Using Proposition <ref> and Lemma <ref> we see that the number of ends corresponding to r=∞ is v_3v_2,β. Boundary points of (,β): These are the points (,[z]) in M(,β)×' where (z)=3 and 0=ξ([x],z)=s_2([x]), 0=ξ([-x],-z)=s_1([-x]). The number of such points is by definition ψ,β. Since the number of ends plus the number of boundary points of must be zero modulo 2 we obtain the equation eqn:psi-v3v2 in the case ()≢48. Part (II) Suppose ()≡48. We define maps V^±:[-3,3]→^3 by 6V^±(y):=(3-y)A^±_1 +(3+y)A^±_2. Choose generic elements L̅^±∈^3 and functions L̂^±:S^1→^3 satisfying L̂^±(-z)=-L̂^±(z) for z∈ S^1. We define maps L^±:→^3 by L^±(z):=(1-b_1(z))· (L̅^±+L̂^±(z/|z|))+ b_1(z)· V^±((z)), where the function b_1 is as in eqn:b1def. Let (,β) be the vector bundle over × obtained by pulling back the bundle →^*(Y[0]) by the map ×→^*(Y[0]), (,z)↦[f(z)].
Let c and g be the functions defined in eqn:c23def and eqn:gomt, respectively. We define sections ,s of (,β) by (,z):=(1-c(,f(z)))∑_i=1^3L^-_i(z)ρ^-_i(,f(z)) +c(,f(z))∑_i=1^3L^+_i(z)ρ^+_i(,f(z)), s(,z):=(1-g(,f(z)))·ξ([f(z)],z)+g(,f(z))·(,z). Let =(,β) be the subspace of ×' consisting of those points (,[z]) such that s(,z)=0, s(,-z)=0. We define sections ,s̅ of the bundle (,β) over × by (,r):=(1-c(,r))∑_i=1^3L̅^-_iρ^-_i(,r) +c(,r)∑_i=1^3L̅^+_iρ^+_i(,r), s̅(,r):=(1-g(,r))·ξ̅([r])+g(,r)·(,r). Let (,β) be the vector bundle over × S^1× obtained by pulling back the bundle by the map × S^1×→ Y[0], (,z,r)↦[r]. We define sections ,ŝ of (,β) by (,z,r):=(1-c(,r))∑_i=1^3L̂^-_i(z)ρ^-_i(,r) +c(,r)∑_i=1^3L̂^+_i(z)ρ^+_i(,r), ŝ(,z,r):=(1-g(,r))·ξ̂([r],z)+g(,r)·(,z). Note that ŝ(,-z,r)=-ŝ(,z,r). Let =(,β) be the subspace of × S^1×[0,∞) consisting of those points (,z^2,r) such that z∈ S^1 and ŝ(,z,-r)=0, s̅(,r)=0. By inspection of the formulas involved one finds that for |z|=1 one has (,0)+(,z,0) =(,z), s̅(,0)+ŝ(,z,0) =s(,z). Therefore, the part of the boundary of given by |z|=1 can be identified with the boundary of (defined by r=0). By gluing and correspondingly we obtain a topological 1–manifold-with-boundary . There is a constant C_0<∞ such that for all (,[z])∈ one has |f(z)|≤min(-τ^-(),τ^+())+C_0. The proof is similar to that of Lemma <ref>. We must provide upper bounds on both quantities |f(z)|+τ^-() and |f(z)|-τ^+() for (,[z])∈. The proof is essentially the same in both cases, so we will only spell it out in the second case. Suppose, for contradiction, that (_n,[z_n]) is a sequence in with |f(z)|-τ^+(_n)→∞. By perhaps replacing z_n by -z_n we can arrange that (z_n)≥0. Then f(z_n)≥0 as well, and g(_n,f(z_n))=0 for n≫0. Let z_n=(x_n,y_n). After passing to a subsequence we may assume that z_n converges in [0,∞]×[-3,3] to some point (x,y). Case 1: x finite. Let z:=(x,y)∈. The sequence _n converges to over compact subsets of , so for large n we have 0=ξ(_n[f(z_n)],z_n)→ξ(,z). However, the space of all w∈ for which ξ(,w)=0 has expected dimension 2-3=-1, so this space is empty for “generic” sections s_1,s_2,ξ̅,ξ̂. Hence, x cannot be finite. Case 2: x=∞. Then f(z_n)=x_n for large n. Now, ^*_x_n_n converges over compacta to , so for large n we have 0=ξ(_n[x_n],z_n)=χ_y_n(_n[x_n])→χ_y(). However, the space of all t∈[-3,3] for which χ_t()=0 has expected dimension 1-3=-2, so this space is empty for “generic” sections s_1,s_2. Hence, x≠∞. This contradiction proves the lemma.□ In the proof of Lemma <ref> below we will encounter certain limits associated to sequences in with chain-limits in (,θ)×(θ,β). These limits lie in cut down moduli spaces analogous to those introduced in Definitions <ref> and <ref>, with M(,θ) or M(θ,β) in place of . We now define these cut-down spaces in the case of M(θ,β) and observe that they are “generically” empty. The case of M(,θ) is similar. For any (,z)∈× let s(,z):= (1-b(τ^+()-f(z)))·ξ([f(z)],z) +b(τ^+()-f(z))∑_i=1^3L^+_i(z)ρ^+_i(,f(z)). Let (θ,β) be the subspace of M(θ,β)×' consisting of those points (,[z]) such that s(,z)=0, s(,-z)=0. Then (θ,β) has expected dimension 3-6=-3 and is empty for “generic” sections s_1,s_2,ξ̅,ξ̂ and generic choices of A^+,L̅^+,L̂^+. Let (θ,β) be the subspace of M(θ,β)×[-3,3] consisting of those points (,y) such that (1-b(τ^+()))·χ_y([0]) +b(τ^+())∑_iV^+_i(y)ρ^+_i(,0)=0. 
We observe that the space (θ,β) (a parametrized version of the space M_3(θ,β) defined in Subsection <ref>) has expected dimension 2-3=-1 and is empty for “generic” sections s_1,s_2 and generic matrix A^+. For any constant C_1<∞ there is constant L>0 such that for all (,[z])∈ satisfying ()≥ L one has |f(z)|≤min(-τ^-(),τ^+())-C_1. The proof is similar to that of Lemma <ref>. If the lemma did not hold there would be a sequence (_n,[z_n]) in such that (_n)→∞ and one of the following two conditions hold: (i) |f(z_n)|>-τ^-(_n)-C_1 for all n, (ii) |f(z_n)|>τ^+(_n)-C_1 for all n. Suppose (ii) holds, the other case being similar. By replacing z_n by -z_n, if necessary, we can arrange that (z_n)≥0. From Lemma <ref> we deduce that the sequence f(z_n)-τ^+(_n) is bounded, whereas f(z_n)-τ^-(_n)→∞. For large n we therefore have c(_n,f(z_n))=1, g(_n,f(z_n))=b(τ^+(_n)-f(z_n)). Let z_n=(x_n,y_n). After passing to a subsequence we may assume that * '_n:=^*_x_n_n converges over compact subsets of to some '∈ M(θ,β); * z_n converges in [0,∞]×[-3,3] to some point z=(x,y). Case 1: x finite. Then _n converges over compacta to some ∈, and 0=s(_n,z_n)→ s(,z). Because the sequence z_n is bounded, we also have c(_n,f(-z_n))=1 for large n, so 0=s(_n,-z_n)→ s(,-z). But then (,[z]) belongs to (θ,β), contradicting the fact that that space is empty. Case 2: x=∞. Since τ^+('_n)=τ^+(_n)-x_n, we obtain g(_n,f(z_n))=b(τ^+('_n)) for n≫0. Therefore, 0=s(_n,z_n)→ (1-b(τ^+(')))·χ_y('[0]) +b(τ^+('))∑_iV^+_i(y)ρ^+_i(',0). But this means that (',y) belongs to (θ,β), which is empty. This contradiction proves the lemma.□ There is a constant C_0<∞ such that for all (,z^2,r)∈ one has r≤min(-τ^-(),τ^+())+C_0. This is similar to the proof of Lemma <ref>.□ For any constant C_1<∞ there is constant L>0 such that for all (,z^2,r)∈ satisfying ()≥ L one has r≤min(-τ^-(),τ^+())-C_1. This is similar to the proof of Lemma <ref>.□ Choose L≥2 such that the conclusions of Lemmas <ref> and <ref> hold with C_1=1. For all (,[z])∈ with ()≥ L we then have s(,z)=(,z), and for all (,z^2,r)∈ with ()≥ L we have ŝ(,z,-r)=(,z,-r), s̅(,r)=(,r). From Lemma <ref> it follows that L is a regular value of the real functions on and defined by . Therefore, :={(,[z])∈()≤ L}, :={(,z^2,r)∈()≤ L} are smooth 1–manifolds-with-boundary, and ^L:=∪ is a topological 1–manifold-with-boundary. (As before we identify the part of given by |z|=1 with the part of given by r=0.) Ends of ^L: From Lemma <ref> we deduce that every sequence (_n,[z_n]) in which satisfies (_n)>0 has a convergent subsequence. Similarly, it follows from Lemma <ref> that every sequence (_n,z_n^2,r_n) in with (_n)>0 has a convergent subsequence. (See the proof of Proposition <ref>, “Ends of 23(,β)”, Case 2.) Therefore, all ends of ^L are associated with sequences on which =0. The number of such ends, counted modulo 2, is given by the same formula as in Part (I), namely (v_3v_2+dΞ+Ξ d),β. Boundary points of ^L: The boundary of ^L decomposes as ^L=∪'∪, where and are the parts of the boundaries of and , respectively, given by ()=L, and ' is the part of the boundary of given by (z)=±3. By choice of matrices A^± there are no points (,t)∈33(,β) with ()≥ L, hence W'_=33(,β) and #'=ψ,β. By Lemma <ref> we can identify =(,θ)×(θ,β)×, =(,θ)×(θ,β)×, where is the set of points (H,τ,[z]) in 3××' satisfying (1-b(f(z)-τ))HL^-(z)+b(f(z)-τ)L^+(z)=0, (1-b(f(-z)-τ))HL^-(-z)+b(f(-z)-τ)L^+(-z)=0, whereas is the set of points (H,τ,z^2,r) in 3×× S^1×[0,∞) satisfying (1-b(-r-τ))HL̂^-(z)+b(-r-τ)L̂^+(z)=0, (1-b(r-τ))HL̅^-+b(r-τ)L̅^+=0.
Here, (H,τ) corresponds to (h(),τ_a()). It follows from these descriptions that #(∪)=',β, where =#(∪)∈/2 is independent of the manifold Y. To prove the theorem it only remains to understand the dependence of on the pair of matrices A=(A^+,A^-). To emphasize the dependence on A we write =(A) and =(A). The space is independent of A. The part of corresponding to |z|=1 is also independent of A and is empty for generic L̅,L̂ for dimensional reasons. Let P denote the space of all pairs (B^+,B^-) of 3×2 real matrices with non-zero columns B^±_j. Let P^±:={(B^+,B^-)∈ P±(ν(B^+)-ν(B^-))>0}, where ν is as in eqn:nuB. Note that each of P^+,P^- is homotopy equivalent to S^2× S^2 and therefore path connected. For any smooth path C:[0,1]→ P we define :=⋃_0≤ t≤1(C(t))×{t}⊂3××'×[0,1]. As observed above there are no points (H,τ,[z],t) in with |z|=1. Since b_1(z)>0 for |z|>1 we can therefore make regular (i.e. transversely cut out) by varying C alone. If is regular then it is a compact 1–manifold-with-boundary, and =(C(0))∪(C(1))∪ X_C, where X_C is the set of points (H,τ,x,t) in 3×××[0,1] satisfying the two equations (1-b(x-τ))HC^-_1(t)+b(x-τ)C^+_1(t)=0, (1-b(-x-τ))HC^-_2(t)+b(-x-τ)C^+_2(t)=0. It follows that (C(0))+(C(1))=#X_C. If A,B∈ P^+ then we can find a path C:[0,1]→ P^+ from A to B. Then X_C is empty. By perturbing C(t) for 0<t<1 we can arrange that is regular. This yields (A)=(B). The same holds if A,B∈ P^-. Let ^± be the value that takes on P^±. To compute ^++^-, let (e_1,e_2,e_3) be the standard basis for ^3 and define C:[0,1]→ P by -C^+_1(t) =C^-_1(t):=e_1, -C^+_2(t) :=(1-t)e_1+te_2, C^-_2(t) :=(1-t)e_2+te_1. Then C(0)∈ P^+ and C(1)∈ P^-. Moreover, X_C consists of the single point (I,0,0,1/2), and this point is regular. (Here I is the identity matrix.) If we perturb C a little in order to make regular then X_C will still consist of a single, regular point. We conclude that ^++^-=#X_C=1. This completes the proof of the proposition.□ § INSTANTONS REDUCIBLE OVER OPEN SUBSETS The following proposition is implicit in <cit.> but we include a proof for completeness. Let X be an oriented connected Riemannian 4–manifold and E→ X an oriented Euclidean 3–plane bundle. Suppose A is a non-flat ASD connection in E which restricts to a reducible connection over some non-empty open set in X. Then there exists a rank 1 subbundle of E which is preserved by A. This is a simple consequence of the unique continuation argument in the proof of <cit.>. The proof has two parts: local existence and local uniqueness. (i) Local existence. By unique continuation, every point in X has a connected open neighbourhood V such that A|_V is reducible, i.e. there exists a non-trivial automorphism u of E|_V such that ∇_Au=0. The 1–eigenspace of u is then a line bundle preserved by A. (ii) Local uniqueness. Because A is not flat, it follows from unique continuation that the set of points in X where F_A=0 has empty interior. Now let V be any non-empty connected open set in X and suppose A preserves a rank 1 subbundle ⊂ E|_V. We show that is uniquely determined. Let x∈ V be a point where F_A≠0. By the holonomy description of curvature (see <cit.>) we can find a loop in V based at x such that the holonomy _(A) of A along is close to but different from the identity. The 1–eigenspace of _(A) is then 1–dimensional and must agree with the fibre _x. If x' is an arbitrary point in V then there is a similar description of _x' in terms of the holonomy of A along a loop obtained by conjugating with a path in V from x to x'. 
□ § UNIQUE CONTINUATION ON A CYLINDER As in Subsection <ref> let Y be a closed oriented connected 3-manifold and P→ Y an 3 bundle. If Y is not an integral homology sphere then we assume P is admissible. Let J⊂ be an open interval. We consider the perturbed ASD equation for connections in the bundle J× P→ J× Y obtained by adding a holonomy perturbation to the Chern-Simons function. For a connection A in temporal gauge the equation takes the form ∂ A_t/∂ t=-*F(A_t)+V(A_t), where A_t is the restriction of A to the slice {t}× P and V is the formal gradient of the perturbation. The following proposition is probably well known among experts, but we include a proof for completeness. Suppose A,A' are perturbed ASD connections in the bundle J× P→ J× Y. If A and A' are in temporal gauge and A_T=A'_T for some T∈ J, then A=A'. We will apply (an adaptation of) the abstract unique continuation theorem in <cit.>. To this end, fix an arbitrary connection B in P and let c_t=A_t-A'_t, a_t=A_t-B, a'_t=A'_t-B. We have F(A_t)=F(B)+d_Ba_t+a_t∧ a_t and similarly for A'_t, so ∂ c_t/∂ t+*d_Bc_t=-*(a_t∧ c_t+c_t∧ a'_t) +V(A_t)-V(A'_t). By <cit.> we have ‖V(A_t)-V(A'_t)‖_L^2≤‖c_t‖_L^2, hence ‖∂ c_t/∂ t+*d_Bc_t‖_L^2≤ϕ(t)‖c_t‖_L^2 where ϕ(t)=(‖a_t‖_∞+‖a'_t‖_∞+1). Because *d_B is a formally self-adjoint operator on 1–forms on Y and ϕ is locally square integrable (in fact, continuous), we deduce from <cit.> that for any compact subinterval [t_0,t_1] of J there are constants C_0,C_1 such that for t_0≤ t≤ t_1 one has ‖c_t‖_L^2≥‖c_t_0‖_L^2·exp(C_0t+C_1). (<cit.> considers the case when c_t is defined for 0≤ t<∞, but the approach works equally well in our case.) Taking t_1=T we obtain c_t=0 for t<T. Replacing c_t by c_-t we get c_t=0 for t>T as well.□ AS1 M. F. Atiyah and I. M. Singer. The index of elliptic operators: I. Ann. of Math., 87:484–530, 1968. BD1 P. J. Braam and S. K. Donaldson. Floer's work on instanton homology, knots and surgery. In H. Hofer, C. H. Taubes, A. Weinstein, and E. Zehnder, editors, The Floer Memorial Volume, pages 195–256. Birkhäuser, 1995. DHST1 I. Dai, J. Hom, M. Stoffregen, and L. Truong. An infinite-rank summand of the homology cobordism group. arXiv:1810.06145. D1 S. K. Donaldson. An application of gauge theory to four dimensional topology. J. Diff. Geom., 18:279–315, 1983. D2 S. K. Donaldson. The orientation of Yang–Mills moduli spaces and 4–manifold topology. J. Diff. Geom., 26:397–428, 1987. D5 S. K. Donaldson. Floer Homology Groups in Yang–Mills Theory. Cambridge University Press, 2002. DK S. K. Donaldson and P. B. Kronheimer. The Geometry of Four-Manifolds. Oxford University Press, 1990. Miller-Eismeier1 M. Miller Eismeier. Equivariant instanton homology. arXiv:1907.01091. FS2 R. Fintushel and R. J. Stern. Definite 4–manifolds. J. Diff. Geom., 28:133–141, 1988. F1 A. Floer. An instanton invariant for 3–manifolds. Comm. Math. Phys., 118:215–240, 1988. Fr0 K. A. Frøyshov. On Floer homology and 4–manifolds with boundary, 1995. D.Phil. thesis, University of Oxford. Fr1 K. A. Frøyshov. The Seiberg–Witten equations and four-manifolds with boundary. Math. Res. Lett., 3:373–390, 1996. Fr3 K. A. Frøyshov. Equivariant aspects of Yang–Mills Floer theory. Topology, 41:525–552, 2002. Fr7 K. A. Frøyshov. An inequality for the h–invariant in instanton Floer theory. Topology, 43:407–432, 2004. Fr13 K. A. Frøyshov. Compactness and gluing theory for monopoles, volume 15 of Geometry & Topology Monographs. Geometry & Topology Publications, 2008. Fr4 K. A. Frøyshov. Monopole Floer homology for rational homology 3–spheres. Duke Math.
J., 155:519–576, 2010. Fr14 K. A. Frøyshov. 4–manifolds and intersection forms with local coefficients. J. Diff. Geom., 91:233–259, 2012. Hirsch M. W. Hirsch. Differential Topology. Springer, 1976. HM D. Husemoller and J. Milnor. Symmetric Bilinear Forms. Springer-Verlag, 1973. Kotsch1 D. Kotschick. SO(3)–invariants for 4-manifolds with b_2^+=1. Proc. London Math. Soc., 63(3):426–448, 1991. KM3 P. B. Kronheimer and T. S. Mrowka. Embedded surfaces and the structure of Donaldson's polynomial invariants. J. Diff. Geom., 41:573–734, 1995. KM5 P. B. Kronheimer and T. S. Mrowka. Monopoles and Three-Manifolds. Cambridge University Press, 2007. KM7 P. B. Kronheimer and T. S. Mrowka. Knot homology groups from instantons. J. Topology, 4:835–918, 2011. Jeffrey-Lee-Manifolds-DG Jeffrey M. Lee. Manifolds and Differential Geometry. AMS, 2009. NST1 Y. Nozaki, K. Sato, and M. Taniguchi. Filtered instanton Floer homology and the homology cobordism group. arXiv:1905.04001. Ogawa H. Ogawa. Lower bounds for solutions of differential inequalities in Hilbert space. Proc. AMS, 16:1241–1243, 1965. OS6 P. S. Ozsváth and Z. Szabó. On the Floer homology of plumbed three-manifolds. Geometry & Topology, 7:185–224, 2003. Scaduto2 Ch. W. Scaduto. On definite lattices bounded by a homology 3–sphere and Yang-Mills instanton Floer theory. arXiv:1805.07875. Scaduto1 Ch. W. Scaduto. Instantons and odd Khovanov homology. J. Topology, 8(3):744––810, 2015. University of Oslo, Norway Email: [email protected]
http://arxiv.org/abs/2307.05279v1
20230711141638
DRAMP: Double-RIS Assisted Multihop Routing Protocol for Wireless Networks
[ "Lakshmikanta Sau", "Priyadarshi Mukherjee", "Sasthi C. Ghosh" ]
eess.SY
[ "eess.SY", "cs.SY" ]
DRAMP: Double-RIS Assisted Multihop Routing Protocol for Wireless Networks Lakshmikanta Sau, Priyadarshi Mukherjee, and Sasthi C. Ghosh L. Sau, P. Mukherjee, and S. C. Ghosh are with the Advanced Computing & Microelectronics Unit, Indian Statistical Institute, Kolkata 700108, India. (E-mail: [email protected], [email protected], [email protected]). This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Reconfigurable intelligent surfaces (RISs) are a promising solution for enhancing the performance of multihop wireless communication networks. In this paper, we propose a double-RIS assisted multihop routing protocol for a device-to-device (D2D) communication network. Specifically, the protocol is dependent on the already deployed RISs and users in the surroundings. Besides the RISs, the emphasis of this work is to make more use of the existing intermediate users (IUs), which can act as relays. Hence, the density of RIS deployment in the surroundings can be reduced, which leads to the avoidance of resource wastage. However, we cannot solely depend on the IUs because this implies complete dependence on their availability for relaying and as a result, the aspect of reliability in terms of delay-constrained information transfer cannot be guaranteed. Moreover, the IUs are considered capable of energy harvesting and as a result, they do not waste their own energy in the process of volunteering to act as a relay for other users. Numerical results demonstrate the advantage of the proposed protocol over some existing approaches and lastly, useful insights related to the protocol design are also drawn, where we characterize the maximum acceptable delay at each hop under different set-ups. Reconfigurable intelligent surfaces, device-to-device communication, multihop network, line-of-sight wireless channels, energy harvesting, Markov chains. § INTRODUCTION In recent years, wireless traffic has been increasing at an explosive rate; it is expected to increase more than five times between 2019 and 2025 <cit.>. To support this need for enhanced data traffic, technologies such as beamforming and adaptive modulation have been developed over the last few decades. However, irrespective of the technicalities, the unifying motivation behind all of them is to intelligently adapt to the randomly varying wireless channel instead of having control over it. In this context, a new technology that promises to address this issue is the so-called reconfigurable intelligent surfaces (RISs) <cit.>. A RIS, consisting of an array of reconfigurable passive elements embedded on a flat metasurface, is able to `control' the wireless channel instead of adapting to it <cit.>. This is essentially done by tuning the parameters of its passive elements. Furthermore, as a RIS simply reflects the incident signal in a desired direction, it does not need any radio-frequency (RF) chains.
As a result, this reduces the hardware cost, thereby enhancing the energy efficiency of future wireless networks. Motivated by this, the aspect of RIS assisted device-to-device (D2D) communications <cit.> forms an interesting direction of research. The work in <cit.> investigates the role of RISs for enhancing the energy efficiency of a D2D communication network. The authors in <cit.> focus on the uplink of a RIS assisted D2D enabled cellular network. Moreover, the objective of obtaining high speed data rates is efficiently fulfilled by the use of high frequency signals, such as millimeter waves (mmWaves) <cit.>, for short distance communication. However, mmWaves suffer from their own set of shortcomings, such as significantly high penetration and propagation losses. Thus, a RIS assisted D2D network is the solution for such scenarios, where the direct line of sight (LoS) is not of sufficient quality to support mmWave-based communication <cit.>. As a result, RISs are strategically placed at locations where they have clear LoS links with both the users intending to communicate with each other. In this context, the authors in <cit.> investigate the aspect of strategic RIS placement. Furthermore, these works consider a primary reflection-based single RIS system, i.e., the signal from a given user reaches its desired counterpart on being reflected by a single RIS placed within its communication range. As a result, the number of strategic RIS locations obtained in a particular environment is significantly large. On the other hand, the work in <cit.> demonstrates that, by proper tuning of the RISs, multi-RIS secondary reflection can be leveraged to significantly enhance the range of communication. It is also to be noted that RISs are essentially passive devices, i.e., they simply reflect the incoming signal in a desired direction by tuning their parameters. As any given pair of users always communicates over a practically finite amount of time, having RISs deployed for all such potential pairs leads to unnecessary wastage of resources. A potential solution to avoid this wastage is a cooperative multihop framework <cit.>. In other words, apart from using the RISs, the other users present in the surroundings, if they are idle, may act as a relay, namely amplify and forward (AF) or decode and forward (DF) <cit.>, to facilitate the communication between a given pair of users. While AF relays facilitate low cost processing, they also result in boosting the effective noise at the desired user. On the other hand, DF relays guarantee high reliability as they forward only the received information and not the entire information-plus-noise mixture to the next hop <cit.>. Thus, it appears that for the intermediate idle users, opting to act as a DF relay is a beneficial solution. Note that we cannot rule out the usage of RISs completely. If the communication between a pair of users is solely dependent on the intermediate users, the aspect of reliability, in terms of delay-constrained information transfer, cannot be guaranteed. In that case, the time for the entire communication process will be entirely dependent on the traffic characteristics of these intermediate users. Moreover, it may appear that the users which agree to act as relays are doing so by depleting their own energy source.
In this context, we consider the green coexistence paradigm, i.e., all the users are equipped with an energy harvesting (EH) unit and if they agree to act as a relay, they do harvest energy from the received signal. This harvested energy can be interpreted as some `reward' for volunteering to act as relay in their idle time. By considering an over-simplified linear EH model instead of the actual non-linear one <cit.>, the works in <cit.> propose a similar approach in the context of wireless sensor networks. Furthermore, these works do not consider the impact of the user traffic characteristics, which, in turn, are responsible for the EH time interval. Motivated by this, we consider the aspect of multi-RIS secondary reflection to look into a RIS assisted multihop D2D framework for wireless networks, where the users are capable of harvesting energy, while acting as DF relays depending on their availability. To the best of our knowledge, this is the first work that proposes a dynamic multihop routing protocol, which we call double-RIS assisted multihop routing protocol (DRAMP), for a D2D communication network. Specifically, DRAMP is based on the already deployed RISs and the users in the surroundings. It describes the procedure by which information transfer takes place from a particular user to its desired counterpart. Moreover, we assume that the RISs are strategically placed in the environment <cit.> and the idle intermediate users (IUs) agree to act as DF relay nodes. Our priority is to make more use of the idle IUs over the RISs due to the following reasons. Firstly, the RISs reflect the entire incoming signal, i.e., including the noise, in the desired direction whereas a DF relay separates the noise from the information to transmit only the latter to the next user. Secondly, since RISs are passive devices, installing too many of them in the surroundings leads to unnecessary wastage of resources. Lastly, opting for a RIS over an idle IU as a hop implies frequent restart of the former, which creates problems for other users that are being served by this particular RIS at that time. However, the importance of the RISs cannot be ruled out completely. Being solely dependent on the IUs for information transfer may also hamper the reliability of the entire process, as it will fully rely on the IU availability and traffic characteristics. Useful insights related to the proposed protocol are also obtained in this work, where we characterize the maximum acceptable delay under different scenarios. Finally, the numerical results demonstrate the benefit of the proposed protocol in terms of reduced RIS usage, enhanced data rate, and energy efficiency. The rest of this paper is organized as follows: Section II describes the system model and the problem formulation, Section III presents the proposed strategy, and Section IV investigates the same in terms of the delay associated with information transfer. Numerical results are presented in Section V and finally, Section VI concludes the work. § SYSTEM MODEL §.§ Network topology A wireless network topology is considered, which consists of a source S, K RISs R_1,R_2,⋯,R_K, M IUs U_1,U_2,⋯,U_M, and destination D[Network topology with multiple S-D pairs can also be considered, which is left for future work.]. Each transmitter, i.e., the source S or an IU U_j, transmits with the same fixed power P, and the RIS R_i, i=1,⋯,K, has N_i reflecting elements.
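For readers who prefer a concrete picture, the entities of this system model can be collected into a small simulation-style container. The sketch below is purely illustrative: the class and field names, the random placement routine and all numerical values (e.g., those standing in for P, T_s and r) are assumptions made here and are not part of the model description.

```python
from dataclasses import dataclass
from typing import List, Tuple
import random

@dataclass
class RIS:
    position: Tuple[float, float]
    num_elements: int              # N_i reflecting elements

@dataclass
class IU:
    position: Tuple[float, float]
    mean_off: float                # lambda_k: mean OFF (idle) period length
    mean_on: float                 # mu_k: mean ON (busy) period length
    busy: bool = False             # current state of the ON/OFF process

@dataclass
class Network:
    source: Tuple[float, float]
    destination: Tuple[float, float]
    riss: List[RIS]
    ius: List[IU]
    tx_power: float = 0.1          # P, identical for S and every IU (assumed value)
    slot: float = 1e-3             # T_s (assumed value)
    radius: float = 50.0           # maximum device-to-device range r (assumed value)

def random_network(K: int = 3, M: int = 4, span: float = 200.0) -> Network:
    """Drop K RISs and M IUs uniformly at random between S and D (illustrative only)."""
    riss = [RIS((random.uniform(0, span), random.uniform(-span / 4, span / 4)), 64)
            for _ in range(K)]
    ius = [IU((random.uniform(0, span), random.uniform(-span / 4, span / 4)), 0.5, 0.2)
           for _ in range(M)]
    return Network(source=(0.0, 0.0), destination=(span, 0.0), riss=riss, ius=ius)
```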
In general, each RIS is effectively controlled to adjust both the amplitude and phase of the incident waveform. However, for the sake of simplicity and mathematical tractability, the amplitude factor is set to unity and it is only the phase that is tuned or optimized <cit.>. Moreover, no direct LoS S→ D path exists and S relies on the IUs and/or RISs to communicate with D. Each IU U_j communicates with its own associated receiver and acts as a DF relay for other users, when idle. Lastly, each IU is equipped with a buffer of sufficient capacity and all the D2D pairs follow time-slotted synchronous communication <cit.>, with slot duration T_s. An example of this topology is presented in Fig. <ref>, for K=3 and M=4, respectively, where S can communicate with D via IUs and/or RISs. §.§ User Traffic Characterization It is noted that in a typical wireless communication scenario, data generally arrives in bursts to the users. As a result, the IUs U_1,U_2,⋯,U_M in this work are characterized by exponentially distributed OFF and ON period lengths, with means λ_k and μ_k, respectively. Without any loss of generality, T_s is assumed to be small in comparison with μ_k and λ_k <cit.>, which prevents U_k ∀ k=1,⋯,M from changing its status multiple times within a single T_s. Thus, if states 0 and 1 represent the IU being idle and busy, respectively, the IU activities are characterized by a discrete-time Markov chain (DTMC) with the state transition probabilities <cit.>: p_10=∫_0^T_s1/μ_ke^-a/μ_kda=1-e^-T_s/μ_k, p_11=1-p_10, p_01=∫_0^T_s1/λ_ke^-b/λ_kdb=1-e^-T_s/λ_k, and p_00=1-p_01. Accordingly, the state transition matrix 𝒫 is: 𝒫= [ p_00 p_01; p_10 p_11 ]= [ e^-T_s/λ_k 1-e^-T_s/λ_k; 1-e^-T_s/μ_k e^-T_s/μ_k ]. §.§ Channel Model Depending on the availability of the IUs, it is possible to connect S to D with or without taking the help of any RIS. When a RIS is being used, the signal from S can reach D by one of the following paths: (i) single reflection from any RIS before reaching D, (ii) double reflection, i.e., via two consecutive RIS elements on its path to D, and (iii) triple or more reflection, i.e., via three or more consecutive RIS elements on the way. In this work, due to large effective path loss, we neglect the aspect of triple or higher order reflections <cit.>. However, secondary reflections are not negligible in practice, especially in urban environments, where the RISs are not deployed too far from each other. This problem can be modelled as a graph, where the vertices have an edge if and only if the corresponding nodes can communicate, i.e., they reside within some threshold distance. We assume that the wireless links suffer from both large-scale path-loss effects and small-scale block fading. The channels S → U_j, U_j → D, and U_j → U_k ∀ j,k=1,⋯, M exhibit small-scale fading and their corresponding path-loss factors are ρ_ L^1/2d_SU^-α/2,ρ_ L^1/2d_UD^-α/2, and ρ_ L^1/2d_U_1,U_2^-α/2, respectively, where ρ_ L is the path-loss at one meter distance, α is the path-loss exponent and d_mn denotes the distance between m and n. Let h_S/UR_i∈ℂ^N× 1, h_R_iR_j∈ℂ^N× N, and h_R_jD/U∈ℂ^1× N denote the channel matrices from S or an IU to the i-th RIS, from the i-th to the j-th RIS (i ≠ j), and from the j-th RIS to an IU or D, respectively. In addition, the phase-shift matrix of the i-th RIS is denoted by Φ_i= diag(ϕ_1,⋯,ϕ_N) ∈ℂ^N × N, i.e., a diagonal matrix accounting for the response of the RIS elements, where ϕ_n=exp(jθ_n), n=1,⋯,N and θ_n ∈ [0,2π] is the phase shift applied by the RIS elements <cit.>.
Lastly, the total path-loss for each of these channel matrices is the product of the path-loss of each point-to-point link <cit.>. Accordingly, the effective channel gain in case of single and double reflection is h_R_iD/UΦ_ih_S/UR_i and h_R_jD/UΦ_jh_R_iR_jΦ_ih_S/UR_i, respectively. §.§ Energy Harvesting Model As stated earlier, it is for an idle IU U_j ∀ j=1,⋯,M to decide whether to act as a DF relay or not. Moreover, there must be some `reward' for the same, or else there is no point for the user to waste its own energy in transferring data packets from S to D. In this context, we assume that each U_j is equipped with an EH unit, which can extract DC power from the received electromagnetic waves <cit.>. If an idle U_j agrees to act as a relay, we incentivize it in the form of a reward, i.e., it is able to harvest energy from the incoming signal and use the same to transfer the received information. For a transmission power P, the power harvested at an IU is <cit.> P_ harv=M(1-e^-aPρ_ Ld^-α|h|^2)/(1+e^-a(Pρ_ Ld^-α|h|^2-b)), where M is the maximum harvested power corresponding to the saturated EH circuit, h is the complex channel gain, d is the associated distance, and finally, a and b are the respective circuit parameters. §.§ Delay-constrained Transmission Shannon capacity is the largest data rate at which the information can be transmitted with an arbitrarily small error probability, provided that the number of channel uses is infinitely large <cit.>. However, for applications such as delay-constrained scenarios, the number of channel uses cannot be very large. As a result, the error probability will not be arbitrarily small and it needs reconsideration. In such scenarios, the maximum instantaneous achievable data rate R is approximated as <cit.> R(γ)=log_2 (1+γ)-(Q^-1(ε)/ln 2)√((γ^2+2γ)/(M_b(1+γ)^2)), where γ is the signal to noise ratio (SNR), ε∈ [0,1] is the error probability, M_b is the number of channel uses, and Q(x)=(1/√(2π))∫_x^∞e^-t^2/2dt is the Gaussian Q function. For delay unconstrained scenarios, i.e., when M_b →∞, we have R(γ) →log_2 (1+γ). When a RIS is selected to pass the signal due to unavailability of idle IUs, we consider this achievable data rate R while searching for an IU in the next hop. § DRAMP: THE PROPOSED STRATEGY This section discusses the proposed multihop protocol DRAMP in detail, where the novelty lies in the joint IU traffic characteristics and double-RIS assisted dynamic framework. As we are considering a delay-constrained scenario, the data from S must reach D within time T_d in this set-up. Here we assume that a device cannot communicate with another beyond a distance r and S has α packets of information to send to D with φ bits in each. Pictorially, we connect the locations of S and D by a virtual straight line and consider it to be the x-axis. Accordingly, we consider another virtual line as the y-axis at S, which is perpendicular to the x-axis. We intend to connect S to D via some IUs/RISs. Firstly, we scan the right half circle of radius r at S to identify the idle IUs. After the idle IU identification, we decide on the appropriate modulation scheme and its corresponding energy requirement. Secondly, in case of multiple idle IU availability, we choose the appropriate IU based on the least remaining distance (LRD) from D and the acceptable delay constraint. If no idle IU is available, we identify a suitable RIS for the purpose. Moreover, we also provide an illustrative example of the proposed DRAMP.
Finally, we define two performance metrics, namely, throughput and energy efficiency. §.§ Identification of Idle Intermediate Users In this section, we identify, by beacon transmission <cit.>, the idle IUs within a radius r which can act as potential DF relays. We define Ω={U_1,⋯,U_ϵ} as the set of all U_js that are present in the right half circle of radius r centred at S, where ϵ≤ M and U_n=1/0, depending on whether the n^th IU is busy/idle. As we intend to reduce the LRD in each hop, we consider only the right half circle for identifying the potential relays. Accordingly, we define the set of idle/busy IUs as Ω_I={u_i^1,u_i^2,⋯,u_i^ϵ_I} and Ω_B={u_b^1,u_b^2,⋯,u_b^ϵ_B}, where ϵ_I+ϵ_B=ϵ denotes the total number of IUs in the concerned region. On the basis of the traffic characteristics of a particular idle (busy) IU, we estimate the time for which it continues to remain idle (busy) given that it is currently idle (busy). Functions κ_I and κ_B estimate these values, respectively. Duration of Idleness (DoI) ν_I: It is the time duration during which a particular IU is estimated to be idle, given that it is currently idle. From the transition probability matrix (<ref>), we know that p_00=e^-T_s/λ_k. As we are considering exponentially distributed idle and busy periods, due to the memoryless property <cit.>, we obtain ν_I for an acceptable error threshold δ as p_00^ν_I≥ 1-δ⟹ν_I≤(λ_k/T_s)ln( 1/(1-δ)). Hence we have κ_I:Ω_I ⟶φ_I, where φ_I={ν_I^1,ν_I^2,⋯,ν_I^ϵ_I}. Similarly, we define a metric `Duration of Busyness' (DoB). Duration of Busyness (DoB) ν_B: It is the time duration during which a particular IU is estimated to be busy, given that it is currently busy. By adopting a similar procedure as DoI, the DoB ν_B is obtained as p_11^ν_B≥ 1-δ⟹ν_B≤(μ_k/T_s)ln( 1/(1-δ)) and thus, κ_B:Ω_B ⟶φ_B, where φ_B={ν_B^1,ν_B^2,⋯,ν_B^ϵ_B}. It is to be noted that both κ_I and κ_B, which map the IU availability to ν_I and ν_B, are necessarily IU traffic characteristic dependent functions. §.§ Adaptive Modulation at the Intermediate Users The objective of this work is not only to transfer data from S to D, but also to do so with minimum energy consumption. In the process of doing so, we introduce the aspect of rate adaptation at the IUs. Moreover, the idle IUs employ rate adaptation if and only if there is a direct connection between the users and there is no RIS used to connect them[The aspect of RIS-enabled rate adaptation is not considered here, as this is beyond the scope of the current work. However, the proposed framework can also be extended to such scenarios with suitable adjustments like optimal RIS beam alignment <cit.>.]. The IUs adopt a modulation scheme m_r from the set 𝕄={m_1,m_2,⋯,m_|𝕄|}, which corresponds to a data rate D_r=log_2(m_r) ∀ r=1,⋯,|𝕄|. The choice of m_r depends on the wireless channel between the two consecutive IUs, where the complex channel gain h, bit error rate (BER) P_b, constellation size m_r, and the received power Pρ_ Ld^-α|h|^2 are related as <cit.>: P_b=c_1 exp( -c_2 Pρ_ Ld^-α|h|^2/(σ^2 (m_r^c_3-c_4))). Here σ^2 is the noise power, and c_1,⋯,c_4 are modulation-specific constants, respectively. By considering a transmission power P and a modulation scheme m_r, the time required for complete transfer of α information packets with φ bits in each is τ_req(r)=⌈αφ/D_r⌉ slots, where ⌈·⌉ denotes the ceiling function.
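The quantities introduced so far — the ON/OFF transition probabilities of Section II-B, the DoI/DoB estimates and the BER-driven constellation choice — map directly onto a few helper functions. The following sketch is only meant to make these definitions concrete; the constants c_1,…,c_4, the candidate constellation set and the helper names are our own placeholder assumptions rather than values taken from this work.

```python
import math

def transition_matrix(T_s, lam, mu):
    """ON/OFF DTMC of one IU (state 0 = idle, 1 = busy): [[p00, p01], [p10, p11]]."""
    p01 = 1.0 - math.exp(-T_s / lam)      # idle -> busy
    p10 = 1.0 - math.exp(-T_s / mu)       # busy -> idle
    return [[1.0 - p01, p01], [p10, 1.0 - p10]]

def doi(T_s, lam, delta):
    """Duration of Idleness nu_I (in slots): largest horizon with p00^nu >= 1 - delta."""
    return (lam / T_s) * math.log(1.0 / (1.0 - delta))

def dob(T_s, mu, delta):
    """Duration of Busyness nu_B (in slots), obtained in the same way from p11."""
    return (mu / T_s) * math.log(1.0 / (1.0 - delta))

def ber(P_rx, sigma2, m, c=(0.2, 1.5, 1.0, 1.0)):
    """P_b = c1*exp(-c2*P_rx / (sigma2*(m^c3 - c4))); c1..c4 are placeholder constants."""
    c1, c2, c3, c4 = c
    return c1 * math.exp(-c2 * P_rx / (sigma2 * (m ** c3 - c4)))

def pick_constellation(P_rx, sigma2, ber_target, M_set=(4, 16, 64, 256)):
    """Largest constellation in the candidate set whose predicted BER meets the target."""
    feasible = [m for m in M_set if ber(P_rx, sigma2, m) <= ber_target]
    return max(feasible) if feasible else None

def tau_req(alpha_pkts, phi_bits, m):
    """Slots needed to deliver alpha*phi bits at D_r = log2(m) bits per slot."""
    return math.ceil(alpha_pkts * phi_bits / math.log2(m))
```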
Accordingly, if P_ proc is the processing power, the corresponding energy required for complete information transfer is characterized as E_ req(r)=(P+P_ proc)τ_ req(r)T_s, which is not a fixed quantity, but a function of the chosen constellation size. §.§ Selection of Appropriate Intermediate Users Till now, we have identified the pool of IUs that can be leveraged upon to act as DF relays in forwarding the information from S to D. Here we identify the most appropriate one of them, which suits our objective. In this context, S chooses the IU U_ tx from Ω_I to act as a relay, which is obtained by solving the optimization problem P1: (P1): minimize_q,U_i τ_req(q) subject to C1: U_i ∈Ω_I, C2: τ_req(q)≤κ_I(U_i), C3: m_q+1∈𝕄. U_ tx is the desired U_i from Ω_I that minimizes the objective function in P1 by employing the modulation scheme m_q+1 from the set 𝕄. Moreover, while C1 implies that U_ tx is one of the idle IUs within a distance r, C2 guarantees that the complete information transfer occurs in one go and C3 assures that the chosen modulation scheme is from 𝕄. Furthermore, as an additional constraint, we also state that U_ tx has to be chosen in a direction that reduces the LRD to D. Finally, minimizing τ_req(q) implies considering the maximum possible value of D_ r, i.e., only the idle IU with the best channel condition is chosen from Ω_I. §.§.§ Computational Complexity of P1 As the optimization problem P1 is combinatorial in nature, it does not have a `closed-form' solution. P1 chooses U_ tx only within the radius r, i.e., the search space consists of a finite ϵ_I number of IUs. Moreover, the adaptive modulation selection process is a non-iterative look-up table based approach. Thus, unlike a brute-force method, here the complexity of obtaining U_ tx in P1 is 𝒪(ϵ_I). Hence, obtaining U_ tx is not computationally expensive, as depending on the radius r, ϵ is typically in the range of 20 to 30 and always ϵ_I<ϵ holds. However, at times we have τ_req(q) > κ_I(U_i), i.e., the DoI corresponding to U_i is less than the time required by U_i for continuous complete information transfer by using a constellation of size m_q+1. As it is necessary to complete the entire information transfer in a single phase, S decides to avoid transmission and waits for an interval of η_w slots. §.§.§ Calculation of η_w In such scenarios, S identifies the pool of busy IUs in the right half circle of radius r, i.e., Ω_B from (<ref>). Accordingly, it estimates the time interval of η_ idle, n slots after which U_n ∈Ω_B will become idle, i.e., we have U_n ∈Ω_I after η_ idle, n slots. This can be effectively modelled as a geometric distribution <cit.>, where we map the event of U_n being idle and busy as success and failure, respectively. Therefore, given that U_n ∈Ω_B, we are interested in finding out the number of trials required till the first success, which in this case, is η_ idle, n. From the transition probability matrix 𝒫 in (<ref>), we obtain the probability of success in the η_ idle, n-th slot as p_11^η_ idle, n-1p_10, which needs to be greater than an acceptable threshold probability p_ th, i.e., p_11^η_ idle, n-1p_10≥ p_ th⟹( e^-T_s/μ_k)^η_ idle, n-1( 1-e^-T_s/μ_k) ≥ p_ th, which after some trivial algebraic manipulations yields η_ idle, n≤(μ_k/T_s)ln( ((1-e^-T_s/μ_k)/p_ th)^+) +1. Here we define x^+=max(x,1) and as a result, S chooses not to transmit for η_w slots, where η_w= max_U_n ∈Ω_Bη_ idle, n. Therefore, we have the reduced delay bound T_d^', where T_d^'=T_d- η_w T_s.
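A compact way to see how P1 and the back-off interval η_w interact is the following sketch. The candidate records are assumed to have been pre-filtered to users that reduce the LRD towards D and to already carry the τ_req and DoI values of the previous sketch (C3 is folded into that pre-computation); none of this is meant as the definitive implementation of the protocol.

```python
import math

def solve_p1(candidates):
    """Pick the idle IU that minimises the transfer time tau_req.

    candidates: list of dicts for idle IUs satisfying the LRD requirement (C1), each
    carrying 'tau' (slots needed at its best feasible constellation) and 'doi'
    (its Duration of Idleness in slots).  Returns None if P1 has no solution.
    """
    feasible = [c for c in candidates if c['tau'] <= c['doi']]        # C2
    return min(feasible, key=lambda c: c['tau']) if feasible else None

def eta_w(busy_mus, T_s, p_th):
    """Deferral interval (in slots) when P1 is infeasible: max over busy IUs of eta_idle,n."""
    def eta_idle(mu):
        x = (1.0 - math.exp(-T_s / mu)) / p_th
        return (mu / T_s) * math.log(max(x, 1.0)) + 1                 # the (.)^+ = max(., 1) clipping
    return max(eta_idle(mu) for mu in busy_mus)

# If solve_p1(...) returns None, S defers and the delay budget shrinks:
# T_d_prime = T_d - eta_w(busy_mus, T_s, p_th) * T_s, after which the scan is repeated.
```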
Hence, we can say that the acceptable delay bound becomes tighter with every transmission deferral. In this context, a detailed delay analysis is provided later in Section <ref>. Now after a time interval of η_w slots, S again scans the right half circle of radius r to classify the idle IUs and accordingly, it solves P1 again to obtain U_ tx. If even now the solution is a null set, S proceeds with the selection of an RIS as the next hop. We adopt a cooperative framework in this work, i.e., a particular IU agrees to act as a DF relay whenever it is idle. Scenarios where an idle IU may not agree to act as a relay in spite of being idle, or where it chooses a particular S-D pair in case of multiple requests (based on its own preferences), are not considered here but are left for future work. §.§.§ Harvested Energy As stated earlier, we incentivize the selected IU in terms of harvested energy, i.e., U_ tx harvests energy from the signal it received from S. From (<ref>), when a particular constellation m_r is selected, we obtain the harvested energy at U_ tx as E_ harv(r)=M(1-e^-aPρ_ Ld_SU^-α|h_SU|^2)/(1+e^-a(Pρ_ Ld_SU^-α|h_SU|^2-b))·τ_ req(r), where h_SU∼𝒞𝒩(0,1) is the channel gain and d_SU^-α is the corresponding path-loss factor. Moreover, we observe from (<ref>) that the harvested energy is a function of the chosen constellation. Furthermore, P1 chooses the IU with the best channel condition and as E_ harv is a monotonic function of |h_SU|^2, this results in better EH performance at U_ tx. This entire procedure of identifying idle IUs within radius r to act as relays is described later pictorially in Fig. <ref>. However, if only IUs are made to act as relays, then the information transfer from S to D will solely rely on the IU traffic characteristics and activities. To overcome this problem, we take the help of the already strategically placed RISs in the surroundings <cit.>, which always guarantee communication in the absence of appropriate IUs. This also implies that the locations of the RISs are known to all the IUs in the surroundings. §.§ Identification of RIS in case of Idle IU Unavailability In case idle IUs are not available, S searches for an RIS within the right half circle of radius r, which reduces the LRD for the next hop. The location of the idle IU in the proximity of the RIS is known to S by the reverse path forwarding procedure <cit.> via the RIS, when an IU receives the beacon signal transmitted by S. By assuming that there are L-1 device pairs being supported by that particular RIS and there exists an idle IU in the right half circle of radius r that satisfies C1 and C2 of P1, the signal-to-interference-plus-noise-ratio (SINR) corresponding to this particular device pair is γ_ sr=P|h_R_iD/UΦ_ih_S/UR_i|^2 d_R_iD/U^-αd_S/UR_i^-αρ_ L^2/(∑_l=1, l ≠ i^L Pρ_ L^2|h_R_lD/UΦ_lh_S/UR_l|^2d_R_lD/U^-αd_S/UR_l^-α + σ^2), where σ^2 is the variance of the circularly symmetric zero mean additive white Gaussian noise (AWGN), h_R_iD/UΦ_ih_S/UR_i is the effective channel gain between S and the idle IU, and ρ_ L^2d_R_iD/U^-αd_S/UR_i^-α is the effective path-loss factor as defined in Section <ref>. Accordingly, the phase shift matrix of this particular RIS is optimized, such that the sum throughput of all the users being served by the RIS is maximized.
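For concreteness, the single-reflection SINR γ_sr above can be evaluated numerically as follows; the channel realisations, distances and parameter values in this sketch are placeholders chosen purely for illustration and do not correspond to the numerical set-up of this work.

```python
import numpy as np

def cascaded_gain(h_in, theta, h_out):
    """|h_out · diag(e^{j*theta}) · h_in|^2 for one RIS with phase vector theta."""
    return np.abs(np.sum(h_out * np.exp(1j * theta) * h_in)) ** 2

def sinr_single_reflection(P, rho_L, alpha, desired, interferers, sigma2):
    """gamma_sr of the desired pair; each link is a tuple (d_in, d_out, h_in, theta, h_out)."""
    def rx_power(link):
        d_in, d_out, h_in, theta, h_out = link
        return P * rho_L**2 * d_in**(-alpha) * d_out**(-alpha) * cascaded_gain(h_in, theta, h_out)
    return rx_power(desired) / (sum(rx_power(l) for l in interferers) + sigma2)

# Hypothetical single-interferer example with N = 64 elements and un-optimised phases
rng = np.random.default_rng(0)
N = 64
ch = lambda: (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
desired = (30.0, 25.0, ch(), np.zeros(N), ch())
other = (40.0, 35.0, ch(), np.zeros(N), ch())
gamma_sr = sinr_single_reflection(P=0.1, rho_L=1e-3, alpha=3.0,
                                  desired=desired, interferers=[other], sigma2=1e-12)
```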
Moreover, note that we are considering a delay-constrained scenario here, i.e., the data rate that we aim to maximize is obtained by replacing γ=γ_ sr in (<ref>) as R(γ_ sr)=log_2 (1+γ_ sr)-(Q^-1(ε)/ln 2)√((γ_ sr^2+2γ_ sr)/(M_b(1+γ_ sr)^2)), where M_b, denoting the total number of channel uses, is a finite quantity and ε is the acceptable probability of error. For the scenario of a single user system, (<ref>) reduces to γ_ sr=P|h_R_iD/UΦ_ih_S/UR_i|^2 ρ_ L^2d_R_iD/U^-αd_S/UR_i^-α/σ^2, where we have h_R_iD/U=[ ζ_1e^-jθ_ζ,1, ⋯,ζ_Ne^-jθ_ζ,N], h_S/UR_i=[ ω_1e^-jθ_ω,1, ⋯,ω_Ne^-jθ_ω,N], and Φ_i= diag(ϕ_1,⋯,ϕ_N) with ϕ_n=exp(jθ_n), n=1,⋯,N. Intuitively, it appears that the optimal choice of Φ_i which maximizes γ_ sr is θ_n=θ_ζ,n+θ_ω,n ∀ n=1,⋯,N and γ_ sr^ opt=P ( ∑_i=1^N ζ_iω_i )^2 ρ_ L^2 d_R_iD/U^-αd_S/UR_i^-α/σ^2. Furthermore, the received power at the IU is P|h_R_iD/UΦ_ih_S/UR_i|^2 ρ_ L^2d_R_iD/U^-αd_S/UR_i^-α and accordingly, the harvested power, in this case, is obtained from (<ref>), i.e., P_ harv=M(1-e^-aP|h_R_iD/UΦ_ih_S/UR_i|^2 ρ_ L^2d_R_iD/U^-αd_S/UR_i^-α)/(1+e^-a(P|h_R_iD/UΦ_ih_S/UR_i|^2 ρ_ L^2d_R_iD/U^-αd_S/UR_i^-α-b)), which can be further analytically characterized depending on the probability distribution of h_R_iD/U and h_S/UR_i, respectively <cit.>. Finally, based on (<ref>), the maximum harvested power at the IU for a single user system is P_ harv^ opt=M(1-e^-aP ( ∑_i=1^N ζ_iω_i )^2ρ_ L^2 d_R_iD/U^-αd_S/UR_i^-α)/(1+e^-a(P ( ∑_i=1^N ζ_iω_i )^2 ρ_ L^2d_R_iD/U^-αd_S/UR_i^-α-b)). The above approach of phase shift matrix optimization can be generalized for any number of users. However, there are already significant works <cit.> in the literature that address this problem. On the other hand, our contribution lies in DRAMP, which is a RIS assisted multihop routing protocol for a D2D communication network. Please note that it may happen that there is no idle IU in the right half circle of radius r centred at the RIS which satisfies the desired criteria mentioned in C1 and C2 of P1. However, we are sure to find another RIS, as we assume that the RISs are already strategically placed in the surroundings <cit.>. Hence, the RIS directs the signal towards this newly found RIS in its range, which is expected to have an idle IU within its right half circle area of radius r. Accordingly, the received power at this idle IU is P_ dr =P|h_R_jD/UΦ_jh_R_iR_jΦ_ih_S/UR_i|^2ρ_ L^3d_R_jD/U^-αd_S/UR_i^-αd_R_iR_j^-α and by assuming that there are L-1 D2D pairs being served by this RIS at this point in time, the resulting doubly reflected SINR in this case is evaluated as γ_ dr=P|h_R_jD/UΦ_jh_R_iR_jΦ_ih_S/UR_i|^2×ρ_ L^3d_R_jD/U^-αd_S/UR_i^-αd_R_iR_j^-α/(∑_l=1, l ≠ i^L P|h_R_jD/UΦ_jh_R_lR_jΦ_lh_S/UR_l|^2×ρ_ L^3d_R_jD/U^-αd_S/UR_l^-αd_R_lR_j^-α+ σ^2). Our aim is to maximize R(γ_ dr), which is obtained by replacing γ=γ_ dr in (<ref>). As stated earlier in Remark <ref>, we are not proposing any phase shift matrix optimization here and thus, we optimize this matrix by using any of the existing techniques. Lastly, the harvested power, in this context of a doubly RIS reflected signal, is obtained from (<ref>) as P_ harv=M(1-e^-aP_ dr)/(1+e^-a(P_ dr-b)), where P_ dr is the received power as defined in (<ref>). Once the information reaches an idle IU in one of the hops, then the process of identifying the next idle IU or the nearby RIS arises, which has already been described above.
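A minimal sketch of the single-user expressions above — the phase-aligned SNR γ_sr^opt, the finite-blocklength rate R(γ) and the non-linear EH model — is given below. The EH circuit parameters M, a and b used here are assumed values, not ones taken from this work.

```python
import math
from statistics import NormalDist

def aligned_snr(P, rho_L, alpha, d_in, d_out, zeta, omega, sigma2):
    """Single-user SNR when theta_n = theta_zeta,n + theta_omega,n, i.e.
    gamma_sr^opt = P*(sum_n zeta_n*omega_n)^2 * rho_L^2 * d_in^-alpha * d_out^-alpha / sigma^2,
    where zeta_n, omega_n are the amplitudes of the two cascaded channels."""
    gain = sum(z * w for z, w in zip(zeta, omega)) ** 2
    return P * gain * rho_L**2 * d_in**(-alpha) * d_out**(-alpha) / sigma2

def finite_blocklength_rate(gamma, eps, M_b):
    """R(gamma) = log2(1+gamma) - (Q^{-1}(eps)/ln 2)*sqrt((gamma^2+2*gamma)/(M_b*(1+gamma)^2))."""
    q_inv = NormalDist().inv_cdf(1.0 - eps)                       # Q^{-1}(eps)
    dispersion = (gamma**2 + 2.0 * gamma) / (M_b * (1.0 + gamma)**2)
    return math.log2(1.0 + gamma) - (q_inv / math.log(2.0)) * math.sqrt(dispersion)

def harvested_power(P_rx, M=0.02, a=150.0, b=0.014):
    """Non-linear EH model P_harv = M*(1-exp(-a*P_rx)) / (1+exp(-a*(P_rx-b)));
    M, a, b are circuit-dependent placeholders."""
    return M * (1.0 - math.exp(-a * P_rx)) / (1.0 + math.exp(-a * (P_rx - b)))
```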
This entire process of information transfer from S to D within a time limit of T_d is the proposed protocol DRAMP, which has been pictorially presented by a concise and compact flowchart in Fig. <ref>. §.§ Illustrative Example of DRAMP A specific scenario is demonstrated in Fig. <ref>, where there is no direct LoS link between S and D and the corresponding delay constraint for the complete data transfer is T_d. To select the intermediate hops (IU/RIS), we consider a semi-circle of radius r at S in the LRD direction from S to D. We observe from the figure that there are no available IUs within the half-circle C_1 which satisfy the conditions stated in P1. Hence, S estimates the waiting time interval of η_w slots and the delay constraint T_d is accordingly updated as T_d^'=T_d- η_w T_s. After waiting for η_w slots, S again scans the semi-circle of radius r but still it cannot find an appropriate IU that can act as a relay. Accordingly, S does not wait any further but observes that RISs R_1, R_9 and R_10 can act as potential reflectors for the first hop. As the reflected ray, via R_1, traverses the maximum distance in the LRD direction, R_1 is selected among the three. Since r is the coverage distance for each RIS and idle U_2,U_3 lie inside this region, P1 is solved over these two IUs to select U_3 as the next hop. By adopting a similar technique, we consider a right half circle of radius r at U_3 to observe that there are no idle IUs in this region and R_3 is the only RIS that reduces the LRD from U_3 to D. Moreover, R_4 is selected to act as the next hop of this framework due to two reasons: firstly, there are no idle IUs available in the concerned region and secondly, as stated earlier, double reflections are non-negligible in practice. As U_5 covers the LRD towards D and three consecutive RIS selections result in significant signal degradation <cit.>, U_5 is chosen to act as the next relay node. By a similar logic, U_7 and U_9 are selected to act as the corresponding hops of the proposed framework. Therefore, the signal from S reaches D within a time limit of T_d by using the path S,R_1,U_3,R_3,R_4,U_5,U_7,U_9,D. §.§ Performance Measures of DRAMP Here we define the metrics, namely data throughput and energy efficiency, which will be used to quantify the performance of DRAMP. §.§.§ Data throughput 𝒟_𝒯 By assuming that DRAMP chooses 𝒳 IUs and a certain number of RISs to connect an S-D pair in 𝒳+1 hops, the data throughput is defined as 𝒟_𝒯=1/∑_i=1^𝒳+1[ (1-a_i)/(( 1-P_b ) m_i)+a_i/R_i(γ_i)], where a_i=1 if the i-th hop involves a RIS, and a_i=0 otherwise. Here P_b is the BER and m_i is the constellation size as stated in (<ref>). Moreover, in case we have a RIS selected due to idle IU unavailability, R_i is the corresponding achievable data rate as defined in (<ref>). §.§.§ Energy efficiency ℰ_ eff Similar to data throughput, the system energy efficiency for transferring α packets of data with φ bits in each packet, when an S-D pair gets connected in 𝒳+1 hops, is defined as ℰ_ eff=αφ/[( P+P_ proc)T_s∑_i=1^𝒳+1τ_i-T_s∑_j=1^𝒳P_ harv(j)τ_j], where P and P_ proc are the transmission and processing powers, respectively, as defined in (<ref>). Moreover, τ_i is the time required (in slots) for complete information transfer in the i-th hop and finally, based on the channel condition, P_ harv(j) is the harvested power at the j-th IU when the channel between the (j-1)-th and j-th IU is being used for information transfer.
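The two performance measures can be computed from a per-hop description of the selected route, as in the following sketch; the hop records and the example numbers are hypothetical and only serve to make the definitions of 𝒟_𝒯 and ℰ_eff concrete.

```python
def data_throughput(hops, P_b):
    """D_T = 1 / sum_i [ (1-a_i)/((1-P_b)*m_i) + a_i/R_i(gamma_i) ] over the X+1 hops.

    hops: per-hop records; an IU hop is {'ris': False, 'm': m_i},
    a RIS hop is {'ris': True, 'R': R_i(gamma_i)}."""
    inv = 0.0
    for hop in hops:
        inv += 1.0 / hop['R'] if hop['ris'] else 1.0 / ((1.0 - P_b) * hop['m'])
    return 1.0 / inv

def energy_efficiency(alpha_pkts, phi_bits, P, P_proc, T_s, tau, P_harv):
    """E_eff = alpha*phi / [ (P+P_proc)*T_s*sum(tau_i) - T_s*sum(P_harv(j)*tau_j) ].

    tau: slots spent on each of the X+1 hops; P_harv: harvested power at each of the
    X relaying IUs (paired here with the first X entries of tau as an assumption)."""
    spent = (P + P_proc) * T_s * sum(tau)
    recovered = T_s * sum(ph * t for ph, t in zip(P_harv, tau))
    return alpha_pkts * phi_bits / (spent - recovered)

# Example: a route S -> RIS -> IU -> D with 16-QAM on the IU hops (numbers are made up)
route = [{'ris': True, 'R': 3.2}, {'ris': False, 'm': 16}, {'ris': False, 'm': 16}]
D_T = data_throughput(route, P_b=1e-3)
```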
§ DELAY ANALYSIS As stated earlier in Section III-C, here we investigate DRAMP in terms of the delay for information transfer from S to D. Based on the choice of IUs and RISs as hops of this framework, we can have the following scenarios: (i) only IUs (both busy and idle) and (ii) both IUs and RISs are being used. Now we look at all the scenarios in detail. §.§ Only IUs Here we observe that the information packets from S reach D through finite number of busy/idle IUs and no RIS is being used in this scenario. On arrival of information, S immediately locates an idle IU U_1 within the right half circle of radius r, which is also in the LRD direction and can act as a DF relay. However, if S cannot locate an idle IU in the desired region that satisfies the constraints from P1 in (<ref>), before proceeding with a RIS, S waits for a finite amount of time to identify the next node in the proposed framework. We assume that there must be an IU that meets the constraints in P1 within this waiting time. In this context, we estimate the maximum acceptable waiting time T_d_i at IU U_i, such that the overall delay constraint of time T_d is not violated. Let l be the Euclidean distance from S to D and any IU can transmit to a maximum distance r. Hence, the minimum number of hops required to send a data packet from S to D is Ψ=⌈l/r⌉. Therefore, the maximum acceptable delay and actual delay at S is T_d_0=T_d/Ψ and t_0, respectively. In case the actual waiting time t_i at U_i is less than T_d_i, we propose that the leftover waiting time T_d_i-t_i is carried forward to U_i+1, i.e., the enhanced maximum acceptable delay at U_i+1 is T_d_i+1≡ T_d_i+1+(T_d_i-t_i). In this work, we consider a scenario, where IUs cannot communicate beyond a distance r, i.e., the IUs do not have a global knowledge of the system topology. Therefore, U_i passes on T_d_i-t_i to U_i+1, as it is unaware of exactly how many hops will be required in DRAMP for the complete information transfer from S to D. Accordingly, we characterize T_d_i as T_d_i=T_d-∑_k=0^i-2β_kt_k-T_d_i-1/Ψ-Ψ_i+(T_d_i-1-β_i-1t_i-1) i≥ 2, where Ψ_i=⌊||S-U_i||/r⌋, ⌊·⌋ is the floor function, T_d_0=T_dΨ, T_d_1=T_d-T_d_0Ψ-Ψ_1+(T_d_0-β_0t_0), and β_i= 0, Idle U_i , 1, Busy U_i. Moreover, based on the value of β corresponding to a particular IU, we can have the following extreme cases: * β_z=0 ∀ z<i, i.e., no waiting at U_1,⋯,U_i-1 and * β_z=1 ∀ z<i, i.e., waiting at U_1,⋯,U_i-1. Both these cases are investigated below. §.§ Case I: β_z=0 ∀ z<i. Here we investigate the scenario when the information from S has not faced any waiting at U_1,⋯,U_i-2,U_i-1 till U_i. Accordingly, the maximum acceptable delay at U_i is given by the following theorem. Without any delay in information transfer from S through the IUs U_1,⋯,U_i-2,U_i-1, the maximum acceptable delay at U_i is given by T_d_i = T_d ( ∑_p=1^i (1/Ψ-Ψ_p) ∏_q=p+1^i ( 1-1/Ψ-Ψ_q) . + .1/Ψ∏_n=1^i ( 1-1/Ψ-Ψ_n) ). See Appendix <ref>. We observe from Theorem <ref> that the maximum acceptable delay T_d_i at IU U_i is expressed in terms of the overall delay constraint T_d. Furthermore, it can also be observed that T_d_i increases monotonically with i, i.e., T_d_i≤ T_d_i+1 ∀ i. §.§ Case II: β_z=1 ∀ z<i. This implies that the information from S has suffered delay at all the IUs till U_i, i.e., in this case, we obtain the maximum acceptable delay at U_i from (<ref>) as T_d_i=T_d-∑_k=0^i-2t_k-T_d_i-1/Ψ-Ψ_i+(T_d_i-1-t_i-1) i≥ 2, where T_d_0=T_dΨ and T_d_1=T_d-T_d_0Ψ-Ψ_1+(T_d_0-t_0). 
By simplifying T_d_1 in (<ref>), we obtain T_d_1 (a)= T_d ( 1/Ψ-Ψ_1( 1-1/Ψ) + 1/Ψ) -μ_1 ln( (1-e^-T_s/μ_1/p_ th)^+) -T_s (b)= T_d ( 1/Ψ-Ψ_1( 1-1/Ψ) + 1/Ψ) -μ_1 ln( (T_s/μ_1p_ th)) -T_s, where (a) follows from T_d_0=T_dΨ and (<ref>). Furthermore, (b) follows from the Taylor expansion of e^-T_s/μ_1 and neglecting its higher order terms as we know from Section <ref> that T_s/μ_1<1 and finally, by assuming T_s/μ_1p_ th>1. Note that for x ≥ 1, xln( 1x) is a monotonically decreasing function with its maxima at x=1, when xln( 1x)=0. Hence, we can observe from (<ref>), that T_d_1 increases with μ_1 when other parameters are constant. It is interesting to observe, that the same intuition is also provided by the term p_10 in transition probability matrix stated in (<ref>). Instead of making a claim specifically with respect to U_1 as in Remark <ref>, we can make a generalization as follows. From (<ref>), we obtain T_d_i =T_d-T_d_i-1Ψ-Ψ_i+T_d_i-1-∑_k=0^i-2t_kΨ-Ψ_i-t_i-1 i≥ 2 =T_d-T_d_i-1Ψ-Ψ_i+T_d_i-1- ( i-1/Ψ-Ψ_i+1 )T_s - ( 1/Ψ-Ψ_i∑_k=1^i-1μ_k ln( T_s/μ_kp_ th) +μ_i ln( T_s/μ_ip_ th) )_function of μ_1,⋯,μ_i, which is based on (<ref>). We observe from (<ref>) that T_d_i is a joint function of μ_1,⋯,μ_i. This implies that the average acceptable time delay at an arbitrary intermediate user is dependent on the traffic characteristics of all the previous intermediate users. Since we have investigated the two extreme scenarios, i.e., no delay at U_1,⋯,U_i-1 and waiting at all of U_1,⋯,U_i-1, the actual T_d_i corresponding to U_i will be in between the two. The reason behind this observation is attributed to the practical scenario of β=0 for some of the IUs and β=1 otherwise. §.§ RIS and IUs This is the most general scenario, which involves IUs, both idle and busy, and RISs in the process of information transfer from S to D. In this setup, if S has information packets to transfer, first it will search for idle IUs within the right half circle of radius r in the LRD direction. If one of the IUs is available that meets all the constraints of P1 within the acceptable waiting time, this particular IU will serve as the DF relay node but otherwise, S goes for a RIS. This process of IU or RIS selection is repeated at each hop until D is reached. In this context, we come across namely two types of delays at an arbitrary IU U_i as follows. * Delay t_i': waiting time at U_i, when it transfers the data packets to the following IU and * Delay t_i”: time after which U_i chooses an RIS to do the transfer due to reasons such as the unavailability of an appropriate IU or suitable channel conditions. It is to be noted that both t_i' and t_i” occur at U_i for information transfer to U_i+1. Towards this direction, U_i after suffering a delay t_i”, immediately chooses a RIS and if there are no suitable IU in the LRD direction of the RIS, it immediately directs the signal to its adjacent RIS. This is based on the fact that double-RIS aided information transfer is a non-negligible phenomenon <cit.> and the corresponding channel model is described in Section <ref>. Moreover, we assume that the RISs are strategically placed <cit.> and as a result, there is always another RIS in the right half circle of radius r. Furthermore, as the RISs are already optimally placed, this guarantees the presence of an idle IU in the LRD direction of the second RIS. Furthermore, in the context of maximum acceptable delay calculation at an IU, here also we propose T_d_i-t_i of being carried forward to only U_i+1. 
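Returning briefly to the earlier remark on the dependence of T_d_1 on μ_1 in case (b): the snippet below simply evaluates T_d_1 = T_d((1/(Ψ-Ψ_1))(1-1/Ψ)+1/Ψ) - μ_1 ln(T_s/(μ_1 p_th)) - T_s on a grid of μ_1 values. The threshold p_th and the grid are illustrative choices of mine, restricted to a sub-range of (T_s, T_s/p_th) on which the predicted growth with μ_1 is visible.

import numpy as np

def T_d1_case2(mu_1, T_d=50e-3, T_s=100e-6, p_th=0.01, Psi=9, Psi_1=0):
    geom = T_d * ((1.0 / (Psi - Psi_1)) * (1 - 1.0 / Psi) + 1.0 / Psi)
    return geom - mu_1 * np.log(T_s / (mu_1 * p_th)) - T_s

mu_grid = np.linspace(4e-3, 9.5e-3, 6)     # keeps T_s < mu_1 < T_s/p_th = 10 ms
vals = T_d1_case2(mu_grid)
print(np.round(1e3 * vals, 3))             # T_{d_1} in ms
assert np.all(np.diff(vals) > 0)           # larger mu_1 gives a larger acceptable delay here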
With the motivation and the framework for this variant already stated in the previous section, we proceed along similar lines to obtain the maximum acceptable delay T_d_i corresponding to U_i. T_d_i =T_d-∑_k=0^i-2 c_kt'_k-∑_k=0^i-2 (1-c_k)t”_k-T_d_i-1Ψ-Ψ_i +(T_d_i-1-(c_i-1t'_i-1+(1-c_i-1) t”_i-1)) i ≥ 2, and T_d_1=T_d-T_d_0Ψ-Ψ_1 +(T_d_0-(c_0t'_0+(1-c_0) t”_0)), where c_i= 1, for D2D delay , 0, else Note that for a particular U_i, both t_i' and t_i” cannot exist at the same time. Hence in this section, we have analyzed the transmission delay for all the possible cases of the proposed framework. Finally, Table <ref> presents a summary of the main analytical results derived in this section. § NUMERICAL RESULTS In this section, we carry out extensive simulations to validate the performance of DRAMP and also compare with the nearest existing approach. Here we consider data transmission in a Rician fading scenario, where we assume Rician factor K=10 dB. The default parameters considered are: slot duration T_s=100 μs, IU transmission power P=30 dBm, IU processing power P_ proc=10 dBm, pathloss at one meter distance ρ_ L=10^-3.53, pathloss exponent α=4.2 between two consecutive IUs and α=2 elsewhere, the number of elements in each RIS N=250, and acceptable delay bound T_d=50 ms. The parameters related to energy harvesting at the IUs are M=24 mW, a=150, and b=0.014 <cit.>. Based on (<ref>), we consider M-ary quadrature amplitude modulation (M-QAM) transmission between consecutive IUs and a BER of P_b=10^-6 results in Pρ_ Ld^-α|h|^2σ^2=9.6724(m-1), where m is modulation scheme as defined in Section <ref>. Accordingly, we obtain the transmission modes (TM) as stated in Table <ref> and these TM are used in this section. Moreover, in this work, when an RIS is chosen due to the unavailability of IUs, we consider parameters M_b=1000 and ε=10^-4. Next we demonstrate the performance of DRAMP and also validate the proposed analytical framework against Monte Carlo simulation. Finally, we compare DRAMP with the existing benchmark approaches. §.§ Performance of DRAMP An illustration of the DRAMP-based multihop trajectory is presented in Fig. <ref>. In this scenario, we consider a two dimensional squared area of 400 × 400 m^2, where the IUs and the RISs are randomly and strategically placed, respectively. Moreover, we assume an IU coverage of 60 m, where `coverage' refers to the maximum distance at which a particular IU can communicate. We consider two separate instances, when a randomly selected S-D pair wants to communicate. In the process of doing so, DRAMP establishes a multihop connection, which effectively brings out the advantages of the proposed protocol as follows. * IUs are preferred over RISs in establishing the S-D connection, i.e., the figure demonstrates that if IUs are available to act as relays without violating the delay constraint, RISs are completely overlooked by DRAMP. * On the other hand, the RISs are considered as an option only in the case of IU unavailability. Furthermore, to take advantage of the RISs significantly, DRAMP also leverages on the secondary reflections from the RISs, which can also be observed by the choice of two consecutive s in the figure. In this way, by prioritizing the choice of IUs over RISs, DRAMP avoids the aspect of unnecessary resource wastage. However, it is to be noted that this benefit does not come at a cost of violating the delay constraint and the corresponding analysis is already explained in Section <ref>. Fig. 
<ref> illustrates the number of RISs used to connect S to D as a function of the IU coverage. We consider a particular S-D pair and three different scenarios with IU density 100,400, and 900, respectively. Accordingly, we look at the number of RISs used to establish a multihop connection from S to D. For a particular IU density, we observe that the number of RISs used to connect S-D follows a non-increasing trend. This is justified by the fact that a smaller coverage implies more number of hops to connect S-D. Moreover, we know that higher carrier frequency results in higher data rate but lower coverage. Hence, the figure demonstrates that for identical IU density, higher carrier frequency results in higher number of RISs being used and vice-versa. Finally, we also note that irrespective of the IU density, the number of RISs used asymptotically reaches zero with increasing IU coverage. Fig. <ref> depicts the number of RISs used to connect S to D as a function of the IU density. In this figure, we establish connection between a S-D pair for three different IU coverage of 30,45, and 60 m, respectively. It is observed here that lesser number of RISs are being used as the IU density increases and here also, irrespective of the IU coverage radius, the value asymptotically reaches zero. In other words, for an environment with significantly large IU density, it is possible to completely avoid the usage of RISs. From Fig. <ref> and Fig. <ref> we observe that depending on the carrier frequency (i.e. , the IU coverage) and density of IUs in the surroundings, it is possible to establish a multihop S-D connection consisting of only IUs and not RISs. This further strengthens our claim of exploiting the IU traffic characteristics for reducing the dependency on the RISs. It is to be noted that our proposed DRAMP avoids wastage of resources but not at the cost of performance degradation. §.§ Verification of ν_B and ν_I by Monte Carlo Simulation Here for the generation of results, we define the average IU activity duty cycle as Ξ=μ_kμ_k+λ_k ∀ n. Fig. <ref> compares the analytically obtained ν_B in (<ref>) with the Monte Carlo simulations, where we consider the average `OFF' duration λ_k=4 ms ∀ n. It is observed that ν_B increases monotonically with Ξ and moreover, the rate of increase exponentially shoots up as Ξ→ 1. This is also intuitive, as increasing Ξ implies that the IU will remain busy most of the time and hence, the time duration for which it will remain busy given that it is currently busy will also increase. Furthermore, we also observe that for a particular Ξ, a higher value of δ implies a greater value of ν_B and vice-versa. We compare the analytically obtained ν_I with the corresponding Monte Carlo simulations in Fig. <ref>, where we consider the average `ON' duration μ_k=4 ms ∀ n. We observe that, irrespective of δ, the value of ν_I decreases with increasing Ξ, unlike ν_B. This is intuitive too, as increasing duty cycle implies that the IU will remain idle for a relatively lesser amount of the time. Furthermore, here also, we observe that for any particular Ξ, δ_1>δ_2 results in ν_I corresponding to δ_1 being greater than the ν_I corresponding to δ_2. Thus, based on Fig. <ref> and Fig. <ref>, we can state that ν_B and ν_I complement each other. Furthermore, it can also be said, that DRAMP will always have a tendency to select IUs with lower Ξ as relays. §.§ Performance Comparison Here we compare the performance of DRAMP with the existing benchmark scheme. 
Accordingly, the variants used for this purpose are the following: * DRAMP, M_b →∞: Unlike DRAMP with finite M_b, M_b →∞ denotes a delay tolerant scenario. In this context, our prime objective is information transfer from S to D and delay is not a concern. * DRAMP, no AM: This variant of DRAMP avoids the usage of adaptive modulation (AM) irrespective of the channel condition. Hence, in this case we communicate by a combination of IUs and RISs as proposed, but fail to exploit the advantages of the channel variations. * OPRIS <cit.>: The work investigates the optimal placement of RISs when the S-D pair is connected via single RIS only. In other words, it does not involve any aspect of connecting two devices that requires multihop communication. Fig. <ref> demonstrates an overall decreasing trend of 𝒟_𝒯 with IU coverage, irrespective of the deployed scheme. This is because although increasing coverage implies lesser number of hops, the pathloss factor becomes dominant. It is observed that the performance of DRAMP without AM and OPRIS are comparable and equally poor as compared to DRAMP irrespective of M_b. This degraded performance is attributed to the inability to exploit the temporal variation of the wireless channel. Moreover, we note that irrespective of coverage, DRAMP with M_b →∞ always outperforms its finite M_b counterpart. This is because the M_b →∞ scenario always chooses the best channel in each hop while connecting S-D, whereas a finite M_b scenario cannot do so always due to the application specific delay constraints. As a result, it can be concluded that if delay is not a critical factor for the application at hand, the performance of DRAMP gets enhanced by a finite margin. As in Fig. <ref>, Fig. <ref> depicts the advantage of the proposed protocol in terms of ℰ_ eff, which also reduces with increasing coverage irrespective of the framework being used. § CONCLUSION In this paper we proposed a novel double-RIS assisted adaptive modulation-based multihop routing protocol for D2D wireless networks, which takes into account the aspect of multi-RIS secondary reflection. The proposed protocol exploits the traffic characteristics of the users present in the surroundings to bring down the dependency on the already deployed RISs, which reduces the wastage of resources. Numerical results demonstrate that it is possible to significantly bring down the usage of RISs and under some circumstances, completely avoid them. Moreover, the results also showcase the significance of the proposed protocol in terms of enhanced data throughput and energy efficiency. An immediate extension of this work is to investigate a non-cooperative scenario, where the users are independent to decide whether they would like to act as a relay and if they do, then for which corresponding S-D pair in case of multiple requests. § PROOF OF THEOREM <REF> By replacing β_z=0 ∀ z<i in (<ref>), we obtain T_d_i=T_d-T_d_i-1Ψ-Ψ_i+T_d_i-1 i≥ 1, where T_d_0=T_dΨ. After trivial manipulations, T_d_1 can be alternatively written as T_d_1 =T_d-T_d_0Ψ-Ψ_1+T_d_0 = T_d (1/Ψ-Ψ_1+ 1/Ψ( 1-1/Ψ-Ψ_1)). Similarly, we obtain T_d_2 as a function of T_d_1, which in turn, can be further simplified in terms of T_d_0=T_dΨ as T_d_2 =T_d/Ψ-Ψ_2+ ( 1-1/Ψ-Ψ_2) T_d_1 (a)=T_d/Ψ-Ψ_2+( 1-1/Ψ-Ψ_2) ×(T_d/Ψ-Ψ_1+ ( 1-1/Ψ-Ψ_1) T_d_0) =T_d( 1/Ψ-Ψ_2 + 1/Ψ-Ψ_1( 1-1/Ψ-Ψ_2) . + . 1Ψ( 1-1/Ψ-Ψ_1) ( 1-1/Ψ-Ψ_2) ), where (a) follows from (<ref>). By proceeding in the same way for i ≥ 1, we get T_d_i = T_d ( ∑_p=1^i (1/Ψ-Ψ_p) ∏_q=p+1^i ( 1-1/Ψ-Ψ_q) . 
+ 1/Ψ∏_n=1^i ( 1-1/Ψ-Ψ_n) ). Moreover, it can also be observed that putting i=2 in (<ref>) recovers (<ref>).
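As an independent sanity check on the derivation above, the standalone snippet below (with a made-up non-decreasing sequence of Ψ_i values) iterates the β=0 recursion T_d_i = (T_d - T_d_i-1)/(Ψ-Ψ_i) + T_d_i-1 starting from T_d_0 = T_d/Ψ and confirms numerically that it coincides with the sum-product closed form of Theorem <ref>.

import math

def delay_recursion(T_d, Psi, Psi_seq):
    T = T_d / Psi                                    # T_{d_0}
    out = []
    for Psi_i in Psi_seq:
        T = (T_d - T) / (Psi - Psi_i) + T
        out.append(T)
    return out

def delay_closed_form(T_d, Psi, Psi_seq):
    out = []
    for i in range(1, len(Psi_seq) + 1):
        s = sum((1.0 / (Psi - Psi_seq[p - 1]))
                * math.prod(1 - 1.0 / (Psi - Psi_seq[q - 1]) for q in range(p + 1, i + 1))
                for p in range(1, i + 1))
        tail = math.prod(1 - 1.0 / (Psi - Psi_seq[n - 1]) for n in range(1, i + 1)) / Psi
        out.append(T_d * (s + tail))
    return out

T_d, Psi = 50e-3, 9                    # 50 ms budget, minimum Psi = ceil(l/r) = 9 hops
Psi_seq = [0, 1, 1, 2, 3, 4, 5]        # Psi_i = floor(||S-U_i||/r), non-decreasing
rec = delay_recursion(T_d, Psi, Psi_seq)
cf = delay_closed_form(T_d, Psi, Psi_seq)
assert all(math.isclose(x, y, rel_tol=1e-12) for x, y in zip(rec, cf))
print([round(1e3 * t, 3) for t in cf])  # the T_{d_i} (in ms) are monotonically non-decreasing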
http://arxiv.org/abs/2307.07658v1
20230714233846
On the limit of simply connected manifolds with discrete isometric cocompact group actions
[ "Jikang Wang" ]
math.DG
[ "math.DG" ]
[ [ ===== We study complete, connected and simply connected n-dim Riemannian manifold M satisfying Ricci curvature lower bound; moreover, suppose that it admits discrete isometric group actions G so that the diameter of the quotient space diam(M/G) is bounded. In particular, for any n-manifold M' satisfying diam(M') ≤ D and Ric≥ -(n-1), the universal cover and fundamental group (M̃',G) satisfies the above condition. Let {(M_i,p_i)}_i ∈ℕ be a sequence of complete, connected and simply connected n-dim Riemmannian manifolds satisfying Ric≥ -(n-1). Let G_i be a discrete subgroup of Iso(M_i) with diam(M_i/G_i) ≤ D where D>0 is fixed. Passing to a subsequence, (M_i, p_i,G_i) equivariantly pointed-Gromov-Hausdorff converges to (X,p,G). Then G is a Lie group by Cheeger-Colding and Colding-Naber. We shall show that the identity component G_0 is a nilpotent Lie group. Therefore there is a maximal torus T^k in G. Our first main result is that the fundamental group π_1(X,p) is generated by loops in the T^k orbit of p. In particular, π_1(X) is a finitely generated abelian group. Assume that M is a complete, connect and simply connected n-manifold with Ric≥ 0 and G is a discrete subgroup of Iso(M) with diam(M/G) ≤ D. The celebrated splitting theorem by Cheeger-Gromoll shows that M splits, M=ℝ^k × N, where N is a simply connected compact (n-k)-manifold. Our second result is that diam(N) ≤ D'(n,D). § INTRODUCTION Consider a sequence of complete and connected n-manifolds (M_i,p_i) with Ricci curvature lower bound, saying Ric≥ -(n-1). Passing to a subsequence if necessary, we may assume that (M_i,p_i) converges to a proper geodesic metric space (X,p) in the pointed Gromov-Hausdorff sense; we shall say that (M_i,p_i) converges to (X,p) for brevity. Such (X,p) is called a Ricci limit space. We call (X,p) non-collapsing if the volume of B_1(p_i), the open 1-ball of p_i, has a lower bound. If we further assume that that G_i is a closed subgroup of Iso(M_i), the isometry group of M_i, then passing to a subsequence, we have equivariant GH convergence, (M_i,p_i,G_i) eGH⟶ (X,p,G). The regularity and geometric structure theory of (X,p) have been studied extensively by Cheeger, Colding and Naber <cit.>. A celebrated theorem by Cheeger-Colding and Colding-Naber claims that G is a Lie group. We shall show in section 4 that if all G_i are discrete, then G_0, the identity component of G, is a nilpotent Lie group. The local topology of a Ricci limit space was studied in <cit.>, see section 2.3 for more details. In particular, X is semi-locally simply connected, thus the local fundamental groups of X and M_i are closely related. Roughly speaking, fix any R and choose large i, then for any loop γ⊂ B_R(p) ⊂ X and a loop γ_i ⊂ B_R(p_i) ⊂ M_i which is point-wise close to γ, γ is contractible in B_R(p) if and only if γ_i is homotopic to some loops, each of which is contained in a 10ϵ_i-ball. In particular, if γ_i is contractible, γ must be contractible. If diam(M_i) ≤ D, we have the following result for the fundamental group of the limit space. (<cit.>) Assume that {(M_i,p_i)}_i ∈ℕ is a sequence of complete and connected Riemannian n-manifolds with diam(M_i) ≤ D and Ric≥ -(n-1). If M_i GH⟶ X, then there is a surjective homomorphism ϕ_i: π_1(M_i) →π_1(X) when i is large enough. In particular, if all M_i are simply connected, X must be simply connected However, if diam(M_i) is unbounded, then X would be non-compact and there may be no global GH approximation between M_i and X. Thus the relation of π_1(M_i) and π_1(X) is unclear. 
We shall construct an example that a sequence of homogeneous simply connected manifolds converge to a manifold with a non-trivial fundamental group; see also <cit.>. Consider the Lie group SU(2)=S^3 and its Lie algebra is generated by X_1=[ i 0; 0 -i ], X_2=[ 0 1; -1 0 ], X_3=[ 0 i; i 0 ]. They are left-invariant vector fields on S^3. Then by defining a metric g(X_i,X_j)=δ^ij where 1 ≤ i,j ≤ 3 and δ^ij is the standard Kronecker delta, we get a unit 3 sphere. We construct a sequence of metrics g_i on S^3 by letting X_1,X_2/i,X_3/i orthogonal, g_i(X_1,X_1)=1, g_i(X_2,X_2)=g_i(X_3,X_3)=i^2. (S^3,g_i) has sectional curvature in [1/i^4, 4/i^2-3/i^4]. We may think (S^3,g_i) as a Hopf fibration S^1 → S^3 → S^2, where S^1 are the circle actions generated by X_1 (so they are right actions); we keep the metric along the fiber and blow up the metric on the tangent space perpendicular to the orbit. Then fix a base point on S^3 we have (S^3,g_i) GH⟶ℝ^2 × S^1, where the limit metric is flat. The limit space is not simply connected. Note that left actions of SU(2) act transitively and isometrically on (S^3,g_i). A natural question is that: under what condition, simply connected manifolds with Ricci curvature lower bound converge to a simply connected limit. Using the fact that Ricci limit spaces are semi-locally simply connected, Zamora's work implies that simply connected almost homogeneous manifolds with Ricci lower bound converge to a simply connected limit. (<cit.>) Assume that complete and connected Riemannian n-manifolds (M_i,p_i) satisfies Ric≥ -(n-1) and there is a discrete subgroup G_i of Iso(M_i) so that diam(M_i/G_i) ≤ 1/i; we call the sequence of M_i almost homogeneous. Suppose that (M_i,p_i) GH⟶ (X,p). Then for i large enough, there is a subgroup Γ_i of π_1(M_i) so that there is a surjective homomorphism Γ_i →π_1(X). In particular, if all M_i are simply connected, X must be simply connected. We briefly recall the proof of Theorem <ref> when all M_i are simply connected. Zamora showed that G, the limit group of G_i, is a connected, nilpotent and homemorphic to X. Thus it is enough to prove that G is simply connected. Since diam(M_i/G_i) ≤ 1/i, G_i is determined by G_i(20/i) = { g ∈ G_i, d(gp_i,p_i) ≤ 20/i }, see Lemma <ref>; here "determine" means that G_i(20/i) generates G_i and relations in G_i(20/i) generates all relations in G_i. Naively speaking, since G_i is determined by G_i(20/i), G is determined by a very small neighborhoood of the identity. It is well-known that a very small neighborhood of the identity in a Lie group uniquely determine a simply connected Lie group, thus G must be simply connected. In particular, G contains no compact subgroup since it is nilpotent. It was conjectured by Zamora that if simply connected manifolds M_i with Ric≥ -(n-1) and discrete isometric group actions G_i so that diam(M_i/G_i) ≤ D where D is fixed, then the limit space X is simply connected. In the case of diam(M_i/G_i) ≤ D, the limit group G is generally not homeomorphic to the limit space X, may be disconnected and contain a torus subgroup (thus G is not determined by a small neighborhood of the identity). Our first main theorem in this paper is a partial result about Zamora's conjecture. Assume that {(M_i,p_i)}_i ∈ℕ is a sequence of complete, connected and simply connected n-dim Riemannian manifolds with Ric≥ -(n-1). Let G_i be a discrete subgroup of Iso(M_i) so that diam(M_i/G_i) ≤ D. Suppose that (M_i,p_i,G_i) eGH⟶ (X,p,G). Then G_0, the identity component of G, is a nilpotent Lie group. 
Let T^k be the maximal torus in the identity component G_0. For any r>0, there exists R' > r, so that any loop γ in B_r(p) with γ(0)=p is homotopic to a loop contained in the T^k-orbit of p̃, while the image of the homotopy map is contained in B_R'(p). In particular, π_1(X,p) can be generated by the T^k orbit of p. In Theorem A, if we only assume that G_i is only closed but not discrete, then G_0 is not necessarily a nilpotent group and we can still show that X/G_0 is simply connected. Let M_i' be a sequence of n-manifolds with diam(M_i') ≤ D, Ric≥ -(n-1) and Vol≥ V > 0. Let (M̃_i',p̃_i,G_i) be the universal cover of M_i' with the fundamental group G_i. Passing to a subsequence if necessary, (M̃_i',p̃_i,G_i) eGH⟶ (X,p,G). Then G is discrete and X is simply connected. By Theorem A, it is enough to show that for any r>0, |G_i(r)| is bounded thus G is discrete, where G_i(r) = { g ∈ G_i, d(p̃_i,gp̃_i) ≤ r }. Let U_i ⊂M̃_i' be a fundamental domain of M_i containing p̃_i. Then U_i is contained in the closed ball B̅_2D(p̃_i). Then the orbit space G_i(r)U_i ⊂B̅_2D+r(p̃_i). Note that for any non-trivial g ∈ G_i, gU_i ∩ U_i has 0 measure. Therefore by the volume comparison theorem, |G_i(r)| = Vol( G_i(r)U_i)/Vol(U_i)≤Vol(B̅_2D+r(p̃_i))/V≤ C(n,D,r,V) We next consider manifolds with non-negative Ricci curvature. Recall the famous splitting theorem by Cheeger-Gromoll. ( <cit.>) If a complete and connected Riemannian manifold (M,g) contains a line and satisfies Ric≥ 0, then (M,g) is isometric to a product metric space (H ×ℝ,g_0+dt^2). In Theorem <ref>, if M admits cocompact isometric group actions G, then M splits as ℝ^k × N where N is compact (n-k)-manifold with non-negative curvature and Iso(M)= Iso(ℝ^k) ×Iso(N). The next result shows that if M is also simply connected and G is discrete, then N has a uniform diameter bound. Fix n and D, there exists D' so that the following holds. Assume that M is a complete, connected and simply connected n-manifold with Ric≥ 0 and G is a discrete subgroup of Iso(M) with diam(M/G) ≤ D. Then M=R^k × N with diam(N) ≤ D'(D,n), 0 ≤ k ≤ n. . Both conditions that, M is simply connected and G is discrete, are necessary in Theorem B. A flat circle could have an arbitrarily large diameter while admits cyclic isometric group actions so that the diameter of the quotient space is small; (S^3,g_i) in Example <ref> is simply connected and homogeneous while the the diameter could be very large. Assume that M_i is a sequence of simply connected n-manifolds with Ric≥ 0 and G_i is a discrete subgroup of Iso(M_i) with diam(M_i/G_i) ≤ D. Assume that M_i=ℝ^k × N_i where N_i is compact; we may assume that ℝ^k factor is same for all M_i. Then we have diam(N_i) ≤ D'. In particular, by Theorem <ref>, passing to a subsequence, (M_i,p_i) converges to a simply connected limit space. In Theorem A, if we further assume that M_i satisfies Ric≥ 0, then X is simply connected. We next sketch proofs of main theorems. For (M_i,p_i,G_i) eGH⟶ (X,p,G) in Theorem A, we shall show in section 3 that there is a normal subgroup G_i' ⊲ G_i converging to G_0 and (X_i,p_i,G_i,G_i') @>eGH>> (X,p,G,G_0) @VVπ V @VVπ V (X_i/G_i',p̅_i,G_i/G_i') @>eGH>> (X/G_0,p̅,G/G_0). Moreover, G_i/G_i' ≅ G/G_0. Then in section 4, we shall show that there is a global map from X_i/G_i' to X/G_0 so that it is a GH approximation on each 10D-ball (10D can be replaced by R_i →∞). Then we prove that π_1(X_i/G_i',p̅_i) is generated by loops contained in B_ϵ_i(p̅_i) where ϵ_i → 0. 
Since X/G_0 is semi-locally simply connected and the fundamental group of X_i/G_i' is generated by loops in a small ball, X/G_0 must be simply connected. Note that results in section 3 and 4 hold for any closed (not necessarily discrete) isometric group G_i. In section 5, we show that if G_i is discrete, G_0 is a nilpotent Lie group. Then X/T^k is simply connected where T^k is the maximal torus in G_0. Note that T^k actions are not necessarily free on X. In section 6, we show that there is an exact sequence π_1(T^k) →π_1(X) →π_1(X/T^k) → 0. We only need to prove for k=1 and apply an induction argument. Given a homotopy map H: [0,1] × [0,1] → X/T^1, we need to find a homotopy map H̃ on X so that π(H̃) coincides with H on the boundary of [0,1] × [0,1], where π: X → X/T^1. We may not lift H to X when T^1 actions are not free. We shall construct a square decomposition of [0,1] × [0,1] and lift H on the 1-skeleton H̃: K^1 → X. Then we can extend H̃ continuously on [0,1] × [0,1] since X is semi-locally simply connected. We prove Theorem B in section 7. We will find a discrete subgroup H” of Iso(N) so that diam (N/H”) ≤ C(n)D, then apply a lemma in <cit.>. An alternative proof of Theorem B given by Prof Rong is to consider a contradiction sequence and blow them down, then the limit group would be a connected and simply connected nilpotent Lie group which is not diffeomorphic a Euclidean space, a contradiction. We list some applications and questions in section 8. A final remark is that Theorem A and B hold in the corresponding RCD^* spaces. It is known that any RCD^*(K,N) space is semi-locally simply connected and the isometry group is a Lie group. Moreover, the splitting theorem holds for a RCD(0,N) space. See <cit.>. Thus the proof of Theorem A and B work in the RCD^* setting. The author would thank Prof Wei for suggesting him studying the limit of simply connected manifolds with cocompact isometric actions. The author would also thank for Jaime Santos-Rodríguez and Sergio Zamora-Barrera for sending him their manuscript. The author would thank Prof Kapovitch for helpful discussions and Prof Rong for providing an alternative proof of Theorem B. § PRELIMINARIES §.§ Equivariant Gromov-Hausdorff convergence and isometry group on a Ricci limit space We review some notations about equivariant Gromov-Hausdorff convergence, introduced by Fukaya and Yamaguchi <cit.>. Let (X,p) and (Y,q) be two proper geodesic metric spaces. Let H and K be closed subgroups of Iso(X) and Iso(Y), respectively. For any r>0, define H(r)={h ∈ H | d(hp,p) ≤ r }, K(r)={k ∈ K | d(kq,q) ≤ r }. Both H(r) and K(r) are compact in the compact-open topology. For ϵ>0, a pointed ϵ-equivariant Gromov-Hausdorff approximation is a triple of maps (f,ϕ,ψ): f:B_1/ϵ(p) → B_(1/ϵ)+ ϵ(q), ϕ:H(1/ϵ) → K(1/ϵ), ψ:K(1/ϵ) → H(1/ϵ) with the following conditions: (1) f(p)=q, f(B_1/ϵ(p)) is 2ϵ-dense in B_(1/ϵ) + ϵ(q) and |d(f(x_1),f(x_2))-d(x_1,x_2)|≤ϵ for all x_1,x_2∈ B_1/ϵ(p); (2) d(ϕ(h)f(x),f(hx)) < ϵ for all h ∈ H(1/ϵ) and x ∈ B_1/ϵ(p); (3) d(kf(x),f(ψ(k)x))<ϵ for all k ∈ K(1/ϵ) and x ∈ B_1/ϵ(p). The equivariant Gromov-Hausdorff distance d_eGH((X_i,p_i,G_i),(X,p,G)) is defined as the infimum of ϵ so that there exists a ϵ-eGH approximation. A sequence of metric spaces with isometric actions (X_i,p_i,G_i) converge to a limit space (X,p,G), if d_eGH((X_i,p_i,G_i),(X,p,G)) ≤ϵ_i → 0. 
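Although the spaces of interest here are rarely finite, condition (1) of the definition above is easy to test on finite samples. The toy computation below is entirely my own construction and plays no role in the arguments: it takes two finite samples of a circle with their intrinsic metrics and a base-point-preserving candidate map f, and measures the metric distortion of f and how dense its image is, i.e. the two quantities that an ε-GH approximation must control (the ball restrictions in the definition are irrelevant for compact toy examples).

import numpy as np

def circle_sample(n):
    # n equally spaced points on the unit circle with the intrinsic (arc-length) metric
    t = 2 * np.pi * np.arange(n) / n
    diff = np.abs(t[:, None] - t[None, :])
    return np.minimum(diff, 2 * np.pi - diff)

def gh_quality(dX, dY, f, p, q):
    # return the distortion of f, how far points of Y can be from the image of f,
    # and whether f sends the base point p to the base point q
    f = np.asarray(f)
    distortion = np.max(np.abs(dX - dY[np.ix_(f, f)]))
    density = np.max(np.min(dY[f, :], axis=0))
    return distortion, density, bool(f[p] == q)

dX, dY = circle_sample(12), circle_sample(36)
f = 3 * np.arange(12)                  # include every third sample point of Y
dist, dens, base_ok = gh_quality(dX, dY, f, p=0, q=0)
print(dist, dens, base_ok)             # distortion 0.0, each point of Y within 2*pi/36 of im(f)
print("qualifies as an eps-GH approximation for any eps >=", max(dist, dens / 2))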
Given a GH approximation f in the above (1), we can construct an admissible metric on the disjoint union B_1/ϵ(p) ⊔ B_1/ϵ(q) so that B_1/ϵ(p) ↪ B_1/ϵ(p) ⊔ B_1/ϵ(q), B_1/ϵ(q) ↪ B_1/ϵ(p) ⊔ B_1/ϵ(q) are isometric embedding and for any x ∈ B_1/ϵ(p) d(x,f(x)) ≤ 2ϵ. We always assume such an admissible metric when the GH distance between two metric spaces is small. Sometimes we may consider that (X,p,H) is ϵ close to (Y,q,K) while the approximation (f,ϕ,ψ) is not given explicitly. Fix r=1/(10ϵ). For any x ∈ B_r(p) ⊂ X, when we say "find a point y ∈ Y close to x", we just mean choose y=f(x); similarly, for any h ∈ H(r), when we say "find k ∈ K close to h", we just mean choose k=ϕ(h). We have the following pre-compact result in <cit.>. Let (X_i,p_i) be a sequence of proper geodesic metric spaces converging to a limit space (X,p) in the pointed Gromov-Hausdorff sense. For each i, let G_i be a closed subgroup of Isom(X_i), the isometry group of X_i. Then passing to a subsequence if necessary, (X_i,p_i,G_i)eGH⟶ (X,p,G), where G is a closed subgroup of Isom(X). Moreover, the quotient spaces (X_i/G_i, p̅_i) pointed Gromov-Hausdorff converge to (X/G,p). Given a sequence of complete n-manifolds (M_i,p_i,G_i) with Ric≥ -(n-1) and G_i an closed subgroup of Iso(M_i). By Gromov's precompactness theorem and Theorem <ref>, passing to a subsequence if necessary, (M_i,p_i,G_i) eGH⟶ (X,p,G). Then G is a Lie group by Cheeger-Colding, Colding-Naber <cit.>. The isometry group of a Ricci limit space is a Lie group. §.§ Nilpotent Lie group We list some basic results about a nilpotent Lie group, see <cit.> chapter 14 for further reference. Let G be a nilpotent Lie group and G_0 be the identity component. Then for any compact subgroup K ⊂ G, K commutes with G_0. Let 𝔤 be the Lie algebra of G_0. Consider the adjoint representation Ad: G →GL(𝔤). The image Ad(K) is compact. Choose any g ∈ K. Since G is nilpotent, [g,..[g,[g, · ]]...]=0 after k commutators for some k >0. Thus we have (Ad(g)- Id)^k=0. In particular, Ad(g) is unipotent and we may choose a basis of 𝔤 so that Ad(g) is an upper triangular matrix with diagonal elements equal to 1. If Ad(g) is not the identity matrix, the closure of ⟨Ad(g) ⟩ is a non-compact group, a contradiction. Thus Ad(g)=Id and g commutes with G_0. A compact nilpotent group is not necessary abelian. For example, dihedral group D_2^k is a finite nilpotent group but not abelian when k ≥ 2. Recall that a compact subgroup K ⊂ G is a maximal compact subgroup if any compact subgroup of G is conjugate to a subgroup of K. Any connected Lie group G_0 has a maximal compact subgroup K (unique up to a conjugation) and G_0 is topologically K ×ℝ^l for some l. Let G_0 be a connected nilpotent Lie group. Then any compact subgroup is contained in the center of G_0. In particular, the maximal compact subgroup of G_0 is a unique (without conjugation) torus T^k and G_0/T^k is homeomorphic (actually diffeomorphic) to ℝ^l. §.§ Local topology of a Ricci limit space The first topological result for a general Ricci limit space was proved by Sormani and Wei that any Ricci limit space has a universal cover <cit.>; while it was unknown whether the universal cover is simply connected by their method. Recently, it was proved that any Ricci limit space (more generally, any RCD^*(K,N) space) is semi-locally simply connected <cit.>, thus the universal cover must be simply connected. Actually we proved a little more than semi-local simple connectedness. 
We call a metric space strongly semi-locally simply connected if for any point p and R>0, there exists r<R so that any loop in B_r(p) is contractible in B_R(p). Any Ricci limit space is strongly semi-locally simply connected. Recall that for any two paths c_1 and c_2 with c_1(1)=c_2(0), we define c_1c_2(t)={ c_1(2t), 0 ≤ t ≤ 1/2 c_2(2t-1), 1/2 ≤ t ≤ 1 . If there is another path c_3 with c_2(1)=c_3(0), it is well-known that (c_1c_2)c_3 is homotopic to c_1(c_2c_3), thus we just use the notation c_1c_2c_3. We next define a δ-contractible loop. It is related to the δ-cover introduced in <cit.>; see also <cit.>. (δ-contractible loop) We call a loop γ at p is δ-contractible if it is homotopic to a loop generated by some [α^-1_j β_j α_j], where β_j is a loop lying in a δ-ball B_x_j(δ), and α_j is a path from β(0) to p. Assume (M_i,p_i) GH⟶ (X,p) with Ricci curvature lower bound. ϵ_i is the GH distance between (M_i,p_i) and (X,p). The following lemma in <cit.> shows that, for any path in B_1/10ϵ_i(p) ⊂ X, we can find a point-wise close path in B_1/10ϵ_i(p_i) ⊂ M_i (and vice versa). For any R < 1/(10ϵ_i) and any path γ⊂ B_R(p), there exists γ_i ⊂ B_R+3ϵ_i(p_i) so that d(γ(t),γ_i(t)) ≤ 3ϵ_i for all t ∈ [0,1]. Conversely, for any γ_i' ⊂ B_R(p_i), there exists γ' ⊂ B_R+3ϵ_i(p) so that d(γ'_i(t),γ'(t)) ≤ 3ϵ_i for all t ∈ [0,1]. The next lemma shows the relation between the local π_1 of M_i and X. Fix R and choose i large enough. Assume that a loop γ_i ⊂ B_R(p_i) is point-wise 3ϵ_i-close to a loop γ⊂ B_R(p). Then if γ_i is 10ϵ_i-contractible in B_R(p_i), then γ is contractible in B_R+2(p). Conversely, if γ is contractible in B_R(p), then γ_i is a 10ϵ_i-contractible in B_R+1(p_i). Although the proof of Lemma <ref> can be found in <cit.>, we give a detailed proof since it is related to the proof of Corollary <ref> and Lemma <ref>. We shall use a square decomposition of [0,1] × [0,1]; see <cit.> where a triangular decomposition was used. A square decomposition K of [0,1] × [0,1] is that we divide it evenly into l^2 small squares for some l, each of which can be written as [k_1/l, (k_1+1)/l] × [k_2/l,(k_2+1)/l] for some 0 ≤ k_1,k_2 ≤ l-1. We fix i large enough so that ϵ_i ≤ 1/100R and for any x ∈ B_R+1(p), any loop in B_100ϵ_i(x) is contractible in B_1(x). First assume that γ_i is 10ϵ_i-contractible by a homotopy map H:[0,1] × [0,1] → B_R(p_i) so that H(t,0)=γ, H(0,s)=H(1,s)=γ(0), and H(t,1) is generated by loops [α^-1_j β_j α_j] described in Def <ref>. We can find a square decomposition K of [0,1] × [0,1] so that for each small square □ , the diameter of H(□) is at most ϵ_i. We shall define H' from K^1, the 1-skeleton of the chosen square decomposition, to B_R+1(p). Define H'(t,0)=γ(t), H'(0,s)=H'(1,s)=γ(0). We next define H'(t,1). H(t,1) is generated by some loops [α^-1_j β_j α_j] with α(1)=γ_i(0) and β_j is contained in a 10ϵ_i-ball. For each α_j ⊂ B_R(p_i), we find a nearby path α_j' ⊂ B_R+10ϵ_i(p) so that α_j'(1)=γ(0). Define H'(t,1) to be the loop generated by [(α_j')^-1α_j']. H'(t,1) is a contractible loop and close to H(t,1) after a proper reparameterization, since β_j is close to a point. For any vertex of the decomposition y ∈ K^0 where H'(y) is not defined, we define H'(y) to be a point in X close to H(y). For any edge c ⊂ K^1 where H'(c) is not defined, we define H'(c) be a geodesic from two end points. Under this construction, for each square □⊂ K^1, the diameter of H'(∂□) is at most 20ϵ_i, thus it is contractible in a 1-ball. 
Therefore we can extend the definition area of H' to [0,1] × [0,1], which contracts γ. Now we prove the converse part. Assume that γ is contractible by a homotomy map H':[0,1] × [0,1] → B_R(p). By a similar construction, we define H from a 1-skeleton of a square decomposition K of [0,1] × [0,1] to B_R+1(p_i), with H(t,0)=γ_i(t), H(t,1)=H(0,s)=H(1,s)=γ_i(0). And for each small square □, H(∂□) has diameter bound 10ϵ_i. Now we shall show that γ_i is 10ϵ_i-contractible using H. [help lines, color=black, step=2cm] (0,0) grid (8,8); at (1,0) [below] γ_1; at (3,0) [below] γ_2; at (5,0) [below] γ_3; at (7,0) [below] γ_4; at (0,1) [left] γ_5; at (2,1) [left] γ_6; at (4,1) [left] γ_7; at (6,1) [left] γ_8; at (8,1) [left] γ_9; at (1,2) [above] γ_10; at (3,2) [above] γ_11; at (5,2) [above] γ_12; at (7,2) [above] γ_13; at (0,0) [left] (0,0); at (8,8) [right] (1,1); at (0,8) [left] (0,1); at (8,0) [right] (1,0); For simplicity, we assume that we divide [0,1] × [0,1] into 16 squares as above. By assumption, the diameter of each H(∂□) is at most 10ϵ_i. Let γ_1 be the image of H on [0,1/4] × 0, γ_2 be the image of H on [1/4,1/2] × 0, etc. The direction is going right (up) for horizontal (vertical) paths. Then γ_1γ_2γ_3γ_4=γ_i. We claim that we can compose some loops like [α^-1_j β_j α_j] described in Def <ref> (choosing δ=10ϵ_i) to γ_i, after cancelling out adjacent inverse path, we can get γ_5γ_10γ_11γ_12γ_13γ_9^-1. If the claim is proved, then we use an induction argument to show that we can compose more loops like [α^-1_j β_j α_j] to γ_i to get the constant loop H(0,[0,1])H([0,1],1)H(1,[0,1])^-1, therefore γ_i is 10ϵ_i-contractible. Now we prove the claim. Consider the loop c_1=γ_1γ_2γ_3γ_8γ_13γ_9^-1γ_4^-1γ_3^-1γ_2^-1γ_1^-1. c_1 is a loop like [α^-1βα] in Def <ref> by choosing α^-1= γ_1γ_2γ_3 and β=γ_8γ_13γ_9^-1γ_4^-1. c_1γ_i=γ_1γ_2γ_3γ_8γ_13γ_9^-1. Let c_2=γ_1γ_2γ_7γ_12γ_8^-1γ_3^-1γ_2^-1γ_1^-1. Then c_2 is also a loop like is a loop like [α^-1βα], and c_2c_1γ_i=γ_1γ_2γ_7γ_12γ_13γ_9^-1. Similarly, let c_3=γ_1γ_6γ_11γ_7^-1γ_2^-1γ_1^-1, c_4=γ_5γ_10γ_6^-1γ_1^-1. Then c_4c_3c_2c_1γ_i= γ_5γ_10γ_11γ_12γ_13γ_9^-1. The following picture shows how we move the path by adding loops like [α^-1βα]. The claim is proved. (0,0) - - (2,0); (3,0) - - (4.5,0) - - (4.5,0.5) - - (5,0.5) - - (5,0); (6,0) - - (7,0) - - (7,0.5) - - (8,0.5) - - (8,0); (9,0) - - (9.5,0) - - (9.5,0.5) - - (11,0.5) - - (11,0); at (1,0) [above] γ_i; at (4,0) [below] c_1γ_i; at (7,0) [below] c_2c_1γ_i; at (10,0) [below] c_3c_2c_1γ_i; (0,0) - - (0,0.5) - - (2,0.5) - - (2,0); at (1,0) [below] c_4c_3c_2c_1γ_i; §.§ Slice theorem Let Y be a completely regular topological space and G be a Lie group. We call Y a G-space if G acts as homeomorphisms on Y. For any point y ∈ Y, define the isotropy group at y, G_y= { g ∈ G | gy=y }. Given a subset S ⊂ Y, we say S is G_y-invariant if G_y S=S. For a G_y-invariant set S, define G ×_G_y S = G × S / ∼, with quotient topology, and ∼ is the equivalence relation (g,s) ∼ (gh^-1, hs) for all g ∈ G, h ∈ G_y, s ∈ S. There is a natural left G-action on G ×_G_y S by g:[g',s] ↦ [gg',s], where g ∈ G and [g',s] ∈ G ×_G_y S. We define S ⊂ Y a slice at y if the followings hold: (1). y ∈ S and S is G_y-invariant; (2). GS is an open neighborhood of y; [g,s] ↦ gs is a G-homeomorphism between G ×_G_y S and GS. In particular, the second condition above implies that (G ×_G_y S) / G = S/G_y is homeomorphic to GS/G, which a open set in the quotient space. By the definition of a slice S at y, for any g ∈ G, gS ∩ S ≠∅ iff g ∈ G_y. 
Thus for any x ∈ S, G_x ⊂ G_y. The following slice theorem is due to Palais <cit.>. Let G be a Lie group, Y be a G-space and y ∈ Y. The following two conditions are equivalent: (1). G_y is compact and there is a slice at y. (2). There is a neighborhood U of y ∈ Y such that {g ∈ G | gU ∩ U ≠ 0 } has compact closure in G. For any closed isometric group actions on a proper geodesic metric space (in particular, a Ricci limit space), the second statement in Theorem <ref> is always fulfilled, thus a slice always exists. The existence of a slice guarantees the path-lifting property: any path in Y/G can be lifted to a path in Y. Note that a loop could be lifted to a non-loop path. However, if we lift a loop on a slice, saying S is a slice at y and a loop γ⊂ S/G_y with γ(0)=π(y) ∈ S/G_y, then the lifting path of γ must be a loop since π(y) has a unique pre-image point in S. For a proper geodesic metric space X with G a closed subgroup of Iso(X), the r-ball B_r(x) is G_x invariant for any x ∈ X. For any slice S at x ∈ X, then S ∩ B_r(x) is still a slice by the definition; thus we always assume that a slice is contained in a small ball. Using the existence of a slice, we can show that the quotient of a Ricci limit space by closed isometric group actions is still strongly semi-locally simply connected. Let X be a strongly semi-locally simply connected proper geodesic metric space and Lie group G be a closed subgroup of Iso(X). Then X/G is strongly semi-locally simply connected. The slice of G actions exists at every point. Choose any p ∈ X and p̅=π(p) ∈ X/G, we shall find r>0 so that any loop in B_r(p̅) is contractible in B_1(p̅). We have t>0 so that any loop in B_t(p) is contractible in B_1(p). By assumption there is a slice S at p, we may assume S ⊂ B_t(p). Since π(S)=S/G_p contains a neighborhood of p̅, we can find r >0 so that B_r(p̅) ⊂ S/G_p. For any loop γ⊂ B_r(p̅), we may assume γ(0)=p̅; otherwise we consider cγ c^-1 where c is a geodesic from p̅ to γ(0). We can lift γ to a loop in S, which is contractible in B_1(p). Project down the homotopy map, γ is contractible in B_1(p̅). § NORMAL SUBGROUP AND CONVERGENCE We start with a preliminary lemma about the equivariant convergence of normal subgroups. Assume that (X_i,p_i,G_i) are proper geodesic metric spaces with G_i closed subgroup of Iso(X_i). Let G_i' be a closed normal subgroup of G_i. Suppose that (X_i,p_i,G_i,G_i') eGH⟶ (X,p,G,G') Then we have (X_i,p_i,G_i,G_i') @>eGH>> (X,p,G,G') @VVπ V @VVπ V (X_i/G_i',p̅_i,G_i/G_i') @>eGH>> (X/G',p̅,G/G'). We should mention that G/G' (or G_i/G_i') does not necessarily act effectively on X/G' (or X_i/G_i'); that is, some non-trivial elements g ∈ G/G' may be a trivial action on X/G'. For example, we may take X=ℝ, G=Iso(X) which is generated by translations and reflection, G' is the group of all translations. Then X/G' is a point while G/G'=Z_2 acts trivially on the point. We shall construct equivariant approximation maps from (X_i/G_i',p̅_i,G_i/G_i') to (X/G',p̅,G/G'). The non-effectiveness does not matter in this paper; readers may quotient out the subgroup in G/G' (or G_i/G_i') generated by trivial-action elements. For any x,y ∈ X_i and g ∈ G_i, let π(x),π(y) ∈ X_i/G_i' and π(g) ∈ G_i/G_i' be their quotient images. Then G_i/G_i' acts on X_i/G_i' by π(g)π(x)=π(gx), which is well-defined since G_i' is a normal subgroup. Recall that d(π(x),π(y)) = inf{ d(x,hy) | h ∈ G_i' }. We can show that we can actually take the minimum. Let r= d(π(x),π(y))+1. 
Then the set {h ∈ G_i', d(x,hy) ≤ r} is non-empty and compact. Thus d(π(x),π(y)) = min{ d(x,hy) | h ∈ G_i' }. Then we have d(π(gx),π(gy))= min{ d(gx,hgy) | h ∈ G_i' }= min{ d(gx,gh'y) | h' ∈ G_i' } since G_i' is normal; therefore d(π(x),π(y))=d(π(gx),π(gy)). G_i/G_i' acts isometrically on X_i/G_i'. We next show that G' is normal in G. For any g ∈ G' and h ∈ G, we may choose a large r so that g ∈ G'(r) and h ∈ G(r). Recall that G'(r)= {g ∈ G' | d(gp, p) ≤ r }, G(r)= { g ∈ G | d(gp,p) ≤ r }. Then we may choose g_i ∈ G'_i(r) and h_i ∈ G_i(r) converging to g and h respectively. Since G_i' is normal, h_i^-1g_ih_i ∈ G_i'(3r). Then the limit h^-1gh ∈ G'(3r). Thus G' is normal. By the same proof in the last paragraph, G/G' acts isometrically on X/G'. We finally show that G_i/G_i' converges to G/G'. It suffices to show the convergence on any r-ball. Assume that (f,ϕ,ψ) is an ϵ_i eGH approximation from (B_10r(p_i),p_i,G_i(10r),G_i'(10r)) to (B_10r(p),p,G(10r),G'(10r)). We shall construct a 10ϵ_i approximation (f̅,ϕ̅,ψ̅) from (B_r(p̅_i),p̅_i,G_i/G_i'(r)) to (B_r(p̅),p̅,G/G'(r)). Here B_r(p̅_i) ⊂ X_i/G_i', B_r(p̅) ⊂ X/G', and G_i/G_i'(r)= {g ∈ G_i/G_i' | d(gp̅_i, p̅_i) ≤ r }, G/G'(r)= {g ∈ G/G' | d(gp̅, p̅) ≤ r }. We still use π for all quotient maps. For any x̅∈ B_r(p̅_i), we can find a preimage point x ∈ B_r(p_i). Define f̅(x̅)=π(f(x)) ∈ X/G'. By the proof of Theorem <ref>, f̅ is a GH approximation map. Now we construct the aprroxiamtion map ϕ̅:G_i/G_i'(r) → G/G'(r); the construction of ψ̅ is same. For any h̅∈ G_i/G_i'(r), we can find a preimage action h ∈ G_i of h̅. Since d(p̅_i,h̅p̅_i) ≤ r, we may find g ∈ G_i' so that d(p_i,ghp_i) ≤ r. Since h^-1gh ∈ G_i', gh is also a preimage of h̅. Therefore we may assume d(p_i,hp_i) ≤ r, otherwise we choose gh. Define ϕ̅(h̅)=π(ϕ(h)) ∈ G/G'(r). Now we prove that ϕ̅ is an eGH approximation, that is, for any x̅∈ B_r(p̅_i) and h̅∈ G_i/G_i'(r), d(ϕ̅(h̅)f̅(x̅),f̅(h̅x̅)) ≤ϵ_i. Actually by our definition, choose preimages h ∈ G_i(r) and x ∈ B_r(p_i) of h̅ and x̅, ϕ̅(h̅)f̅(x̅)=π(ϕ(h)f(x)), f̅(h̅x̅)=π(f(hx)). Since d(ϕ(h)f(x),f(hx)) ≤ϵ_i by the definition of ϕ, d(π(ϕ(h)f(x)),π(f(hx))) ≤ϵ_i. The above proof shows G_i/G_i' (r) =⟨ G_i(r),G_i' ⟩ /G_i'. The image π(G_i(r)) is exactly G_i/G_i' (r) where π:G_i → G_i/G_i' is the projection map. In particular, let r=0, π((G_i)_p_i) is exactly the isotropy group (G_i/G_i')_p̅_i. The main goal of this section is to prove the following theorem, which explains how the global property that X_i is simply connected is related to the GH convergence. Assume that (X_i,p_i,G_i) are simply connected proper geodesic metric spaces with G_i a closed subgroup of Iso(X_i). Suppose that (X_i,p_i,G_i) eGH⟶ (X,p,G) and there is a closed normal subgroup G' ⊲ G satisfying: G/G' is discrete; G' is generated by G'(R); the diameter of X_i/G_i is less than D. Then there exists a sequence of normal subgroup G_i' of G_i so that G_i' converges to G'; G_i/G_i' ≅ G/G' for large i; G_i' is generated by G_i'(R+ ϵ_i), ϵ_i → 0; G/G' is finitely presented. Note that the above theorem was proved by Fukaya-Yamaguchi under an additional condition that G_i is discrete and free <cit.>. We shall use some ideas in <cit.>. Before proving Theorem <ref>, we discuss the groupfication of a pseudo-group. Let S be a symmetric subset of a group H; we always assume that S contains the identity. We call S a pseudo-group. We can construct Ŝ, the groupfication of S, by the following way. Let F_S be the free group generated by e_g for all g ∈ S. 
We may quotient F_S by the normal subgroup generated by all elements of the form e_g_1e_g_2e_g_1g_2^-1, where g_1,g_2 ∈ S with g_1g_2 ∈ S. The quotient group is Ŝ. For any group H and map ϕ:S → H, we call ϕ a homomorphism if ϕ(g_1)ϕ(g_2)=ϕ(g_1g_2) for all g_1,g_2 ∈ S with g_1g_2 ∈ S. Then there is a natural homomorphism i: S →Ŝ by i(g)=[e_g] where [e_g] is the quotient image of e_g. Any homomorphism ϕ:S → H can be extended to ϕ̂:Ŝ→ H so that ϕ=ϕ̂∘ i. There is a homomorphism π:Ŝ→ H by π([e_g_1e_g_2...e_g_k])=g_1g_2...g_k. π∘ i is the identity map on S, thus i is injective. If S generates H, π is surjective. For any group H and a symmetric subset S ⊂ H which generates H, we call S determining if π: Ŝ→ H is an isomorphism. The following lemma was proved by Santos and Zamora in <cit.>. Their proof only concerns the case that H is discrete; the general case that H is closed can be proved by adding some arguments in <cit.>. We shall sketch the proof for readers' convenience. Fix D>0. Let (Y,q) be a simply connected pointed proper geodesic metric space and H be a closed subgroup of Iso(Y) so that diam(Y/H) ≤ D. Then H(20D)={g ∈ H ,d(gq,q) ≤ 20D } is a determining subset of H. H(20D) generates H since diam(Y/H) ≤ D. We show that H(20D) is determining. Let Ĥ be the groupfication of H(20D). H has compact-open topology. We construct topological structure on Ĥ. Let K={g ∈ H|d(gq,q)< 5D} and let { U_λ}_λ∈Λ be the set of all open sets in K. We use left translations on Ĥ and the map i: K →Ĥ to define a basis on Ĥ; more precisely, {ĝ i(U_λ) | ĝ∈Ĥ,λ∈Λ} generates a topology on Ĥ. Since the topology of H is generated by the topology of K and left H-actions, i:K → i(K) is embedding. Let B̅_10D(q) be the closed ball at q with radius 10D. We define Ĥ×_H(20D)B̅_10D(q)= Ĥ×B̅_10D(q) / ∼, where the equivalence relation is given by (ĝi(g),x) ∼ (ĝ,gx) for all g ∈ H(20D), ĝ∈Ĥ, x ∈B̅_10D(q) with gx ∈B̅_10D(q). We endow Ĥ×_H(20D)B̅_10D(q) with the quotient topology. We will always write [ĝ, x] as elements in Ĥ×_H(20D)B̅_10D(q). We first show that Ĥ×_H(20D)B̅_10D(q) is connected. B̅_10D(q) is path connected thus connected. [ĝ,B̅_10D(q)] is contained in a connected component of Ĥ×_H(20D)B̅_10D(q) for any ĝ∈Ĥ. Notice that for any g ∈ H, g B̅_10D(q) ∩B̅_10D(q) ≠∅ iff g ∈ H(20D). In particular, for any ĝ∈Ĥ and g ∈ H(20D), [ĝ,B̅_10D(q)] and [ĝi(g),B̅_10D(q)] are in the same connected component. Since i(H(20D)) generates Ĥ, Ĥ×_H(20D)B̅_10D(q) is connected. Define Ψ:Ĥ×_H(20D)B̅_10D(q) → Y by Ψ([ĝ,x])=π(ĝ)x. Ψ is well-defined and surjective, since diam(Y/H) ≤ D and π: Ĥ→ H is surjective. By the argument in <cit.>, Ψ is locally homeomorphic and Ĥ×_H(20D)B̅_10D(q) is a covering space of Y. Since Y is simply connected and Ĥ×_H(20D)B̅_10D(q) is connected, Ψ is a homeomorphism map. Then for any ĝ∈ (π) and x ∈B̅_10D(q), [ĝ,x]=[1,x]= Ψ^-1(x) where 1 means the identity. Then ĝ must be the identity map by lemma 5.2 in <cit.>. Thus (π) is trivial and H(20D) is determining. Assume that S is a determining set of group H and H' is a normal subgroup H. If S ∩ H' generates H', π(S) is a determining set of H/H' where π: H → H/H' is the projection map. It's obvious that π(S) is symmetric and generates H/H'. We shall prove that for any g_1,g_2,..,g_k ∈π(S) so that g_1g_2...g_k=1, then in the free group F_π(S), e_g_1e_g_2...e_g_k is contained in the normal subgroup generated by all elements of the form e_h_1e_h_2e_h_1h_2^-1, where h_1,h_2 ∈π(S) with h_1h_2 ∈π(S). Let π: H → H/H'. We may find g̃_j ∈ S so that π(g̃_j)=g_j for each 1 ≤ j ≤ k. 
Then g̃_1g̃_2...g̃_k ∈ H'. Since S ∩ H' generates H', we may find g̃_k+1,...g̃_k+k'∈ S ∩ H' so that g̃_1g̃_2...g̃_kg̃_k+1,...g̃_k+k' = 1. Since S is a determining set, in the free group F_S, e_g̃_1e_g̃_2...e_g̃_k+k' is contained in the normal subgroup generated by all elements of the form e_h̃_1e_h̃_2e_h̃_1h̃_2^-1, where h̃_1,h̃_2 ∈ S with h̃_1h̃_2 ∈ S. Notice that π(g̃_k+j) is the identity for each 1 ≤ j ≤ k'. Project down to π(S), e_g_1e_g_2...e_g_k is contained in the normal subgroup generated by all elements of the form e_h_1e_h_2e_h_1h_2^-1, where h_1,h_2 ∈π(S) with h_1h_2 ∈π(S), thus π(S) is a determining set. Consider (X_i,p_i,G_i) eGH⟶ (X,p,G) in the setting of Theorem <ref>. We may assume D ≥ R. Let S_i(20D) be the subset of elements of G_i(20D) which are ϵ_i close to an element G'. Notice that G/G' is discrete. Thus there is ϵ > 0 so that for any g ∈ G(20D) and h ∈ G'(20D), if g is ϵ close to h, then g ∈ G'(20D). Here ϵ closeness means d(gx,hx) < ϵ for all x ∈ B_1/ϵ(p). Then for large i and any g_i ∈ G_i(20D) and h_i ∈ S_i(20D), if g_i is ϵ/2 close to h_i, then g_i ∈ S_i(20D) as well. In particular, any element in G_i ϵ/2 close to the identity is contained in S_i(20D). Therefore S_i(20D) is a closed subset when i is large enough and G_i' = ⟨ S_i(20D) ⟩ is a closed subgroup of G_i. G_i' is a normal subgroup in G_i for all large i. Since G_i(20D) generates G_i, it suffices to prove g G_i' g^-1=G_i' for any g ∈ G_i(20D); then we only need show gS_i(20D)g^-1⊂ G_i' for any g ∈ G_i(20D). Assume that we have a contradicting sequence g_i ∈ G_i(20D) and h_i ∈ S_i(20D) so that g_ih_ig_i^-1∉ G_i'. Passing to a subsequence if necessary, g_i → g ∈ G(20D), h_i → h ∈ G'(20D) and g_ih_ig_i^-1→ ghg^-1∈ G'(60D) since G' is normal. Since G'(R) generates G' and R ≤ D, we can find t_1,t_2,...,t_k ∈ G'(D) so that ghg^-1=t_1t_2...t_k. We may find t_ij∈ S_i(20D) converging to t_j ∈ G'(D) for each 1 ≤ j ≤ k, in particular, g_ih_ig_i^-1=t_i1t_i2...t_iks_i, where s_i ∈ G_i is close to the identity. Since the identity map is contained in G_i, s_i ∈ S_i(20D) as well and g_ih_ig_i^-1∈ G_i', a contradiction. By Lemma <ref>, we can find a normal closed subgroup G” in G. (X_i,p_i,G_i,G_i') @>eGH>> (X,p,G,G”) @VVπ V @VVπ V (X_i/G_i',G_i/G_i',p̅_i) @>eGH>> (X/G”,p̅,G/G”). Recall G_i'=⟨ S_i(20D) ⟩ and define G_i' (20D) = { g ∈ G_i', d(gp_i,p_i) ≤ 20D} Although S_i(20D) ⊂ G_i'(20D), it is not clear so far that they are actually same. Then the group G” contains G'(20D), thus G” contains G' since G'(R) generates G' and R ≤ D. In particular, G/G” is also discrete. We shall show G'=G”, which is equivalent to S_i(20D)=G_i'(20D), at the ending of this section. Since G/G' is discrete, we may assume that {g̅∈ G/G' | d(p̅',g̅p̅')=20D } = ∅, where p̅' is the image of p in X/G', otherwise we replace 20D by 20D+ϵ for some small ϵ. Then by Remark <ref>, {g̅∈ G/G” | d(p̅,g̅p̅)=20D } = ∅. We next show eGH approximation is actually a pseudo-group isomorphism. The eGH approximation ϕ̅_i:G_i/G_i'(20D) → G/G”(20D) is a pseudo-group isomorphism for all large i. In particular, G_i/G_i'(20D) is discrete. Since G/G”(20D) is discrete, the approximation map ϕ̅_i:G_i/G_i”(20D) → G/G”(20D) is a surjective homomorphism. We shall show that it is injective, then its inverse must be homomorphism . Assume g̅_i ∈ G_i/G_i'(20D) so that ϕ̅_i(g̅_i)=1. By the construction in Lemma <ref>, we can find a pre-image g_i ∈ G_i(20D) of g̅_i, which is close to an element g ∈ G”(20D). 
On the other hand, since G_i'(20D) converges to G”(20D), we may find h_i ∈ G_i'(20D) which is close to g ∈ G”(20D). In particular, h_ig_i^-1 is close to the identity map in G_i. Therefore h_ig_i^-1∈ S_i(20D) ⊂ G_i'(20D). Then g_i ∈ G_i' and g̅_i=1 ∈ G_i/G_i'. The next lemma shows that the quotient groups are isomorphic. G/G_i' is isomorphic to G/G” for all large i. Since ϕ̅_i:G_i/G_i'(20D) → G/G”(20D) is a pseudo-group isomorphism, the groupfication of G_i/G_i'(20D) is isomorphic to the groupfication of G/G”(20D). By Lemma <ref>, G_i(20D) is a determining set of G_i. Since G_i' is generated by S_i(20D) ⊂ G_i(20D), by Lemma <ref>, G_i/G_i'(20D) is a determining set of G_i/G_i'. Thus the groupfication of G_i/G_i'(20D) is exactly G_i/G_i'. We shall show that G/G”(20D) is determining, then G/G” is isomorphic to G_i/G_i'. Since (X/G”)/(G/G”)=X/G has diameter at most D, G/G”(20D) generates G/G”. Recall that there is a surjective homomorphism π from the groupfication of G/G”(20D) to G/G”. Assume that (π) has a nontrivial element saying [e_g_1e_g_2...e_g_k] where g_j ∈ G/G”(20D), then g_1g_2...g_k = 1 ∈ G/G”. We may find g_ij=ϕ̅_i^-1(g_j) ∈ G_i/G_i'(20D). Since the groupfication of G_i/G_i'(20D) is isomorphic to the groupfication of G/G”(20D), [e_g_i1e_g_i2...e_g_ik] in the groupfication of G_i/G_i'(20D) is also nontrivial. For large i, g_i1g_i2...g_ik∈ G_i/G_i' is close to g_1g_2...g_k = 1 ∈ G_i/G_i'. Since G_i/G_i'(20D) is discrete, g_i1g_i2...g_ik=1 and [e_g_i1e_g_i2...e_g_ik] is a nontrivial element in (π). This contradicts to that G_i/G_i'(20D) is a determining set. Thus G/G”(20D) is a determining set. The final step is to show G'=G”. G_i' converges to G” and G_i/G_i' ≅ G/G” for all large i. It suffices to show G”=G'. G' is a normal subgroup of G”. Fix large i, there is an eGH approximation ϕ_i: G_i(20D) → G(20D). Consider maps G_i(20D) ϕ_i→ G π'→ G/G' π”→ G/G”, where π',π” are quotient maps. Since ϕ_i is an approximation map, G/G' and G/G” are discrete, then π'ϕ_i and π”π'ϕ_i must be pseudo-group homomorphisms. Since G_i(20D) is a determining set of G_i, we may extend π'ϕ_i and π”π'ϕ_i to group homomorphisms G_i ϕ_i'→ G/G' π”→ G/G”, Since S_i(20D) ⊂ (π'ϕ_i) by the construction, G_i'=⟨ S_i(20D) ⟩ is contained in the (ϕ_i'). In particular, we have group homomorphisms G_i/G_i' ϕ̅_i'→ G/G' π”→ G/G”. Since ϕ(G_i(20D)) is ϵ_i dense in G(20D) and G/G' is discrete, π'(ϕ_i(G_i(20D))) contains G/G'(20D) which generates G/G'. Therefore ϕ_i' and ϕ̅_i' are surjective. On the other hand, Lemma <ref> implies that π”ϕ̅_i' is an isomorphism, thus π” must be a trivial map and G'=G”. G/G' is finitely presented since G/G'(20D) is a finite determining set of G/G'. Finally we show that G_i' is generated by G_i'(R + ϵ_i) where ϵ_i → 0. Notice that G_i' is generated by G_i'(20D). Assume there exists δ > 0 and g_i ∈ G_i'(20D) so that g_i ∉⟨ G_i'(R+δ) ⟩ for some large i. Passing to a subsequence, g_i → g ∈ G'. Since G' is generated by G'(R), we may find h_1,h_2,...,h_k ∈ G'(R) so that h_1h_2...h_k=g. Then we may find h_ij∈ G_i'(R+δ) close to h_i for each 1 ≤ j ≤ k. Then h_i1h_i2...h_iks_i=g_i where s_i ∈ G_i'(R+δ) is close to the identity. g_i ∈⟨ G_i'(R+δ) ⟩, a contradiction. § EXTENDING A GH APPROXIMATION In this section, we assume that (M_i,p_i,G_i) is a sequence of complete, connected and simply connected n-manifolds with Ric≥ -(n-1) and G_i is a closed subgroup of Iso(M_i) with diam(M_i/G_i) ≤ D. We do not assume that G_i is discrete so far. Suppose that (M_i,p_i,G_i) eGH⟶ (X,p,G). 
Then G is a Lie group and G/G_0 is discrete where G_0 is the identity component. By Theorem <ref>, there are normal subgroups G_i' ⊲ G_i converging to G_0. Moreover, G_i/G_i' is isomorphic to G/G_0. Let M_i'= M_i/G_i' and X'=X/G_0, then we have (M_i,p_i,G_i',G_i) @>eGH>> (X,p,G,G_0) @VVπ V @VVπ V (M_i',p'_i,G_i/G_i') @>eGH>> (X',p',G/G_0) We can identify G_i/G_i' ≅ G/G_0; actually the isomorphism map is an eGH approximation on the pesudo-groups (see Lemma <ref>) . Assume that f_i is the pointed ϵ_i-GH approximation from (M'_i,p'_i,G_i/G_i') to (X',p',G/G_0), then for any x ∈ B_1/(10ϵ_i)(p'_i) and g ∈ (G_i/G_i' )(1/(10ϵ_i)), d(f_i(gx),gf_i(x))< 2ϵ_i. We shall extend f_i to a global map which is a locally GH approximation. For i large enough, there exists a map f_i':M'_i → X' so that f_i'(M_i') is 3ϵ_i-dense in X' and |d(f_i'(x_1),f_i'(x_2))-d(x_1,x_2)| ≤ 2ϵ_i for all x_1,x_2∈ M_i' with d(x_1,x_2) ≤ 10D. We may assume 1/ϵ_i > 100D. Since the diameter of M_i'/(G_i/G_i')=M_i/G_i is at most D, for any x ∈ M_i', we can find g ∈ G_i/G_i' so that d(p',gx) ≤ 2D. Define f'_i(x)=g^-1 f_i(gx) where g^-1∈ G/G_0, g ∈ G_i/G_i'. The definition depends on the choice of g, we just use an arbitrary choice of g satisfying d(p',gx) ≤ 2D. We next show |d(f_i'(x_1),f_i'(x_2))-d(x_1,x_2)| ≤ 2ϵ_i for all x_1,x_2∈ M_i' with d(x_1,x_2) ≤ 10D; the proof below also implies that for a different choice of g (saying, g') in the definition of f_i', we have d(g^-1 f_i(gx), (g')^-1 f_i(g'x)) ≤ 2 ϵ_i. Thus by ignoring small difference, the definition of f_i' does not depend on the choice of g. By the definition of f_i, there exists g_1,g_2 ∈ G_i/G_i' so that d(p',g_j x_j) ≤ 2D and f'_i(x_j)=g^-1_j f_i(g_jx), j=1,2. Note that d(g_1g_2^-1p',p')= d(g_2^-1p',g_1^-1p') ≤ d(g_2^-1p',x_2)+d(x_1,x_2)+d(x_1,g_1^-1p') ≤ 20D. Thus g_1g_2^-1∈ G/G_0 (20D). Note that g_jx_j ∈ B_2D(p') and f_i is an eGH approximation, d(g_1g_2^-1 f_i(g_2x_2),f_i((g_1g_2^-1) g_2x_2)) ≤ϵ_i, Since g_1 is an isometric action and g_2^-1 f_i(g_2x_2) =f_i'(x_2), d(f_i'(x_2),g_1^-1f_i(g_1x_2)) ≤ϵ_i. On the other hand, since d(g_1x_2,p') ≤ d(g_1x_1,g_1x_2) + d(g_1x_1,p') ≤ 12D, by the definition of a GH approximation map we have |d(f_i(g_1x_2),f_i(g_1x_1)) - d(g_1x_2,g_1x_1)| ≤ϵ_i, so we have |d(g_1^-1f_i(g_1x_2),f_i'(x_1)) - d(x_1,x_2)| ≤ϵ_i. Thus we have |d(f'_i(x_1),f'_i(x_2)) - d(x_1,x_2)| ≤ |d(g_1^-1f_i(g_1x_2),f_i'(x_1)) - d(x_1,x_2)| + d(f_i'(x_2),g_1^-1f_i(g_1x_2)) ≤ 2ϵ_i. Then we show that f_i is 3ϵ_i-dense. For any y ∈ X', we can find g ∈ G/G_0 so that d(gy,p') ≤ 2D. Choose x ∈ M_i' so that d(f_i(x),gy)< ϵ_i. By the argument in the last paragraph, d(f_i'(g^-1x),g^-1f_i(x)) ≤ 2ϵ_i. Therefore d(y,f_i'(g^-1x)) ≤ 3ϵ_i. If in Theorem <ref> we further assume that G is discrete, then the global map M_i π→ M_i'=M_i/G_i' f_i'→ X=X' is a 3ϵ_i appropriation map on each 10D ball, since G_i' converges to the trivial map and is normal in G_i. Since G_0 can be generated by arbitrarily small neighborhood of the identity, by Theorem <ref>, G_i' is generated by G_i'(2ϵ_i) where ϵ_i → 0. Then we show that π_1(M_i',p_i') is generated by loops of length ≤ 2ϵ_i at p_i'. Let H_i ⊲ G_i' be the group generated by all isotropy actions in G_i' and (G_i')_0, the identity component in G_i'. Then M_i/H_i is the universal cover of M_i' with deck transformation group G_i'/H_i. Moreover, since G_i'/H_i is generated by G_i'/H_i (2ϵ_i), π_1(M_i',p_i') is generated by loops of length ≤ 2ϵ_i at p_i'. 
H_i is normal since the identity component is normal and g ∈ G_i' is an isotropy action at x ∈ M_i if and only if hgh^-1 is an isotropy action at hx for any h ∈ G_i'. By Remark <ref>, G_i'/H_i are discrete free isometric actions on M_i/H_i. G_i'/H_i(2ϵ_i) generates G_i'/H_i since G_i'(2ϵ_i) generates G_i'. We only need to show that M_i/H_i is simply connected. Note that M_i/H_i= (M_i/(G_i')_0)/(H_i/ (G_i')_0). First we prove that M_i/(G_i')_0 is simply connected. For any loop γ⊂ M_i/(G_i')_0 with γ(0)=p̅_i ∈ M_i/(G_i')_0, we may lift γ to a path γ̃ in M_i with ending points p_i,q_i and q_i=gp_i where g ∈ (G_i')_0. Since (G_i')_0 is path-connected, we may find g(t) ∈ (G_i')_0 , 0 ≤ t ≤ 1, so that g(1)=g and g(0)=1. Then g(t)γ̃(t) is a contractible loop since M_i is simply connected. The quotient image of g(t)γ̃(t) in M_i/(G_i')_0 is exactly γ, which must be contractible. Now we prove that M_i/H_i= (M_i/(G_i')_0))/(H_i/ (G_i')_0)) is simply connected. Notice that H_i/ (G_i')_0 is generated by isotropy actions of G_i'/(G_i')_0 on M_i/(G_i')_0. To simplify the notation, we may assume that (G_i')_0 is trivial and H_i is generated by isotropy actions. Given an loop γ⊂ M_i/H_i with γ(0)=p̅_i, we may lift γ to a path γ̃ in M_i with ending points p_i,q_i. If p_i=q_i, then γ̃ is a contractible loop, thus the quotient image γ=π(γ̃) is also contractible. Now we assume q_i=hp_i for some h ∈ H_i. By the definition of H_i, we may let h=h_1h_2...h_l so that h_j ∈ G_i' is an isotropy action at x_j ∈ M_i, 1 ≤ j ≤ l. Let c̃_j be a path from h_jh_j+2...h_l p_i to x_j, 1 ≤ j ≤ l. Then h_j^-1c̃_j^-1 is a path from x_j to h_j+1...h_l p_i. Then c̃= c̃_1 (h_1^-1c̃_1^-1) c̃_2 (h_2^-1c̃_2^-1)... c̃_l (h_l^-1c̃_l^-1) γ̃ is a contractible loop in M_i. The quotient image π(c̃) ⊂ M_i/H_i is also contractible. Note that π(c̃_j) cancels out π(h_j^-1c̃_j^-1). Thus γ is also contractible. X'=X/G_0 is simply connected. We use f_i':M'_i → X' and Lemma <ref>. Choose ϵ > 0 so that for any x ∈ X' and any loop in B_100ϵ(x) ⊂ X' is contractible in B_1(x); such ϵ exists since X' is strongly semi-locally simply connected and X'/(G/G_0)=X/G is compact. Let i be large enough so that ϵ_i < ϵ/100. For any loop γ in X', we can find a point-wise close loop γ_i ⊂ M_i' for large i. We may assume γ(0)=p' and γ_i(0)=p_i'. Since the fundamental group of M_i' is generated by loops contained in B_2ϵ_i(p_i'), γ_i is 2ϵ_i-contractible. Then the proof of Lemma <ref> shows that γ is contractible. We should mention that the construction in Lemma <ref> is local, so we can use the global map f_i' which locally is a GH approximation. § CONVERGENCE OF DISCRETE GROUPS The following generalized Margulis lemma, proved by Kapovitch and Wilking, is critical in studying the fundamental group of a manifold with Ricci lower bound. (<cit.>) In each dimension n there are positive constants C(n) and ϵ(n) such that the following holds for any complete n dimensional Riemannian manifold (M, g) with Ric ≥ -(n-1) on a metric ball B_1(p) ⊂ M. The image of the natural homomorphism π_1(B_ϵ(p), p) →π_1(B_1(p), p) contains a normal nilpotent subgroup N of index ≤ C. Moreover, N has a nilpotent basis of length at most n. A corollary of Theorem <ref> is that: let N_i be a sequence of n-manifolds with Ric≥ -(n-1) and diam≤ D, assume that their universal covers Ñ_i with fundamental groups G_i converge, (Ñ_i,p̃_i,G_i) eGH⟶ (X,p̃,G), then G_0 is a nilpotent Lie group. In this section, we shall show the above corollary also holds in a more general setting. 
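For a concrete picture of what this corollary allows, the simplest non-abelian nilpotent Lie group (a standard illustration, not part of the argument here) is the 3-dimensional Heisenberg group H^3(ℝ), the group of 3 × 3 upper triangular real matrices with unit diagonal. Its commutator subgroup is the central one-parameter subgroup of matrices whose only non-zero off-diagonal entry sits in the upper-right corner, so H^3(ℝ) is nilpotent of step 2; its integer lattice, the discrete Heisenberg group, has a nilpotent basis of length 3. Groups of this type are the standard examples to keep in mind for the subgroup N in Theorem <ref> and for the nilpotent limit groups appearing below.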
Let (X_i,p_i) be a sequence of proper geodesic spaces and G_i be a discrete subgroup of Iso(X_i). Suppose that (X_i,p_i,G_i) eGH⟶ (Y,p,G), and G is a Lie group. Then G_0 is a nilpotent Lie group. Our proof follows the idea in <cit.>, where Lemma <ref> was proved in the case that Y/G is compact. We shall apply the following structure theorem of an approximate group by Breuillard–Green–Tao. Recall for a group H, we call a finite symmetric subset A ⊂ H a (global) k-approximate group if there is a subset X ⊂ A^3 so that A^2 ⊂ XA and |X| ≤ k. (The structure of an approximate grouop, <cit.>) Let k ∈ℕ. Then there is k' ∈ℕ, depending on k, such that the following holds. Assume that H is a group generated by a finite symmetric set S containing the identity. Let A be a k-approximate group and S^k'⊂ A. Then there is a finite normal subgroup N of H and a subgroup H' ⊂ H containing N such that (i) H' has the index at most C(k) in H; (ii) H'/N is nilpotent of step at most C(k) and generated by at most C(k) elements; (iii) N ⊂ A^4. We shall prove Lemma <ref> by some sublemmas. On (Y,p,G), for any r>0 and a subset Y' ⊂ Y, define G(Y',r)= { g ∈ G | d(gx,x) ≤ r, ∀ x ∈ Y' }. There exists r,ϵ > 0, depending on the limit (Y,p,G), so that (i) G(B_r(p),10ϵ) is contained in G_0; (ii) G(B_r(p),10ϵ) contains no non-trivial subgroup; (iii) there is an closed set U in the Lie algebra of G_0 so that the exponential map from U to G(B_r(p),10ϵ) is diffeomorphic. Since G is a Lie group, we may find a neighborhood V ⊂ G_0 of the identity so that the lemma holds for V. Then we need to show G(B_r(p),10ϵ) ⊂ V for some r,ϵ. Assume that we can find g_i ∈ G(B_i(p),1/i)-V. Then g_i converges to 1 ∈ G but g_i is not contained in an open neighborhood of the identity map 1, a contradiction. Here we use the fact g_i converges to 1 with respect to compact-open topology if and only if g_i uniformly converges to 1 on each compact subset of Y. Choose ϵ_i → 0 so that the eGH distance between (X_i,p_i,G_i) and (Y,p,G) is less than ϵ_i. (Gap lemma) Given r,ϵ in Lemma <ref>, there exists ϵ_i' → 0 so that ⟨ G_i(B_r(p_i),ϵ_i') ⟩ is same as ⟨ G_i(B_r(p_i),ϵ) ⟩ for all large i. It suffices to show that for any fixed δ<ϵ and all i large enough, G_i(B_r(p_i),ϵ) ⊂⟨ G_i(B_r(p_i),δ) ⟩. Assume that there exists a fixed number δ, 0<δ<ϵ, passing to a subsequence, we can find g_i ∈ G_i(B_r(p_i),ϵ) - ⟨ G_i(B_r(p_i),δ) ⟩. for all i. Passing to subsequence if necessary, g_i converges g ∈ G(B_r(p),ϵ). By lemma <ref>, we may find v ∈ U so that exp(v)=g. We may choose a large integer N so that h=exp(v/N) ⊂ G(B_r(p),δ/2). Then h^N=g. Choose h_i ∈ G_i converging to h, then for large i, h_i ∈ G_i(B_r(p_i),δ), and h_i^N is Nϵ_i close to g_i. Thus g_i^-1h_i^N is Nϵ_i close to the identity map. When i is large enough, g_i^-1h_i^N ⊂ G(B_r(p_i),δ). Thus g_i ⊂⟨ G(B_r(p_i),δ) ⟩, a contradiction. We may assume ϵ_i'=ϵ_i. Passing to a subsequence, suppose that ⟨ G_i(B_r(p_i),ϵ) ⟩ converges a subgroup G_ϵ in G. Since G_ϵ contains a neighborhood of the identity in G, saying G(B_r(p),ϵ), thus G_0 ⊂ G_ϵ. Since only G_0 is considered, we may assume G=G_ϵ and G_i can be generated by G_i(B_r(p_i),ϵ) or G_i(B_r(p_i),ϵ_i). We always assume ϵ > 100ϵ_i. Choose a fixed number k so that G(B_r(p),4ϵ) can be covered by k left translates of G(B_r(p),ϵ/2), i.e., A_i is a k-approximation group. We shall show that (G_i(B_r(p_i),ϵ))^2 can be covered by k left translates of G_i(B_r(p_i),ϵ) for all i large enough. 
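For intuition on the notion just recalled, the simplest example of a k-approximate group (a standard one, not specific to this setting) is an interval of integers: let A = {-N,…,N}⊂ℤ. Then A+A = {-2N,…,2N} and, taking X = {-N,0,N}⊂ A ⊂ A^3, every element of A+A lies in X+A, so A is a 3-approximate group; in the argument below, the sets A_i = G_i(B_r(p_i),ϵ) play exactly this role.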
By assumption we can find g_j ∈ G(B_r(p),10ϵ) so that ∪_1 ≤ j ≤ k g_jG(B_r(p),ϵ/2) contains G(B_r(p),4ϵ). Choose g_j' ∈ G_i(B_r(p),11ϵ). We shall show ∪_1 ≤ j ≤ k g_j'G_i(B_r(p_i),ϵ) contains (G_i(B_r(p_i),ϵ))^2. Choose any h_i ∈ (G_i(B_r(p_i),ϵ))^2 ⊂ G_i(B_r(p_i),3ϵ), we can find h ∈ G(B_r(p),4ϵ) which is ϵ_i close to h_i. By assumption, h=g_jl for some 1 ≤ j ≤ k and l ∈ G(B_r(p),ϵ/2). We may find l_i ∈ G(B_r(p_i),2ϵ/3) which is ϵ_i close to l. Then h_i is 10ϵ_i close g_j'l_i. Thus we can find t ∈ G(B_r(p_i),10ϵ_i) so that h_i=g_j'l_it. On the other hand, l_it ∈ G_i(B_r(p_i),ϵ). The claim is proved. Thus we may apply Theorem <ref> for A_i=G_i(B_r(p_i),ϵ) and S_i= G_i(B_r(p_i),ϵ_i) when i is large enough. In Theorem <ref> we shall choose a normal subgroup G_i' ⊂ G_i of index at most C(k); we may assume G_i'=G_i, since the limit of G_i' is a normal subgroup of G of index at most C(k), thus contains G_0. There is a normal subgroup N_i of G_i and N_i ⊂ A_i^4. G_i/N_i is nilpotent of step at most C(k). Passing to a subsequence if necessary, N_i converges to a subgroup N ⊂ G. Moreover, N ⊂ G(B_r(p),10ϵ), thus N is trivial by Lemma <ref>. Consider the equivariant convergence (X_i,p_i,G_i,N_i) @>eGH>> (Y,p,G,Id) @VVπ V @VVπ V (X_i/N_i,p_i,G_i/N_i) @>eGH>> (Y,p,G). Since G_i/N_i is nilpotent with the step at most C(k) and G_i/N_i converges to G, G_0 must be nilpotent. Since G_0 is a nilpotent Lie group, it has a maximal compact subgroup T^k, which is a torus contained in the center. In the setting of Theorem A, X/T^k is simply connected. We have shown that X/G_0 is simply connected. Recall that G_0/T^k is diffeomorphic to some ℝ^l. We claim that G_0/T^k acts freely on X/T^k; in particular, G_0/T^k acts effectively, compare with Remark <ref>. If the claim holds, there is a fibration map G_0/T^k → X/T^k → X/G_0. By the exact sequence of homotopy groups, X/T^k is simply connected since X/G_0 is. Now we prove the claim. Assume that g̅∈ G_0/T^k fixes x̅∈ X/T^k. By Remark <ref>, we may find pre-images x ∈ X of x̅ and g ∈ G_0 of g̅ with gx=x. In particular, g ∈ (G_0)_x. Notice that the isotropy group (G_0)_x is a compact subgroup of G_0. Since T^k is the unique maximal compact subgroup of G_0, g ∈ T^k. Thus g̅ is trivial in G_0/T^k. Therefore G_0/T^k actions are free. Recall that for any complete and simply connected manifold (M,x) and r > 0, the fundamental group of the closed ball B̅_r(x) is finitely generated, thus there exits R > 0 so that any loop in B_r(x) is contractible in B_R(x). However, for the simply connected metric space (X/T^k,p̅), it is unknown for the author whether the fundamental group of a closed r ball is finitely generated since the topology of the boundary is tricky. However, if we keep away the boundary, we can prove the following. Fix r>0. Let p̅∈ X/T^k and I be the image of π_1(B̅_r (p̅),p̅) →π_1(B_r+1 (p̅),p̅). Then I is finitely generated. In particular, since X/T^k is simply connected, there exits R > 0 so that any loop in B_r(p̅) is contractible in B_R(p̅). By Lemma <ref>, X/T^k is strongly semi-locally simply connected. We can find 1/100 > ϵ > 0 so that for any x ∈B̅_r (p̅), any loop in B_100ϵ(x) is contractible in B_r+1(p̅). The following proof mimics the proof of that the fundamental group of a compact manifold is finitely generated. We can find a finite subset {x_j}⊂B̅_r(p̅) so that d(x_j_1,x_j_2) > ϵ for different j_1,j_2 and {B_10ϵ(x_j)} covers B̅_r(p̅). We connect x_j_1 and x_j_2 by a geodesic if d(x_j_1,x_j_2) ≤ 30ϵ, thus get a 1-complex K with vertexes {x_j}. 
We shall show any loop in B_r(p) is homotopic to a loop in K and the homotopy image is contained in B_r+1(p), thus I must be finitely generated since π_1(K) is. Now we choose any loop γ⊂B̅_r(p). We may find a division 0=t_0 < t_1 < t_2 < t_3... < t_k=1 so that γ([t_l,t_l+1]) is contained in a ϵ-ball for each 0 ≤ l ≤ k-1. For each l, assume that γ(t_l) is contained in B_10ϵ(x_j_l) (just choose one x_j_l if there are many), let c_l be a geodesic form γ(t_l) to x_j_l. Then c_0^-1γ([0,t_1])c_1 c_1^-1γ([t_1,t_2])c_2c_2^-1...c_k-1^-1γ([t_k-1,1])c_0 is homotopic to γ. Now we consider the path c_l^-1γ([t_l,t_l+1]) c_l+1 for each l. It is a path from x_j_l to x_j_l+1. Since |c_l| ≤ 10ϵ and d(γ(t_l),γ(t_l+1)) ≤ϵ, we have d(x_j_l, x_j_l+1) ≤ 22ϵ. By the definition of K, there exists a geodesic c from x_j_l to x_j_l+1 contained in K. Since any loop in B_100ϵ(x_j_l) is contractible in B_r+1(p̅), c_l^-1γ([t_l,t_l+1]) c_l+1 is homotopic to c, a 1-cell in K. Use the above argument for all 0 ≤ l ≤ k-1, γ is homotopic a loop in K. § ISOMETRIC CIRCLE ACTIONS We can prove Theorem A by Lemma <ref> and the following induction lemma for isometric circle actions T^1 on a proper geodesic metric space Y; we always assume that T^1 actions are effective. We denote circle actions by T^1 instead of S^1 to distinguish from the notation of a slice S. The following lemma claims the exact sequence holds π_1(T^1) →π_1(Y) →π_1(Y/T^1) → 0. even though T^1 actions may contain isotropy subgroups. Let (Y,p) be a proper geodesic metric space. Suppose that Y is strongly semi-locally simply connected. Assume that there exist closed isometric circle T^1 actions on Y and the quotient space is (Y/T^1,p̅). Choose any loop γ̃⊂ B_R(p) and π(γ̃)=γ⊂ Y/T^1. Assume that there exists a homotopy map H: [0,1] × [0,1] → B_R(p̅) ⊂ Y/T^1 and a loop γ' ⊂ Y/T^1 so that H(t,0)=γ(t), H(t,1)=γ'(t), H(0,s)=H(1,s)=γ(0)=γ'(0). Choose R'>0 so that the pre-image of B_R+1(p̅) is contained in B_R'-1(p). Then we can find a homotopy map H̃: [0,1] × [0,1] → B_R'(p) and a loop γ̃' so that H̃(t,0)=γ̃(t),H̃(t,1)=γ̃'(t), H(0,s)=H(1,s)=γ̃(0)=γ̃'(0), and π(γ̃')=γ'. Roughly speaking, Lemma <ref> claims that, given γ̃⊂ Y and π(γ̃)=γ⊂ Y/T^1 is homotopic γ' ⊂ Y/T^1, then γ̃ is homotopic to a loop γ̃' with π(γ̃')=γ'. In particular, if γ is contractible, γ̃ is homotopic to a loop contained in the T^1-orbit of a point. Note that we do not claim that H̃ is a lifting map of H on [0,1] × [0,1]. Actually we shall see H̃ is a lifting map of H on a 1-skeleton of a square decomposition of [0,1] × [0,1]; moreover, for each small square, H̃(∂□) is contractible in the T^1 orbit of a 1-ball. Lemma <ref> would fail if we consider a disconnected abelian isometric group. For example, ℤ_2 can act on a unit cycle T^1= {(x,y) ∈ℝ^2 | x^2+y^2=1 } by reflecting on y-axis and the quotient space is an interval. The reason is that we can not continuously modify a lifting path when the group is disconnected. We have shown that X/T^k is simply connected. In the torus T^k, we choose subgroups T^1 ⊲ T^2 ... ⊲ T^k so that T^j+1/T^j is a circle. For any loop γ⊂ B_r(p) ⊂ X with γ(0)=p, let γ_j be the image of γ and p^j be the image of p in X/T^j, 1 ≤ j ≤ k. Then γ_j is contained in a B_r(p^j). By Lemma <ref>, there exists R, depending on r, so that γ_k is contractible in B_R(p^k). Thus there exists H:[0,1] × [0,1] → B_R(p^k), H(t,0)=γ_k(t), H(t,1)=H(0,s)=H(1,s)=p^k. Now we may apply Lemma <ref> inductively for k circle actions π: X/T^j → X/T^j+1, 0 ≤ j ≤ k-1. 
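As a sanity check of the displayed exact sequence, consider the free action of T^1 on Y = T^2 = T^1 × T^1 by rotation in the first factor (an elementary example, not needed for the proof). Then Y/T^1 = T^1 and the sequence reads ℤ→ℤ^2 →ℤ→ 0: the first map is the inclusion of the orbit direction ℤ×{0}, the second is the projection killing it, and exactness in the middle and surjectivity on the right are immediate. The point of Lemma <ref> is that the surjectivity of π_1(Y) →π_1(Y/T^1) persists even when the T^1 action has isotropy, where Y → Y/T^1 is far from being a fibration.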
Finally we can find R'>0 and a homotopy map H̃: [0,1] × [0,1] → B_R'(p), H̃(t,0)=γ(t), H̃(1,s)=H̃(0,s)=p̃,π^k(H̃(1,s))=p^k, where π^k: X → X/T^k is the projection map. In particular, γ is homotopic to a loop in the T^k orbit of p and the homotopy image is contained B_R'(p). Now we explain how to prove Lemma <ref>. There are three cases, 1. T^1 actions are free on Y; 2. all isotropy groups are discrete; 3. the isotropy group is T^1 at some x ∈ Y. In the first case, we can apply the Covering Homotopy Theorem (see <cit.> Chapter II section 7) to get a lifting map H̃ of H. We shall prove for last two cases. §.§ T^1 actions with discrete isotropy groups For any subset Y' ⊂ Y, we use the notation T^1(Y') to be the T^1 orbit of Y'. Let us consider second case that all isotropy groups on Y are discrete. The idea of the proof is that we find a square decomposition K of [0,1] × [0,1] and lift H only on the 1-skeleton K^1, that is, π(H̃)=H on K^1. For each small square in the 1-skeleton, we want to find a lift of H(∂□), H̃(∂□), which is contractible in the T^1 orbit of a 1-ball. However, two adjacent squares in the square decomposition share a common edge, on which two lifting maps may not coincide with each other. The following lemma shows that we can continuously modifying one of lifting maps so that it coincides with the another. Choose any x ∈B̅_R'(p) and let S_x be a slice at x. If there are two paths c_1, c_2 ⊂ T^1(S_x) so that π(c_1)=π(c_2) ⊂ Y/T^1. Assume c_2(0)=g_θc_1(0) for some θ∈ [-π ,π)=T^1, there exists a unique continuous map f:[0,1] →ℝ so that f(0)=θ and c_2(t)=[f(t)]c_1(t) where [f(t)] is the image of f(t) in ℝ/⟨ 2π⟩ =T^1. We identity θ∈ (-π,π] with g_θ∈ T^1. We first show that the definition area of f can extend to a neighborhood of 0. Note that for any y ∈ S_x, T^1_y ⊂ T^1_x. Since the isotropy group T^1_x is discrete and normal, for a large number q, any non-trivial action in [-1/q,1/q] ⊂ T^1 has no fixed point on the T^1 orbit of S_x. Assume c_1(0) ∈ g'S_x, g' ∈ T^1, then c_2(0) ∈ g_θg'S_x. Therefore the partial orbit space (-1/(2q),(1/(2q)) (g_θg'S_x) is a neighborhood of c_2(0); here we identify (-1/(2q),(1/(2q)) with a open set in T^1. There exits ϵ > 0 so that c_2([0,ϵ]) ⊂ (-1/(2q),1/(2q)) (g_θg'S_x). Then for any 0 < t < ϵ, there exists θ_t ∈ (-1/(2q),1/(2q)) so that c_2(t)= g_θ_t+θc_1(t). θ_t must be unique, otherwise there would be another θ_t' ∈ (-1/(2q),1/(2q)) so that c_2(t)= g_θ_t'+θc_1(t). Then g_θ_t-θ_t'(c_1(t))=c_1(t), a contradiction. Define f(t)=θ + θ_t for 0 < t < ϵ, f is continuous since locally f(t)=π(c(t))+θ where π: (-1/(2q),1/(2q)) (g_θg' S_x) → (-1/(2q),1/(2q)) is continuous. Also f is unique on [0,ϵ]. The above argument shows that the defining area of f is open, now we show it is closed. Assume that f is defined on [0,T), 0 < T ≤ 1. We shall show that f is bounded on [0,T) and lim_t → T f(t) exists, then we may define f(T) by the continuity of f. We first show that f([0,T)) is bounded. Assume that there exists t_i → T^-1 so that f(t_i) →∞, passing to a subsequence, we may assume [f(t_i)] converges to g_1. Since f is continuous, we can find t_i' → T so that f(t_i')-f(t_i) = 1/q. Then [f(t_i')] converges to g_2 with g_2-g_1 = 1/q. On the other hand, c_2(T)= lim_i →∞ [f(t_i)]c_1(t_i) = lim_i →∞ [f(t_i')]c_1(t_i'). Thus g_1c_1(T)=g_2c_1(T), g_1/qc_1(T)=c_1(T), a contradiction to the choice of q. Now we prove the existence of lim_t → T f(t). Assume that the limit does not exist. Since f is bounded on [0,T). 
We may find two subsequences t_1i→ T and t_2i→ T so that lim_i →∞ f(t_1i) and lim_i →∞ f(t_2i) exist but they are different. We may assume lim_i →∞ f(t_2i)>lim_i →∞ f(t_1i)+ϵ for some ϵ<1/q. By the continuity of f, we may find t_3i→ T so that f(t_3i)=f(t_1i)+ϵ. Then c_2(T)= lim_i →∞ [f(t_1i)]c_1(t_1i) = lim_i →∞ [f(t_3i)]c_1(t_3i)=g_ϵc_2(T), a contradiction to the choice of q.

Assume that c is a loop in Y and f:[0,1] →ℝ is continuous with f(0)=f(1)=0; then c is homotopic to [f(t)]c(t).

The homotopy map is H(t,s)=[sf(t)]c(t).

For any point x ∈B̅_R'(p), we can find a slice S_x at x. We may assume that the orbit T^1(S_x) is path-connected; otherwise we may use a smaller slice S_x' so that any two points in S_x' can be connected by a path in T^1(S_x). Since B̅_R'(p) is compact, we may find finitely many slices S_x_j so that their orbits T^1(S_x_j) cover B̅_R'(p), where x_j ∈B̅_R'(p). Then the sets π(S_x_j) cover B̅_R(p̅). π(S_x_j) is open in Y/T^1 since its pre-image T^1(S_x_j) is open in Y. Recall that we always assume that S_x_j is contained in a small ball; in particular, any loop in S_x_j is contractible in B_1(x_j). By assumption, γ is homotopic to γ' by H: [0,1] × [0,1] → B_R(p̅). We construct a square decomposition (see Lemma <ref>) K on [0,1] × [0,1] so that for any small square □, the image H(□) is contained in one of the sets π(S_x_j), where x_j is constructed above. We may fix x_j for each square. We first show that for any small square □⊂ K, there exists a lifting loop c̃⊂ Y of c=H(∂□) ⊂ Y/T^1 so that c̃ is contractible in T^1(B_1(x_j)). Since we assume that T^1(S_x_j) is path-connected, π(S_x_j) is path-connected. We may find two paths c_1,c_2 ⊂π(S_x_j) so that c_1(0)=c_2(1)=π(x_j), c_1(1)=c(0)=c(1), c_2(0)=c(1/2).

[Figure: the loop c=H(∂□) drawn as a circle, with the base point c(0)=c(1) on the left, the point c(1/2) on the right, and the two paths c_1, c_2 joining the center π(x_j) to c(0) and c(1/2) along the horizontal diameter.]

We lift the loop at π(x_j), c_1c([0,1/2])c_2 (the upper half of the circle together with the horizontal segments in the picture), to a loop in S_x_j, say c̃_1c̃([0,1/2])c̃_2; it must be a loop since x_j is the only pre-image point of π(x_j) in S_x_j. The lifting loop is contractible in B_1(x_j) by our choice of the slice. We can also lift another loop at π(x_j), c_1c([1,1/2])c_2 (the lower half of the circle together with the horizontal segments in the picture; [1,1/2] means we take the reverse orientation), to get a loop c̃'_1c̃'([1,1/2])c̃'_2 ⊂ S_x_j, contractible in B_1(x_j). c̃_2c̃_1 may not coincide with c̃_2'c̃_1'. However, since they are lifting paths of the same path in Y/T^1, by Lemma <ref>, we can find f so that [f]c̃_2'c̃_1'=c̃_2c̃_1. We can extend f by cutting off to 0 so that it is defined on c̃'_1c̃'([1,1/2])c̃'_2; we may imagine f as a function on c_1,c_2 and extend it to c([1,1/2]) using a cut-off function. By Lemma <ref>, [f](c̃'_1c̃'([1,1/2])c̃'_2) is homotopic to c̃'_1c̃'([1,1/2])c̃'_2, thus is also contractible (the homotopy image is contained in T^1(B_1(x_j))). Since [f]c̃_2'c̃_1'=c̃_2c̃_1, the loop c̃_1c̃([0,1/2])c̃_2 ([f](c̃'_1c̃'([1,1/2])c̃'_2))^-1=c̃([0,1/2]) ([f]c̃'([1/2,1])) (after cancelling adjacent inverse paths) is a lifting loop of c and is contractible in T^1(B_1(x_j)).

Now we define H̃(t,0)=γ̃(t), H̃(0,s)=H̃(1,s)=γ̃(0). Our goal is to define H̃ on the boundary ∂□⊂ K^1 of each small square so that it is a lifting loop of H(∂□) and H̃(∂□) is contractible in the T^1 orbit of a 1-ball.
We lift on each square, starting at the first column, from bottom to top, then the second column, etc. Consider □_n1 (the square in the first column and last row) with boundary l_1,l_2,l_3,l_4; see the picture below.

[Figure: the corner square □_n1 of the 1-skeleton K^1, with bottom edge l_1, right edge l_2, top edge l_3 and left edge l_4.]

H̃ is defined on l_1,l_4 from the definition of H̃ and not defined on l_2,l_3. We have shown that there exists a contractible lifting loop of H(∂□_n1), say c_l_4l_1c_l_2l_3, where c_l_4l_1 is the lift of H(l_4l_1) and c_l_2l_3 is the lift of H(l_2l_3). If c_l_4l_1=H̃(l_4l_1), then we define H̃ on l_2,l_3 by c_l_2l_3. Otherwise we apply Lemma <ref> to get f so that [f]c_l_4l_1=H̃(l_4l_1). We can extend f by cutting off to 0 as above; then [f](c_l_4l_1c_l_2l_3) is contractible and coincides with H̃ on l_4l_1. Now we lift H on l_2,l_3 by [f]c_l_2l_3. Since H̃(∂□_n1)= [f](c_l_4l_1c_l_2l_3) is contractible (and the homotopy image is contained in T^1(B_1(x_j)) for some x_j ∈ Y), we can extend H̃ to a continuous map on □_n1. We can repeat the above process for all squares. We get a homotopy map with image contained in B_R'(p). Then H̃ is a lift of H on the 1-skeleton K^1. Let γ̃'(t)=H̃(t,1); then γ̃ is homotopic to γ̃' by H̃ and π(γ̃')=γ'.

§.§ T^1 has fixed points

In the case that T^1 fixes some points, we need to slightly change the proof in the last subsection. We still lift H(∂□) to the orbit of a slice T^1(S_x_j). If the isotropy group at x_j is discrete, we just lift as in the last subsection: we first find a contractible lifting loop and patch it on the edges of ∂□ where H̃ has already been defined. When T^1 fixes x_j, we directly construct a lifting loop compatible with the edges where H̃ has already been defined, and then show that a lifting loop is always contractible. Choose finitely many slices S_x_j and a square decomposition of [0,1] × [0,1] as in the last subsection. Each H(∂□) is contained in one of the sets π(S_x_j). If the isotropy group T^1_x_j is discrete, the lifting process in the last subsection still works. Now we assume the isotropy group T^1_x_j=T^1 for some x_j and H(∂□) ⊂π(S_x_j). We shall lift H(∂□). Suppose that the boundary of □ consists of the four 1-cells l_1,l_2,l_3,l_4. By the lifting assumption, H̃ has been defined on two or three connected 1-cells of □. Thus we may assume that H̃ is defined on l_4,l_1 but not on l_2,l_3.

[Figure: the square □ in K^1 with edges l_1 (bottom), l_2 (right), l_3 (top) and l_4 (left); the endpoints of l_4l_1 are marked y_1 and y_2.]

Assume the two end points of l_4l_1 ⊂ [0,1] × [0,1] are y_1 and y_2. We lift H(l_2l_3) starting at H̃(y_2); then we get a path c ⊂ Y with c(0)=H̃(y_2). If c(1)=H̃(y_1), we define H̃(l_2l_3) by c. Otherwise assume that g_θ(c(1))=H̃(y_1) for some θ∈ (0,2π). Define f(t)=tθ; then [f(t)]c(t) is a path from H̃(y_2) to H̃(y_1) and is a lifting path of H(l_2l_3). Now we define H̃(l_2l_3) by [f]c. Then H̃(∂□) is a lifting loop of H(∂□). We next prove that H̃(∂□) is contractible in B_1(x_j). Since the isotropy group is T^1, T^1(S_x_j)=S_x_j is a neighborhood of x_j; any lifted loop of H(∂□) is contained in S_x_j, thus contractible in B_1(x_j) by our choice of the slice. Now we can finish the proof by lifting H on the boundary of each square one by one, in the same way as in the last subsection.

§ PROOF OF THEOREM B

We prove Theorem B in this section.
The original proof uses the following lemma proved by Santos and Zamora; an alternative proof by Prof. Rong is given at the end of this section.

(see section 2.8 and Theorem 81 in <cit.>) Assume that X_i are simply connected proper geodesic metric spaces and G_i is a discrete subgroup of Iso(X_i) with diam(X_i/G_i) ≤ D. Suppose that (X_i,p_i,G_i) eGH⟶ (X,p,G) and G is a non-compact Lie group; then |G_i| is infinite for all large i.

(<cit.>) Fix n and D; there exists D' so that the following holds. For any simply connected compact n-manifold M with Ric≥ -(n-1), if there is a discrete isometric subgroup G so that diam(M/G) ≤ D, then diam(M) ≤ D'.

Assume that there is a contradicting sequence of simply connected compact manifolds (M_i,p_i) with Ric≥ -(n-1) and diam(M_i) →∞, with discrete isometric subgroups G_i so that diam(M_i/G_i) ≤ D. Passing to a subsequence if necessary,

(M_i, p_i, G_i)    --eGH-->   (X, p, G)
       |π                         |π
(M_i/G_i, p̅_i)     --GH-->    (X/G, p̅).

Since X is noncompact and X/G is compact, G must be a noncompact Lie group. Thus for large i, |G_i| is infinite, which is impossible since G_i is a discrete isometric subgroup on a compact manifold.

Since M/G is compact, M splits as ℝ^k × N where N is a simply connected compact (n-k)-manifold with non-negative curvature. Moreover Iso(M)= Iso(ℝ^k) ×Iso(N). Define π^1 :Iso(M) →Iso(ℝ^k), π^2 :Iso(M) →Iso(N). We shall show that N admits a discrete isometric subgroup so that the diameter of the quotient space is at most C(n)D; then by Corollary <ref>, diam(N) ≤ D'(D,n). Note that π^2(G) is not necessarily discrete in Iso(N), and we take H to be the closure of π^2(G); then diam(N/H) ≤ D. Let K be the kernel of π^1:G →Iso(ℝ^k); then we can identify K=π^2(K), which is a discrete subgroup (thus finite) of Iso(N). π^1(G) ≅ G/K is a discrete subgroup of Iso(ℝ^k). Since G/K acts cocompactly on ℝ^k, by the Bieberbach theorem, there is an abelian normal subgroup T of G/K of index at most C(n). Let G' be the preimage π^-1(T) in G, where π:G → G/K is the quotient map. G' is a normal subgroup of G of index at most C(n) and therefore π^2(G') is a normal subgroup of π^2(G) of index at most C(n). Let H' be the closure of π^2(G') in Iso(N); we claim that H' is a normal subgroup of H of index at most C(n). In particular, the diameter of N/H' is at most C(n)D.

Now we prove the claim. Choose any g ∈ H; we may find g_j ∈ G so that π^2(g_j) converges to g. Passing to a subsequence if necessary, we may assume that all g_j have the same image in G/G', since |G/G'| ≤ C(n), say g_j=hg_j' where h ∈ G and g_j' ∈ G'. Then π^2(g_j') converges to π^2(h)^-1g. In particular, π^2(h)^-1g ∈ H'. Therefore H=⟨π^2(G),H' ⟩. Then |H/H'|=|π^2(G)/(π^2(G) ∩ H')| ≤ |π^2(G)/π^2(G')| ≤ |G/G'| ≤ C(n).

Since [G',G'] ⊂ K and K is discrete, [H',H'] ⊂ K as well. Notice that H' is a compact Lie subgroup of Iso(N) and H'/[H',H'] is abelian. Thus H'/[H',H'] ≅ S × T^l where S is a discrete abelian group and T^l is an l-torus for some l. For any large integer p, we may choose a discrete cyclic subgroup ℤ_p of a circle group; in particular, ℤ_p^l is a discrete subgroup of T^l. Let H”⊲ H' be the pre-image of S ×ℤ_p^l ⊲ H'/[H',H']; then H” is discrete since S ×ℤ_p^l and [H',H'] ⊂ K are discrete. Choosing p large enough, diam(N/H”) ≤ 2diam(N/H') ≤ C(n)D. Then we can apply Corollary <ref>, thus diam(N) ≤ D'(n,D).

An alternative proof of Theorem B was given by Prof. Rong. We start with a preliminary lemma.
Assume that X is a proper geodesic metric space and closed isometric group actions G on X are transitive and free, then X is homeomorphic to G (with compact-open topology). Fix x ∈ X, since G-actions are free and transitive, there is a bijection map f(g)=gx from G to X. It suffices to show that f is a homeomorphism map. Choose any open set X' ⊂ X, f^-1(X') is open by the definition of the compact-open topology. Thus f is continuous. Now we prove that f is an open map. Choose any open set V ⊂ G, we may assume that V is the set of actions which map a compact set K ⊂ X to a subset of an open set U ⊂ X. We need show that f(V) is open. Choose any g ∈ V and we shall prove that f(V)=Vx contains an open neighborhood of f(g)=gx. We claim that there exists r' > 0 so that for any h ∈ G with d(hx,gx) ≤ r', then hK ⊂ U, in particular h ∈ V. If the claim holds, f(V) contains B_r'(gx), thus f is an open map. Now we prove the claim. Assume that there exists h_i ∈ G so that d(h_ix,gx) ≤ 1/i while h_ix_i ∉ U for some x_i ∈ K. Since d(h_ix,gx) ≤ 1/i, all h_i are contained in a compact subset of G, since X is proper and G are isometric. Passing to a subsequence if necessary, h_i converges to an isometric map h ∈ G and x_i converges to y ∈ K. Then hx=gx thus h=g since G is free. On the other hand, since X-U is closed, h_ix_i → gy ∉ U, a contradiction to gK ⊂ U. Assume that Theorem B is false, thus we may find a sequence of complete, connected and simply connected manifolds M_i=ℝ^k × N_i with N_i compact (we may take k same by choosing a subsequence), and a discrete subgroup G_i of Iso(M_i) so that diam(M_i/G_i) ≤ D and r_i=diam(N_i) ≥ i. Consider the blowing down sequence, (r_i^-1 M_i,p_i,G_i) @>eGH>> (X,p,G) @VVπ V @VVπ V (r_i^-1 M_i/G_i,p̅_i) @>GH>> X/G=p̅. r_i^-1 M_i= ℝ^k × r_i^-1 N_i and r_i^-1 N_i is a simply connected manifold with diameter 1. Passing to a subsequence, r_i^-1 N_i converges to Y and X=R^k × Y. By Theorem <ref>, Y and X are simply connected. Notice that X/G is a point, thus G is transitive. Notice that X/G_0 is connected and (X/G_0)/(G/G_0) is a single point. Since G/G_0 is discrete, X/G_0 must be a single point. By Theorems <ref> and <ref>, we may assume G=G_0; otherwise consider G_i' ⊂ G_i converging to G_0. We shall show that G is free, thus X is homeomorphic to G. G=G_0 is nilpotent by Lemma <ref>. Consider any isotropy group, saying G_x at x ∈ X. By Lemma <ref>, since G_x is compact, G_x commutes with G. Choose any g ∈ G_x and y ∈ X, since G is transitive, we may find h ∈ G with y=hx. Since gh=hg, gy=ghx=hgx=hx=y. Thus g can only be the identity map. Thus G is free. Since G is transitive and free, G is X is homeomorphic to G. X=R^k × Y where Y is compact metric space with diameter 1. Iso(X)= Iso(ℝ^k) ×Iso(Y). Fix y ∈ Y, let K be the subset of g ∈ G so that g(0,y) ∈ (0,Y) where 0 ∈ℝ^k. Then for any g=(g_1,g_2) ∈ K where g_1 ∈Iso(ℝ^k), g_2 ∈Iso(Y), g_1(0)=0, thus g(0,Y)=(0,Y). In particular, K is a compact subgroup of G and homeomorphic to Y. However, a connected and simply connected nilpotent group G must contain no compact non-trivial subgroup, a contradiction. § APPLICATIONS AND CONJECTURES Assume that (M,g) is a compact n-manifold with diam≤ D and Ric≥ 0. Let M̃ be the universal cover of M and choose any p̃∈M̃. A corollary of Theorem B is that, for any loop γ⊂ B_1(p̃), γ is contractible in B_D'(n,D)(p̃); the diameter of the homotopy image is bounded. A natural question is whether we can control the diameter of the homotopy image if we only assume Ric≥ -(n-1). 
A partial result can be derived from Theorem A.

For any fixed D,V,δ>0 and n ∈ℕ, we can find R=R(n,D,V,δ) so that the following holds. Let (M,g) be an n-dimensional Riemannian manifold with diam≤ D, Ric≥ -(n-1), Vol≥ V > 0. Let (M̃,g̃) be the Riemannian universal cover of (M,g). Then for any point p̃∈M̃ and any loop γ⊂ B_1(p̃), γ is δ-contractible in B_R(p̃).

Assume that there is a contradicting sequence M_i satisfying diam≤ D, Ric≥ -(n-1), Vol≥ V > 0 while there exists a loop γ_i ⊂ B_1(p̃_i) ⊂M̃_i so that γ_i is not δ-contractible in B_R_i(p̃_i), R_i →∞. Let G_i be the fundamental group of M_i. Passing to a subsequence if necessary, we have the equivariant convergence

(M̃_i, p̃_i, G_i)   --eGH-->   (Y, p̃, G)
       |π                         |π
(M_i, p_i)        --GH-->     (X, p).

Since X is non-collapsing, the identity component of G must be trivial. By Theorem A, Y is simply connected. By Lemma <ref>, we can find R_0 so that any loop in B_1.1(p̃) is contractible in B_R_0(p̃). On the other hand, we assume that γ_i is not δ-contractible in B_R_i(p̃_i). We may choose i large enough so that R_i > R_0+1 and the equivariant GH distance ϵ_i < min{δ/10,1/(10R_0+10) }. Choose any loop γ⊂ B_1.1(p̃) close to γ_i. Then γ is contractible in B_R_0(p̃). By Lemma <ref>, γ_i is 10ϵ_i-contractible in B_R_0+1(p̃_i), a contradiction.

We next consider the limit of a normal covering space of non-collapsing M_i with almost non-negative Ricci curvature.

Assume that {(M_i,p_i)}_i ∈ℕ is a sequence of n-dimensional Riemannian manifolds with diam≤ D, Ric≥ -1/i and Vol≥ V. Let M̃_i' be a normal cover of M_i and choose p̃_i' ∈M̃_i' to be a pre-image point of p_i. Assume that (M̃_i',p̃_i') GH⟶ (X',p̃'). Then for i large enough, there is a subgroup H_i' of H_i=π_1(M̃_i') so that there is a surjective homomorphism H_i' →π_1(X').

Let G_i be the fundamental group of M_i, H_i be the fundamental group of M̃_i' and M̃_i be the universal cover of M_i. Passing to a subsequence if necessary,

(M̃_i, p̃_i, G_i, H_i)      --eGH-->   (Y, p̃, G, H)
         |π                               |π
(M̃_i', p̃_i', G_i/H_i)      --eGH-->   (X'=Y/H, p̃', G/H)
         |π                               |π
(M_i, p_i)              --GH-->      (X, p).

Since G is discrete, Y is simply connected by Theorem A. By Theorem <ref>, there is a normal subgroup G_i' converging to 1 ∈ G so that the eGH approximation ϕ_i: G_i/G_i'(1/(10ϵ_i)) → G(1/(10ϵ_i)) extends to a group isomorphism ϕ_i: G_i/G_i' ≅ G. By the generalized Margulis Lemma in <cit.> or <cit.>, G_i is virtually nilpotent. Thus G ≅ G_i/G_i' is virtually nilpotent and finitely generated, since G is generated by G(20D). Recall that any subgroup of a finitely generated nilpotent group is finitely generated, and any subgroup of finite index in a finitely generated group is finitely generated. Thus H is finitely generated. For large i, ϕ_i(⟨ H_i,G_i' ⟩ /G_i') contains all generators of H, thus contains H. Let H_i' = ϕ_i^-1(H) ∩ H_i for large i. Then H_i'/(H_i' ∩ G_i') ≅ H. On the other hand, by the proof of Lemma <ref>, π_1(X') = H/H' where H' is a normal subgroup of H generated by isotropy actions. Therefore there is a surjective homomorphism from H_i' to π_1(X').

We list some conjectures.

In Theorem A, X is simply connected.

In Theorem A, if we only assume that G_i is closed but not discrete, we have shown that X/G_0 is simply connected and X is not necessarily simply connected (in Example <ref>, (S^3,g_i) is homogeneous while the limit is not simply connected). We conjecture that π_1(X,p) is generated by loops contained in the G_0 orbit of p, i.e., Lemma <ref> holds not only for T^1 but also for a general connected Lie group.
Let M_i be a sequence of n-manifolds with Ric≥ 0 and diam≤ D. Let (M̃_i',p_i') be a normal cover of M_i. If (M̃_i',p_i') GH⟶ (X',p'), then there is a subgroup H_i' of π_1(M̃_i') with a surjective homomorphism H_i' →π_1(X') for large i.
http://arxiv.org/abs/2307.04328v1
20230710034732
Where to Drop Sensors from Aerial Robots to Monitor a Surface-Level Phenomenon?
[ "Chak Lam Shek", "Guangyao Shi", "Ahmad Bilal Asghar", "Pratap Tokekar" ]
cs.RO
[ "cs.RO", "cs.DM" ]
Where to Drop Sensors from Aerial Robots to Monitor a Surface-Level Phenomenon? This work is supported in part by National Science Foundation Grant No. 1943368. ^* indicates equal contribution and authors are listed alphabetically Chak Lam Shek^*, Guangyao Shi^*, Ahmad Bilal Asghar, and Pratap Tokekar University of Maryland, College Park, MD 20742 USA [cshek1, gyshi, abasghar, tokekar]@umd.edu August 12, 2023 ================================================================================

We consider the problem of routing a team of energy-constrained Unmanned Aerial Vehicles (UAVs) to drop unmovable sensors for monitoring a task area in the presence of stochastic wind disturbances. In prior work on mobile sensor routing problems, sensors and their carrier are one integrated platform, and sensors are assumed to be able to take measurements at exactly the desired locations. By contrast, airdropping the sensors onto the ground can introduce stochasticity in the landing locations of the sensors. We focus on addressing this stochasticity in sensor locations from the path planning perspective. Specifically, we formulate the problem (Multi-UAV Sensor Drop) as a variant of the Submodular Team Orienteering Problem with one additional constraint on the number of sensors on each UAV. The objective is to maximize the Mutual Information between the phenomenon at Points of Interest (PoIs) and the measurements that sensors will take at stochastic locations. We show that such an objective is computationally expensive to evaluate. To tackle this challenge, we propose a surrogate objective with a closed-form expression based on the expected mean and expected covariance of the Gaussian Process. We propose a heuristic algorithm to solve the optimization problem with the surrogate objective. The formulation and the algorithms are validated through extensive simulations.

§ INTRODUCTION

Multi-robot systems have been widely used in scientific information gathering, including exploring the ocean <cit.>, tracking algal blooms <cit.>, and monitoring soil <cit.>. The planning problem on this topic is usually named Informative Path Planning (IPP), in which the research focus is on how to design planning algorithms to coordinate multiple robots to collect as much useful information as possible given the limited onboard resources (e.g., sensing and battery). In some cases, the robotic platform and the sensors for scientific monitoring are integrated systems and are treated as mobile sensors as a whole <cit.>. In other cases, the robotic platforms are treated as carriers of sensors <cit.>, and they are separable. The research efforts for such cases are mainly devoted to finding collaborative routing strategies for these mobile platforms to serve the sensors and finish the sampling tasks. Our research is also along this line, and we are interested in how to airdrop sensors to an area of interest with a team of Unmanned Aerial Vehicles (UAVs). Specifically, we consider the problem of airdropping multiple sensors to the ground with a team of budget-constrained UAVs to reduce the uncertainty at Points of Interest (PoIs), as shown in Fig. <ref>. If the UAVs could precisely drop the sensors to the desired locations, such a problem would be closely related to the classic Team Orienteering Problem (TOP) <cit.>.
However, due to wind disturbances, when we release one sensor from the UAV, its landing location, i.e., the sampling location, is stochastic. This is the main difference from the existing research on mobile robotic sensors, in which authors usually assume that robots can take samples at precisely the desired location. Such a difference requires rethinking the underlying optimization for planning. To this end, we propose a new variant of the TOP for airdropping sensors with UAVs, in which the stochasticity of the sensor landing position is explicitly considered. However, the resulting optimization objective is computationally expensive to evaluate. To address this challenge, we resort to a Gaussian approximation approach <cit.> to obtain a surrogate objective with a closed-form expression. With this surrogate objective, we show that the problem can be solved in polynomial time and near-optimally. In summary, the main contributions of this paper are:
* We propose a variant of the Submodular Team Orienteering Problem to model the sensor dropping problem with aerial robots.
* We propose a computationally efficient surrogate objective function for the proposed problem and a heuristic algorithm to solve it.
* We demonstrate the effectiveness of our formulation and algorithm through simulations.
The rest of the paper is organized as follows. We first give a brief overview of the related work in Section <ref>. Then, we explain the problem setup and formulation in Section <ref>. We introduce the technical approach in Section <ref> and validate the formulation and the proposed framework in Section <ref>.

§ RELATED WORK

In this section, we present the work most closely related to ours. We first discuss the related work on airdropping sensors, followed by stationary sensor placement and mobile sensor planning, and finally estimating stationary fields with Gaussian Processes.

§.§ Airdropping sensors

Dropping resources from an aerial vehicle has long been of interest, particularly for military and search-and-rescue operations. For example, in military resupply missions, aircraft are required to accurately deliver supplies to the target areas, taking into account geological factors and weather conditions. Extensive research has been conducted on low-level optimization of the release trajectory to achieve high precision in airdrop operations <cit.>. In this work, we focus on the complementary high-level planning of where to drop the sensors from multiple UAVs to monitor a surface-level phenomenon. We abstract away the low-level trajectory control by assuming that, for any given airdrop trajectory planner, the associated uncertainty of the landing position of the sensor is known. Specifically, we focus on route-level planning for multiple UAVs to deploy multiple sensors to the area of interest for environmental monitoring applications. Our work is closely related to that of Gerlach et al. <cit.>. They formulate the problem of dropping multiple payloads to multiple targets as a Traveling Salesperson Problem (TSP). However, there are two key differences between their work and ours. First, our objective is to reduce the uncertainty at Points of Interest (PoIs) by dropping sensors, and we use an information-theoretic metric. In contrast, the objective in <cit.> is to minimize the risk encountered by the soldiers. Second, our problem involves multiple energy-constrained UAVs, which cannot be modeled as a TSP or its variants.
§.§ Sensor Placement and Mobile Sensor Planning The sensor placement problem aims to maximize the information gain or sensing quality by strategically selecting sensor deployment locations. The typical approach is to model the phenomenon as a Gaussian Process <cit.> and use information theoretic measures for placing the sensors. The foundational work was done by Krause el al. <cit.> who showed that the partial monotonicity and submodularity allows a greedy placement to achieve a constant-factor approximation algorithm. This work was later extended to mobile sensor planning (also termed as informative path planning). Binney et al. <cit.> introduced the additional constraint of identifying a feasible path that connects these selected sensing locations. One approach to finding such paths is to convert the problem into an orienteering instance with submodular rewards. In <cit.> this problem is solved by constructing an additive approximation for the coverage objective to find a UAV path for image acquisition. A recursive greedy algorithm <cit.> is used in <cit.> to solve the submodular orienteering problem for informative path planning. This approach provides guarantees for the submodular objective but runs in quasi-polynomial time, limiting its use for large problem instances. In the context of a multi-robot setting, the orienteering problem can be solved iteratively, where the single robot performance guarantee can be extended to the multi-robot scenario <cit.>. Our work closely aligns with this body of work on informative path planning with a key difference. Because we are airdropping sensors, the exact sensing location depends on the wind field and is not known, unlike existing work. We show how to deal with this additional source of uncertainty. §.§ GP with Uncertain Inputs We use Gaussian Processes <cit.> to model the spatial function that is to be estimated by the sensors. Since we do not know the exact locations the sensors will fall at before planning UAV paths, the input to GP regression is uncertain. It is shown that the predictive distribution for Gaussian processes with uncertain inputs may not be Gaussian in general in <cit.>. Various approaches have been used to deal with input uncertainty in GPs. In the Bayesian approach, the distribution with uncertain input locations can be obtained by integrating over the uncertainty of the locations  <cit.>. However, these integrals are analytically intractable in general. Taylor expansion about the uncertain locations is used in <cit.> to present an approximate method that requires the derivative of the mean of f. The Gaussian Approximation method <cit.> assumes that the posterior distribution is Gaussian and finds its expected mean and expected covariance by integrating over the uncertainty of the locations. For certain kernel functions, these co-variances can be computed analytically. We employ the Gaussian approximation method in this paper to handle the random sensor locations. § PROBLEM STATEMENT Consider a weighted graph G = (V, E), where the vertex set V represents locations that can be visited by a team of m UAVs. The weight w(u,v) of an edge (u, v) ∈ E represents the time taken or energy spent by the UAVs to travel from vertex u to vertex v. Let (x_v, y_v, z_v) represent the coordinates of vertex v. Each vertex corresponds to a location where one of the UAVs can drop a sensor onto the ground below to observe the spatial field. 
The sensor's landing position on the surface, denoted by q_v, can vary depending on the wind conditions at the drop location v and the height of the drop location z_v. We assume that q_v follows a normal distribution, specifically q_v ∼𝒩(q̅_v, Σ_v), and that q̅_v and Σ_v are known for each v∈ V. Each UAV i∈[m] has a given number of sensors k_i and a limited amount of time (or energy) T_i to visit some locations in V and to drop the sensors from those locations. The path of UAV i must start and end at its designated depot location r_i∈ V. The purpose of dropping sensors is to observe the value of a spatial function f at specific points of interest (POI) U on the ground. Each sensor obtains a measurement of the underlying field with additive Gaussian noise. Since we may have fewer sensors than POI, and due to the stochastic nature of sensor drop, we will need to estimate the value of f at POI. Consequently, there will be inherent uncertainty associated with these estimates. Gaussian Processes associate a random variable with each POI in U, and the joint distribution over U can be used to quantify the information gained by the sensors dropped by the UAVs. Given paths P = {P_1,…, P_m} for the UAVs, let S(P) = {S_1,…,S_m} represent the corresponding sensor drop locations, and let Q(P) be the random variable representing the sensor locations, i.e., for every drop location v∈ S, the sensor location q_v ∈ Q. Also, let the length of the path ℓ(P_i) denote the total time taken by UAV i to visit all the locations in P_i. Let η be the time required to drop a sensor. Therefore, the total time of a path P_i is given as C(P_i)=ℓ(P_i) + |S(P_i)|η. Let ℱ_U represent the random variable associated with POI U and let ℱ_Q represent the random variable associated with sensor readings at locations in Q. Then Pr(ℱ_U|ℱ_Q(P)=f_Q) is the prediction at U given sensor readings at locations in Q(P). To simplify notation, we will use S and Q going forward, without explicitly indicating their dependence on UAV paths P. We focus on the offline planning problem <cit.> where the plan must be decided before dropping any sensor. The mutual information – as a function of the UAVs' paths – between the random variables ℱ_U and ℱ_Q is defined as MI(P) = H(ℱ_U) - H(ℱ_U|ℱ_Q), where H(𝒳) represents the entropy of random variable 𝒳. We now formally define the multi-UAV sensor drop problem.

[Multi-UAV Sensor Drop] Given the points of interest U, sensor drop locations in G=(V, E) along with the mean q̅_v and covariance Σ_v of the sensor's location associated with each v ∈ V, k_i sensors and budget T_i for each UAV i∈[m], find a path P_i rooted at the depot r_i along with drop locations S_i for each UAV i∈[m] to maximize the mutual information, i.e.,

max_P_1,…,P_m  MI(P) = H(ℱ_U) - H(ℱ_U|ℱ_Q)
s.t.  C(P_i) ≤ T_i,  ∀ i∈ [m]
      |S_i| ≤ k_i,  ∀ i∈ [m].

Note that given drop locations S, the sensor locations in Q are random. If the locations in Q are deterministic, i.e., the sensors fall at the exact locations desired, and if the points of interest U are the same as the vertices in V, we get the traditional informative path planning problem <cit.>. Since the locations in Q are themselves random variables, evaluating the probability distribution Pr(ℱ_U|ℱ_Q) and its entropy is challenging. In the next section, we discuss how we address this challenge and present the planning algorithm.

§ TECHNICAL APPROACH

In this section, we discuss how to evaluate the objective function given in Problem <ref>.
We then propose the planning algorithm to solve the problem.

§.§ Gaussian Process with Stochastic Drop Locations

In order to evaluate the objective function (<ref>), we need to calculate the entropy of the random variable (ℱ_U|ℱ_Q). If the sensor locations in Q were deterministic, this random variable would be a multivariate Gaussian, and its covariance matrix could be used to determine the entropy. However, our data is of the form {q_i, f(q_i)+ϵ_i}_i=1^∑_j |S_j| with q_i∼𝒩(q̅_i, Σ_i). Then, since the locations of the sensors are independent of each other, the probability distribution Pr(ℱ_U|ℱ_Q) is given by integrating the distribution for fixed locations over the random sensor locations, i.e.,

Pr(ℱ_U|ℱ_Q) = ∫⋯∫ Pr(ℱ_U|ℱ_Q,{q_1,…,q_a}) ∏_i=1^a Pr(q_i) dq_1 ⋯ dq_a.

This distribution is not Gaussian and there is generally no closed-form expression for this integral <cit.>. Existing literature on Gaussian Processes with input uncertainty <cit.> resorts to approximations in order to solve this integral. A Monte Carlo approach by drawing samples of q from the uncertain location distributions is considered in <cit.>. Taylor expansion about q̅ is used in <cit.> to present an approximate method that requires the derivative of the mean of f. The Gaussian approximation method <cit.> assumes that the posterior distribution is Gaussian and finds its expected mean and expected covariance by integrating over the uncertainty of the locations q. For the squared exponential covariance, the expected covariance for normally distributed sensor locations can be computed exactly using the following expression <cit.>:

Σ_QQ(i,j) = σ^2 exp( -1/2 (q̅_i - q̅_j)^⊤ (W+Σ_i +Σ_j)^-1 (q̅_i - q̅_j)) / | I+W^-1(Σ_i+Σ_j)(1-δ_ij) |^1/2.

Here q̅_i and Σ_i are the mean and covariance of the normally distributed sensor location q_i in Q, and W is a diagonal matrix where each diagonal element corresponds to a characteristic length scale for the respective input variable. We use the Gaussian approximation method in this paper because it does not require sampling and is computationally tractable, with a simple analytical expression for the covariance matrix. Moreover, since we are planning paths for the UAVs offline, before getting any sensor readings, we can use this method to find the mutual information by just using the expected covariance, as discussed below. Since the Gaussian approximation method assumes that the distribution of ℱ_U|ℱ_Q is Gaussian, and because ℱ_U and ℱ_Q are jointly Gaussian, the mutual information is given by

MI = H(ℱ_U) - H(ℱ_U|ℱ_Q) = H(ℱ_U) + H(ℱ_Q) - H(ℱ_U,ℱ_Q) = 1/2 log( det(Σ_UU) det(Σ_QQ) / det(Σ̅) ),

where Σ̅ = [Σ_UU, Σ_UQ; Σ_QU, Σ_QQ]. We can use the expression (<ref>) to evaluate Σ_UQ(i,j) by replacing q̅_i with the known location of the i-th point of interest in U and Σ_i by the null matrix.

The Objective function (<ref>) and the surrogate objective defined in Equation (<ref>) are submodular and monotonically non-decreasing set functions in S.
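The surrogate objective is inexpensive to evaluate. The following sketch (illustrative only: the variable names, kernel hyperparameters, and the jitter used for numerical conditioning are our assumptions, not the authors' implementation) computes the expected covariance of Eq. (<ref>) for a squared-exponential kernel and the resulting mutual information of Eq. (<ref>).

```python
import numpy as np

def expected_cov(means, covs, sigma2=1.0, lengthscales=(10.0, 10.0)):
    """Expected covariance under the Gaussian approximation (Eq. for Sigma_QQ).

    means: (n, 2) array of mean locations q_bar_i.
    covs:  (n, 2, 2) array of location covariances Sigma_i
           (zero matrices for deterministic points such as the PoIs).
    """
    W = np.diag(lengthscales)                 # characteristic length scales
    n = len(means)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = means[i] - means[j]
            S = covs[i] + covs[j]
            quad = d @ np.linalg.solve(W + S, d)
            off = 0.0 if i == j else 1.0      # the (1 - delta_ij) factor
            denom = np.sqrt(np.linalg.det(np.eye(2) + off * np.linalg.solve(W, S)))
            K[i, j] = sigma2 * np.exp(-0.5 * quad) / denom
    return K

def surrogate_mi(poi_xy, drop_means, drop_covs, jitter=1e-6):
    """MI between the field at the PoIs and the stochastic sensor readings:
    MI = 0.5 * log( det(S_UU) * det(S_QQ) / det(S_bar) )."""
    pts = np.vstack([poi_xy, drop_means])
    covs = np.concatenate([np.zeros((len(poi_xy), 2, 2)), drop_covs], axis=0)
    S = expected_cov(pts, covs) + jitter * np.eye(len(pts))
    u = len(poi_xy)
    logdet = lambda M: np.linalg.slogdet(M)[1]
    return 0.5 * (logdet(S[:u, :u]) + logdet(S[u:, u:]) - logdet(S))
```

In a planner, this function would be called with the landing-location means and covariances induced by the candidate drop vertices of a path.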
To see these properties, write the mutual information between f_PoIs and a set X of sensing locations as

I(f_PoIs,X) = H(f_PoIs) + H(X) - H(f_PoIs∪ X),
I(f_PoIs,X') = H(f_PoIs) + H(X') - H(f_PoIs∪ X'),

for X ⊆ X'. Denote the increment of MI obtained by adding a new location z by E, so that

E_X = H(X ∪ z) - H(f_PoIs∪ X ∪ z) - H(X) + H(f_PoIs∪ X),
E_X' = H(X' ∪ z) - H(f_PoIs∪ X' ∪ z) - H(X') + H(f_PoIs∪ X').

Then

E_X - E_X' = [ H(X ∪ z) - H(X) - H(X' ∪ z) + H(X') ]_(1) + [ H(f_PoIs∪ X' ∪ z) - H(f_PoIs∪ X') - H(f_PoIs∪ X ∪ z) + H(f_PoIs∪ X) ]_(2) ≥ 0,

where the first bracketed term is non-negative by the submodularity of the joint entropy.

There are a few nice properties of mutual information. Monotonicity: the MI objective is also a monotone function because conditioning always reduces entropy, H(f_PoIs | X) ≤ H(f_PoIs). In other words, an additional sensor can provide extra information, which always helps.

§.§ Planner

The submodularity and monotonicity of the surrogate objective function allow us to formulate Problem <ref> as a submodular TOP. However, there is one additional constraint in Problem <ref> that is not present in the standard submodular TOP, that of the number of sensors k_i that each robot is able to deploy. We address this problem using the following observation.

In a complete graph with N ≥ k_i vertices for all i, there always exists an optimal solution where robot i's path consists of no more than k_i vertices, excluding the starting vertex.

The proof follows by contradiction. Suppose there is an instance where no optimal solution has at most k_i vertices along robot i's path. The robot is allowed to deploy at most k_i sensors. Therefore, there must be one or more vertices along the robot's path at which no sensor is dropped. Since the graph is a complete metric graph, we can “shortcut” such vertices without increasing the cost of the path. Therefore, we can recover a solution that consists of at most k_i vertices. This is a contradiction, proving the original claim.

With this insight, we present our algorithm (Algorithm <ref>) to solve Problem <ref>. We first take the metric completion of the input graph. Recall that for a weighted graph G(V, E), each edge (u,v) ∈ E is associated with a cost w(u,v). In the preprocessing step, we generate a complete graph G^'=(V, E^') from G, where the edge cost w^'(u,v) is defined as the length of the shortest path between u and v in G. Then, we sequentially call a subroutine, Generalized Cost-Benefit (GCB), to compute a path for each robot. Compared to the original GCB algorithm <cit.>, in Algorithm <ref> we add one extra control condition in the while loop to account for the constraint, Eq. (<ref>), on the number of available sensors, using Lemma <ref>. The constraints imposed on the paths of the UAVs, which limit them to at most k_i vertices and a maximum length of T_i for UAV i, can be regarded as a partition matroid constraint. It has been shown in <cit.> that an α-approximate greedy step for submodular maximization over a matroid yields an approximation ratio of 1/(α+1). Hence, given an α-approximation algorithm to solve the submodular orienteering problem for a single UAV, Algorithm <ref> results in a 1/(α + 1) approximation ratio for maximizing Objective (<ref>) for multiple UAVs. When the paths of all the UAVs are constrained to be of length at most T and at most k vertices, we get a uniform matroid, resulting in a 1-1/e^α approximation ratio. A quasi-polynomial time recursive greedy algorithm to solve the single-vehicle orienteering problem with submodular rewards is given in <cit.>, resulting in α = O(log(·)).
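A minimal sketch of the sequential planner is given below (illustrative only: the greedy cost–benefit rule, the helper names such as `surrogate_mi` and `tour_cost`, and the tie-breaking are simplifications and placeholders of ours, not the exact GCB subroutine of Algorithm <ref>).

```python
def plan_sequentially(uavs, candidate_nodes, objective, tour_cost, eta):
    """Sequential greedy planner: one UAV at a time, cost-benefit vertex picks.

    uavs: list of dicts with keys 'depot', 'budget' (T_i) and 'k' (k_i).
    candidate_nodes: drop vertices of the metric-completed graph G'.
    objective(drops): submodular reward of a set of drop vertices,
        e.g. the surrogate mutual information.
    tour_cost(depot, drops): length of a route from the depot through the
        drops and back (any tour heuristic suffices for this sketch).
    eta: time needed to release one sensor.
    """
    committed = set()                      # drops chosen by earlier UAVs
    routes = []
    for uav in uavs:
        mine = []
        while len(mine) < uav['k']:        # Lemma: at most k_i vertices
            base_reward = objective(committed | set(mine))
            base_cost = tour_cost(uav['depot'], mine) + eta * len(mine)
            best, best_ratio = None, 0.0
            for v in candidate_nodes:
                if v in committed or v in mine:
                    continue
                cost = tour_cost(uav['depot'], mine + [v]) + eta * (len(mine) + 1)
                if cost > uav['budget']:   # respect C(P_i) <= T_i
                    continue
                gain = objective(committed | set(mine) | {v}) - base_reward
                ratio = gain / max(cost - base_cost, 1e-9)
                if ratio > best_ratio:
                    best, best_ratio = v, ratio
            if best is None:
                break                      # no feasible vertex remains
            mine.append(best)
        committed |= set(mine)
        routes.append({'uav': uav, 'drops': mine})
    return routes
```

Each outer iteration fixes one UAV's drops before moving to the next, which is what allows the single-robot guarantee to extend to the multi-robot matroid setting discussed above.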
In this paper, we use the Generalized Cost-Benefit (GCB) algorithm to solve the single-UAV problem, as it has a better runtime than the recursive greedy algorithm <cit.>.

§ EVALUATION

In this section, we evaluate the performance of our algorithm through a series of numerical experiments. We first explain the simulation setup. Then, we show one qualitative example to illustrate the difference between the proposed approach and the baseline. Next, we quantitatively evaluate the performance of the proposed approaches w.r.t. the uncertainty reduction at the PoIs. Finally, we report the running time of the proposed algorithm w.r.t. the number of robots.

§.§ Experimental Setup

The flying object model used in this study is based on the work described in <cit.>. This model captures the motion of the sensors, accounting for gravity, the sensors' surface area, and the wind speed. The sensor mass is set to 10 kg, the surface coefficient is 1, and the vertical height is 500 m. We begin by defining the map, the ground truth, and the wind field, as shown in Fig. <ref>. The map provides labels for all the potential dropping points and PoIs. The ground truth is generated by combining multiple Gaussian functions. Data points sampled from the ground truth are used to learn the kernel function, for which we employ the RBF kernel. The wind field indicates the wind speed at specific locations on the map. By combining the sensor motion model with the wind field, we can estimate the landing position of the sensors. Using a given kernel, Algorithm <ref> is applied to search for a set of sensor dropping locations that is an approximate solution to the main problem. The final sensor locations are determined by sampling from the flying object model with uncertainty. Once the sensor locations are obtained, we can measure the environmental values and compute the posterior at the PoIs based on these measurements.

§.§ A qualitative example

In the following, we present a comparison between a baseline approach and our proposed method using the settings defined above. The experiment focuses on a scenario with two UAVs, where each UAV is equipped with four sensors. The UAVs are allocated a distance budget of 870 units to drop all the sensors along their respective paths.

§.§.§ Baseline

In the baseline case (Fig. <ref>), the UAVs tend to drop a higher number of sensors in areas with a higher concentration of PoIs. The objective is to ensure that each sensor can cover one or more PoIs. However, due to the uncertainty introduced by the wind, the sensors tend to cluster in smaller regions. As a result, the four sensors located around coordinates (0,100) are only capable of accurately estimating the values of two PoIs, while the remaining PoIs are not sufficiently covered. This can be observed in Fig. <ref>, where the two PoIs in the lower right corner exhibit a significantly higher estimation error.

§.§.§ Our Approach

Our approach, on the other hand, accounts for the impact of wind uncertainty and prefers to drop sensors over a wider area. As shown in Fig. <ref>, the wind blows the sensors over a broader coverage area, allowing them to reach and cover more PoIs. This broader coverage results in a significant reduction in the PoI estimation error compared to the baseline case. Additionally, it is worth noting that the areas where sensors are dropped but that do not have a high concentration of PoIs exhibit high error rates.
This demonstrates the effectiveness of our approach in adapting to the wind uncertainty and achieving better coverage of the target area.

§.§ Comparisons with Baselines

In this section, we compare the MSE of three different approaches across three different scenarios. The MSE is computed as the sum of the squared differences between the posterior estimates at the PoIs and the ground-truth values of the PoIs. In the first two scenarios, we assume that the wind speed is uniform and that the variance of the landing location is the same for all dropping nodes. In the first scenario, the final location of a sensor follows a Gaussian distribution with a variance of 900, and two UAVs are deployed, each carrying 4 sensors. In the second scenario, the final location of a sensor follows a Gaussian distribution with a variance of 820, and two UAVs are deployed, each carrying 3 sensors. In both of these scenarios, our approach demonstrates approximately a 10% improvement in MSE compared to the baseline approach. The random selection approach, on the other hand, results in an MSE of 1. The third scenario introduces non-uniform uncertainty w.r.t. the drop point location, where the variance is a function of the non-uniform wind speed. Once again, our approach consistently outperforms the baseline approach, achieving a 12% improvement in MSE. These results highlight the effectiveness of our approach in mitigating the impact of uncertainty in different scenarios and achieving more accurate sensor placements.

§.§ Running Time

Lastly, we examine the scalability of our approach. In comparison to the baseline approach, our approach may have a slightly longer running time in each scenario; however, the running time of both approaches grows polynomially with the number of sensors per UAV. To further evaluate the computational performance, we also simulated a brute-force approach, which generates all possible combinations of sensor dropping points within the budget constraint and selects the set with the highest objective value. The runtime of the brute-force approach grows exponentially, taking hours to days to complete due to the combinatorial enumeration of all possible combinations. This stark contrast highlights the effectiveness and efficiency of our approach in finding nearly optimal solutions for sensor placement in a timely manner.

§ CONCLUSION

This paper studies the problem of routing a team of UAVs to drop sensors in order to reduce the uncertainty at PoIs. The problem is formulated as a variant of the TOP. To reduce the computational cost of evaluating the objective, we propose a surrogate objective with a closed-form expression based on the Gaussian approximation. A heuristic algorithm (SGA) is proposed to solve the relaxed problem with the surrogate objective. The formulation and the algorithm are validated in numerical simulation.
http://arxiv.org/abs/2307.06083v1
20230712111344
Searching for universality of dineutron correlation at the surface of Borromean nuclei
[ "A. Corsi", "Y. Kubota", "J. Casal", "M. Gomez-Ramos", "A. M. Moro", "G. Authelet", "H. Baba", "C. Caesar", "D. Calvet", "A. Delbart", "M. Dozono", "J. Feng", "F. Flavigny", "J. -M. Gheller", "J. Gibelin", "A. Giganon", "A. Gillibert", "K. Hasegawa", "T. Isobe", "Y. Kanaya", "S. Kawakami", "D. Kim", "Y. Kiyokawa", "M. Kobayashi", "N. Kobayashi", "T. Kobayashi", "Y. Kondo", "Z. Korkulu", "S. Koyama", "V. Lapoux", "Y. Maeda", "F. M. Marqués", "T. Motobayashi", "T. Miyazaki", "T. Nakamura", "N. Nakatsuka", "Y. Nishio", "A. Obertelli", "A. Ohkura", "N. A. Orr", "S. Ota", "H. Otsu", "T. Ozaki", "V. Panin", "S. Paschalis", "E. C. Pollacco", "S. Reichert", "J. -Y. Rousse", "A. T. Saito", "S. Sakaguchi", "M. Sako", "C. Santamaria", "M. Sasano", "H. Sato", "M. Shikata", "Y. Shimizu", "Y. Shindo", "L. Stuhl", "T. Sumikama", "Y. L. Sun", "M. Tabata", "Y. Togano", "J. Tsubota", "T. Uesaka", "Z. H. Yang", "J. Yasuda", "K. Yoneda", "J. Zenihiro" ]
nucl-ex
[ "nucl-ex", "nucl-th" ]
cea]A. Corsi [mail]Corresponding author [email protected] rik,cns,tuda]Y. Kubota FAMN]J. Casal FAMN]M. Gómez-Ramos FAMN]A. M.  Moro cea]G. Authelet rik]H. Baba tuda]C. Caesar cea]D. Calvet cea]A. Delbart cns]M. Dozono key]J. Feng ipno]F. Flavigny cea]J.-M. Gheller lpc]J. Gibelin cea]A. Giganon cea]A. Gillibert toh]K. Hasegawa rik]T. Isobe miy]Y. Kanaya miy]S. Kawakami ehw]D. Kim cns]Y. Kiyokawa cns]M. Kobayashi tod]N. Kobayashi toh]T. Kobayashi tit]Y. Kondo dae,rik,atom]Z. Korkulu tod]S. Koyama cea]V. Lapoux miy]Y. Maeda lpc]F. M. Marqués rik]T. Motobayashi tod]T. Miyazaki tit]T. Nakamura kyo]N. Nakatsuka kyu]Y. Nishio cea,tuda]A. Obertelli kyu]A. Ohkura lpc]N. A. Orr cns]S. Ota rik]H. Otsu tit]T. Ozaki rik]V. Panin tuda]S. Paschalis cea]E. C. Pollacco tum]S. Reichert cea]J.-Y. Rousse tit]A. T. Saito kyu]S. Sakaguchi rik]M. Sako cea]C. Santamaria rik]M. Sasano rik]H. Sato tit]M. Shikata rik]Y. Shimizu kyu]Y. Shindo dae,rik,atom]L. Stuhl rik]T. Sumikama cea,tuda]Y.L. Sun kyu]M. Tabata tit]Y. Togano tit]J. Tsubota rik]T. Uesaka rik]Z. H. Yang kyu]J. Yasuda rik]K. Yoneda rik,key]J. Zenihiro [cea]Département de Physique Nucléaire, IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France [rik]RIKEN Nishina Center, Hirosawa 2-1, Wako, Saitama 351-0198, Japan [cns]Center for Nuclear Study, University of Tokyo, Hongo 7-3-1, Bunkyo, Tokyo 113-0033, Japan [ECT]European Centre for Theoretical Studies in Nuclear Physics and Related Areas (ECT^*), Villa Tambosi, Strada delle Tabarelle 286, I-38123 Trento, Italy [pd]Dipartimento di Fisica e Astronomia “G. Galilei” and INFN - Sezione di Padova, Via Marzolo 8, I-35131 Padova, Italy [FAMN]Departamento de Física Atómica, Molecular y Nuclear, Facultad de Física, Universidad de Sevilla, Apartado 1065, E-41080 Sevilla, Spain [tuda]Department of Physics, Technische Universitat Darmstadt [pek]Department of Physics, Peking University [ipno]Institut de Physique Nucleaire Orsay, IN2P3-CNRS, F-91406 Orsay Cedex, France [lpc]LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, F-14050 Caen, France [toh]Department of Physics, Tohoku University, Aramaki Aza-Aoba 6-3, Aoba, Sendai, Miyagi 980-8578, Japan [miy]Department of Applied Physics, University of Miyazaki, Gakuen-Kibanadai-Nishi 1-1, Miyazaki 889-2192, Japan [ehw]Department of Physics, Ehwa Womans University [tod]Department of Physics, University of Tokyo, Hongo 7-3-1, Bunkyo, Tokyo 113-0033, Japan [tit]Department of Physics, Tokyo Institute of Technology, 2-12-1 O-Okayama, Meguro, Tokyo 152-8551, Japan [atom]MTA Atomki, P.O. Box 51, Debrecen H-4001, Hungary [kyo]Department of Physics, Kyoto University, Kitashirakawa, Sakyo, Kyoto 606-8502, Japan [kyu]Department of Physics, Kyushu University, Nishi, Fukuoka 819-0395, Japan [osa]Research Center for Nuclear Physics, Osaka University, 10-1 Mihogaoka, Ibaraki, Osaka 567-0047, Japan [tum]Department of Physics, Technische Universitat Munchen [dae]Center for Exotic Nuclear Studies, Institute for Basic Science, Daejeon 34126, Republic of Korea [key]School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China The dineutron correlation is systematically studied in three different Borromean nuclei near the neutron dripline, ^11Li, ^14Be and ^17B, via the (p,pn) knockout reaction measured at the RIBF facility in RIKEN. For the three nuclei, the correlation angle between the valence neutrons is found to be largest in the same range of intrinsic momenta, which can be associated to the nuclear surface. 
This result reinforces the prediction that the formation of the dineutron is universal in environments with low neutron density, such as the surface of neutron-rich Borromean nuclei. quasi-free scattering Borromean nuclei three-body model Jacobi coordinates dineutron § INTRODUCTION Halo nuclei appear close to the neutron dripline and present a diffuse matter distribution due to the reduced binding energy of the valence neutrons <cit.>. Nuclei formed by a core and two loosely bound neutrons, such that the subsystem formed by the core and the neutron is unbound, are called Borromean. Most of them present nuclear halos near the neutron dripline. Some examples are ^6He, ^11Li, ^14Be and ^17,19B. The correlation between the neutrons plays an essential role to stabilize these nuclei and has been the subject of a number of studies <cit.>. We discuss here a specific form of spatially localized pairing correlation called dineutron <cit.>. The strength of the pairing correlation evolves with density, going from the BCS (Bardeen–Cooper–Schrieffer) regime of loosely spaced correlations to the BEC (Bose-Einstein Condensate) regime of compact space correlation with decreasing density. This regime is expected to appear at the surface of neutron-rich nuclei, where neutron density is 10^-4 to 0.5 of the saturation density <cit.>. Its onset appears to be strongly linked to the admixture of different parities in the wavefunction describing the valence neutrons <cit.>. Halo nuclei, with their diffuse matter distribution, are an ideal probe to study this low-density correlation. The dineutron was experimentally revealed in ^6He <cit.> and ^11Li <cit.>, however the experimental evidence is still scarce. Typically, dineutron correlation is explored via the opening angle between the two neutrons <cit.>. An opening angle below 90^∘ (90^∘ corresponding to the non-correlated case) in coordinate space or above 90^∘ in momentum space <cit.> points to a strong spatial correlation yielding a compact configuration. Intuitively, the opening angle in coordinate space is related to the inter-nucleon distance. References <cit.> measured the E1 strength after Coulomb dissociation of ^11Li, and extracted the opening angle in coordinate space based on the cluster sum rule and assuming an inert core <cit.>. For ^11Li the average angle obtained was θ = 48^+14_-18 ^∘, corresponding to a strong dineutron correlation. A more refined estimation can be obtained if the average neutron-neutron distance is measured independently. In such a way the authors of Ref. <cit.> deduced from the B(E1) measurement of Ref. <cit.> a value of the mean opening angle of ∼ 56.2^∘. The average neutron-neutron separation was estimated via measurements of the two-neutron correlation function in dissociation reactions. This method has been applied to ^6He, ^11Li, ^14Be <cit.>. A combined analysis of the B(E1) measurement <cit.> and the correlation function <cit.> has given a somewhat larger value of the opening angle in ^11Li, ∼ 66^∘ <cit.>, corresponding to a reduced dineutron correlation. Two-neutron transfer reactions can also be used to study nn correlations <cit.>. In Ref. <cit.>, it was shown that the description of ^11Li data required a model with a large pairing correlation. Nucleon removal reactions are another method to access the opening angle <cit.>. The authors of Ref. 
<cit.> measured an opening angle in momentum space of 103.4±2.1^∘ for ^11Li, and suggest that a dineutron configuration exists also in ^14Be, although less developed than in ^11Li. In both nuclei, the dineutron appears due to the mixing of different-parity orbitals <cit.>. In contrast, the structure of the halo in ^17B is mainly of d-wave character with small s-wave admixture <cit.>, which should hinder the development of the dineutron correlation as both orbitals have the same parity. We note that the study of these correlations via breakup and knockout reactions is complementary to that performed through 2n decay <cit.>, which has been used to explore the properties of two-neutron unbound systems. Recently, Ref. <cit.> introduced a new method based on quasi-free scattering reactions to study dineutron correlation as a function of its peripherality, i.e., distance from the baricenter of the system, and applied it to ^11Li. The observable related to the peripherality is the intrinsic momentum of the removed nucleon in quasi-free scattering reactions. Being a fast removal process, the impact of final-state interactions on the observable of interest is assumed to be reduced, making its interpretation more straight-forward. In this work we search for dineutron correlations applying this same method to ^14Be and ^17B, measured in the same experiment as Ref. <cit.>. Our goal is to assess whether such correlation appears as a general feature at the surface of neutron-rich Borromean nuclei. The data is compared to calculations using a three-body model for the projectile and a quasi-free sudden model to describe the knockout process <cit.>. § EXPERIMENTAL RESULTS §.§ Setup The experiment was performed at the Radioactive Isotope Beam Factory operated by the RIKEN Nishina Center and the Center for Nuclear Study (CNS) of the University of Tokyo. Secondary beams were produced using projectile fragmentation of a ^48Ca primary beam at 345 MeV/nucleon with a typical intensity of 400 particle nA on a Be target. Fragmentation products were separated, detected and identified via the BigRIPS fragment separator <cit.>. The cocktail beam was composed by ^11Li, ^14Be, and ^17B, with a percentage of ∼ 80%, 12%, and 8%, respectively. It impinged on the secondary target with an average energy of 246, 265 and 277 MeV/nucleon, respectively. The secondary target was the 15-cm thick liquid hydrogen target from the MINOS device <cit.>, and was surrounded by a Time Projection Chamber acting as vertex tracker together with the beam tracking MWDC detectors. The detection system included the WINDS array of plastic scintillators for knockout neutron detection, and a MWDC followed by an array of plastic scintillators for the recoil proton detection. Those two detectors were key for the measurement of the intrinsic momentum of the removed neutron and the opening angle in the (p,pn) reaction. The standard SAMURAI setup consisting of a set of drift chambers, the SAMURAI dipole magnet and two hodoscope walls was used for fragment analysis <cit.>. The neutrons emitted at forward angles were detected by the NEBULA plastic scintillator array <cit.>. We evaluated the acceptance cut they induce on the measurement of the intrinsic momentum and opening angle distribution using a Geant4 simulation. No bias is introduced by the experimental setup on the opening angle distribution, while the acceptance decreases for increasing intrinsic momentum (leading to off-plane scattering), as shown in the inset of Fig. <ref>. 
More details on the rest of the setup can be found in Ref. <cit.> and references therein. §.§ Dineutron correlation The measurement of the momenta of the outgoing proton and removed neutron allows to reconstruct the intrinsic momentum of the neutron before removal (within the quasi-free approximation): k⃗_⃗y⃗ := k⃗_⃗n⃗1⃗ = k⃗_⃗n⃗1⃗'⃗ + k⃗_⃗p⃗'⃗ - k⃗_⃗p⃗ where k_n1 (k_n1^') is the momentum of the neutron in the initial (final) state and k_p (k_p^') the one of the target (recoil) proton. The correlation angle, or the opening angle θ in momentum space, is the angle between the Jacobi momenta k_x and k_y: cos(θ)=k⃗_⃗x⃗·k⃗_⃗y⃗/|k⃗_⃗x⃗||k⃗_⃗y⃗| with k⃗_⃗x⃗=k⃗'⃗_⃗n⃗2⃗-k⃗'⃗_⃗f⃗ where k'_n2, k'_f are the momenta of the remaining valence neutron and fragment in the final state. This representation of the three-body system in terms of Y Jacobi coordinates is illustrated in the inset of Fig. <ref>. In the following, we illustrate the intrinsic momentum and correlation angle distribution for the case of ^14Be. Both are compared with a theoretical calculation performed within a quasi-free sudden model <cit.>, using the three-body model for ^14Be from <cit.>. Figure <ref> shows the intrinsic momentum distribution of the removed nucleon for two different relative-energy intervals in the ^13Be system. The theoretical distributions are already corrected for the experimental acceptance (see inset of Fig. <ref>a) and convoluted with the experimental resolution of 0.17 fm^-1. Each relative-energy (^12Be+n) interval encompasses a peak in the spectrum of ^13Be <cit.>. The comparison to theoretical calculations show that the 0-1.5 MeV interval is dominated by the p_1/2 component (72% of the total in this energy range), while the 1.5-3 MeV interval is dominated by the d_5/2 component (60% of the total). This is consistent with the interpretation of the relative-energy spectrum of ^13Be provided in Ref. <cit.> as composed of a p-wave resonance centered at 0.5 MeV followed by a broader d-wave resonance. The different lines in Fig. <ref> are labeled as J^π[ℓ_j ⊗ S_c], where the single-particle angular momentum ℓ_j couples with the spin of the core S_c to give the total angular momentum J^π of the binary subsystem ^13Be after knockout. Note that, since the ground state of ^14Be is a 0^+ state, the angular momentum of the knocked-out neutron has to match J^π, e.g., 5/2^+ contributions correspond to a d-wave knockout. It is worth noting that the calculations presented in Fig. <ref> are not a fit to the experimental data but the results of the structure model (and corresponding partial-wave content) of Ref. <cit.>, therefore the agreement is not perfect. In particular the disagreement in the peak in the lower energy range may suggest a larger s-wave component. However, in <cit.>, an increase in s-wave led to a worse description of the low-energy distribution. Similarly, there is a slight disagreement for the largest k_y values that may be associated to missing components in the wave function due to limitations of the model, as discussed in <cit.> for large relative energies. Fig. <ref> shows the correlation angle distribution integrated over all intrinsic momenta, and for intrinsic momenta between 0.2 fm^-1 and 0.4 fm^-1. One can see that the inclusive distribution is rather symmetric, while an asymmetry appears for some range of values of the intrinsic momentum. The calculations are able to capture this behaviour. 
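As a purely illustrative sketch of the kinematic reconstruction defined by the two equations above (this is our own snippet, not the analysis code of the experiment; momenta are assumed to be supplied as NumPy 3-vectors in consistent units):

```python
import numpy as np

def intrinsic_momentum(k_n1_out, k_p_out, k_p_in):
    """k_y := k_n1 = k'_n1 + k'_p - k_p (quasi-free approximation)."""
    return np.asarray(k_n1_out) + np.asarray(k_p_out) - np.asarray(k_p_in)

def correlation_angle_deg(k_n2_out, k_f_out, k_y):
    """Opening angle between the Jacobi momenta k_x = k'_n2 - k'_f and k_y, in degrees."""
    k_x = np.asarray(k_n2_out) - np.asarray(k_f_out)
    cos_theta = np.dot(k_x, k_y) / (np.linalg.norm(k_x) * np.linalg.norm(k_y))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```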
The range between 0.2 fm^-1 and 0.4 fm^-1 is the one yielding the maximum asymmetry with an enhancement of values of the correlation angle larger than 90^∘. This points towards a geometrically compact configuration of the two-neutron system (the dineutron) at low intrinsic momenta, which can be associated to the nuclear surface. The average correlation angle θ, obtained taking event by event the arccos of the data plotted in Fig. <ref>, is plotted as a function of the intrinsic momentum ranging from 0 to 1.8 fm^-1 in Fig. <ref>. The data for ^14Be are compared to the ones for ^11Li and ^17B measured in the same experiment <cit.>. The nucleus of ^11Li is considered as a reference case of well developed dineutron correlation <cit.> so, as expected, it presents the largest deviation from 90^∘. It is however remarkable that for both ^14Be and ^17B the data also show a significant deviation in the correlation angle distribution in the same range of momenta. This deviation points to the appearance of a dineutron correlation for intrinsic momenta smaller than 0.4 fm^-1, which corresponds to the nuclear surface <cit.>, for all measured nuclei. We note that the larger value of the correlation angle occurring around 0.2 fm^-1 is clearly above 90^∘, even taking into account the errors. This constitutes the first experimental evidence supporting universality of the dineutron correlation in the low-density nuclear surface of Borromean nuclei, which had been previously suggested <cit.>. It is worth noting that, depending on the probe, an inclusive measurement of the correlation angle will be sensitive to a rather large region of the nucleus (including the interior), and the dineutron correlation signal may be damped, as shown in Fig. <ref>. § THEORETICAL ANALYSIS To quantify and understand the mechanism behind the onset of the dineutron correlation, we compare the experimental result of Fig. <ref> to theoretical predictions in Fig. <ref>. The theoretical description combines three-body models within the hyperspherical framework to describe the structure of Borromean nuclei <cit.> and an eikonal sudden description of the (p,pN) reaction <cit.>. This description has been used to study ^11Li(p,pn) in <cit.>. We refer the reader to <cit.> for a detailed overview of the model. The structure model for ^11Li was originally introduced in Ref. <cit.> to describe GSI (p,pn) data <cit.>, and later revisited in <cit.> to incorporate d-wave contributions. This model leads to a large admixture between s and p waves (∼ 60% and 30%, respectively), and the computed core-nn rms distance is 4.9 fm, which compares well with the experimental value derived from Coulomb Dissociation data <cit.>. For ^14Be, we adopt the model in Ref. <cit.>, which is dominated by a low-lying p-wave resonance in ^13Be (∼ 60% of the wave function comes from p waves) and includes the effect of the first 2^+ excited state of the ^12Be core (which amounts to roughly 20% of the norm of the ground state of ^14Be). For ^17B, the three-body wave function was computed by fixing a simple model neglecting the spin of the core, in the same spirit as the ^19B calculations in Ref. <cit.>, with the low-lying s and d states adjusted to reproduce the main features reported in the recent experimental work <cit.>. In such a model, the wave function is mostly governed by the d_5/2 component (∼ 80%), and the p-wave admixture is minimal (≲ 2%) and comes from the non-resonant continuum in ^16B. 
The calculated matter radii for ^14Be and ^17B are 3.0 and 2.8 fm, respectively, which compare well with the values reported in Ref. <cit.> from interaction cross sections. Using these structure inputs, the calculations capture the general trend of the average correlation angle as a function of the intrinsic momentum, as shown in Fig. <ref>. Figure <ref>a corresponds to ^11Li and was already explored in <cit.> with the same theoretical description. It should be remarked that for missing momenta k_y ≳ 0.5 fm^-1, the distribution is affected by the core-proton interaction, so it is unreliable to extract nuclear structure information from that region <cit.>. In the case of ^14Be (Fig. <ref>b), the calculated average correlation angle (blue solid line) follows the trend of the experimental data but the results for intrinsic momenta smaller than 0.5 fm^-1 are somewhat overestimated. Meanwhile, for ^17B (Fig. <ref>c), the theoretical model describes the maximum even with only a 2% p-wave admixture. This remarkable sensitivity of the maximum to small opposite-parity components was already noted in <cit.>. Only for ^14Be there are significant differences between theoretical calculation and experimental data. To understand these differences, we note that in the analysis of the ^13Be energy distribution in <cit.> the three-body model used in this work was suggested to be missing some core-excited components. Different components of the ^12Be core can give opposing contributions to the average correlation angle, as shown in Fig. <ref>b, where the ^12Be(0^+_gs) component's distribution (red dashed) goes over 90^∘ at low momenta, while the excited ^12Be(2^+)'s contribution (orange dashed) goes under 90^∘. Among the missing components in the used model, those where the ^12Be core is in its first excited 0^+_2 state are particularly significant, since they are more likely to be populated, as its angular momentum and parity are those of the ^14Be ground state. To estimate their effect, we note that the ^12Be(0^+_2) state is usually described as an orthogonal partner of the 0^+ ground state <cit.>, with opposite relative sign between its positive and negative-parity components when compared to ^12Be(0^+_gs). Therefore the components with ^12Be(0^+_2) should present a correlation angle smaller than 90^∘ (opposite to ^12Be(0^+_gs)). Tentatively, for the correlation angle as a function of missing momentum, we have assigned to the ^12Be(0^+_2) components a distribution equal to that of ^12Be(0^+_gs) but mirrored around 90^∘, and a weight of 16%, similar to the 20% obtained with the three-body model for the similar-energy ^12Be(2^+). This estimation produces the magenta dot-dashed line, whose agreement with the data is much improved, pointing to the excitation of the core having a significant effect in the dineutron correlation, which was already indicated in <cit.>. Therefore, the effect of the core may be responsible for ^14Be and ^17B showing a similar correlation angle, despite their very distinct admixture of different-parity components. At this point, a natural question arises about how to compare the degree of dineutron correlation among different nuclei. A possible criterium is based solely on the experimental results, by comparing the maximum correlation angle. The maximum correlation angle for ^11Li, ^14Be and ^17B occurs at k_y=0.25 fm^-1 and corresponds to 100.0(2)^+29_-29, 95.9(10)^+29_-29 and 96.4(19)^+29_-29 degrees, respectively. 
A second criterion is to make use of the theoretical models employed. The theoretical calculations in Fig. <ref> give the average maximum values of 98.0 (^11Li), 96.6 (^14Be) and 95.4 (^17B) degrees, which compare well to the experimental results. It is worth noting that the corresponding three-body models give rise to maximum of the two-neutrons wave function density around the minimum of the average interneutron distance, as discussed in Ref. <cit.>, and this feature is directly linked to the present observations in momentum space. The three-body model allows also to draw the ground-state probability density as a function of the Jacobi-T coordinates r_nn and r_c-nn, i.e. the distance among the two neutrons and the two neutrons baricenter with respect to the core. This is shown in Fig. <ref> for the three cases considered and allows to gain more insight on the configuration of the neutrons. In a purely non-correlated scenario, the distributions would present equal weights at both sides of the orange lines in the figure, which delimit two distinct regions within the hyperspherical description of three-body nuclei <cit.>. Local maxima above this line, i.e., for small r_nn, are usually associated to the dineutron configuration, whereas the peaks below it correspond to the so-called “cigar”-like structure. The dominance of one of these structures is associated to correlations. We can see that a clear dineutron peak is obtained for the three nuclei. For ^11Li (Fig. <ref>a) the dineutron peak is clearly dominant, with only a relatively small fraction of the probability exploring larger neutron-neutron distances. For ^14Be (Fig. <ref>b), the two configurations are clearly separated, with the dineutron still being more pronounced. In the case of ^17B (Fig. <ref>c), three maxima appear (this is a consequence of the dominant d-wave content of the ground state). To quantify the degree of dineutron development for each nucleus, we may define the quantity χ=P_d-P_c/P_d+P_c, where P_d and P_c are the integrated probabilities above and below the symmetry lines in Fig. <ref>, i.e., P_d is somehow a measure of the dineutron component, while P_c is related to the cigar component. Indeed, with this definition χ=1 (-1) would correspond to a “pure” dineutron (cigar). The integration for ^11Li, ^14Be and ^17B within the present calculations yields χ=0.43, 0.32 and 0.19, respectively. In this case, both criteria agree and support the fact that the dineutron correlation is stronger for ^11Li. The theoretical model also permits the extraction of the average opening angle in configuration space, obtaining ⟨θ_r⟩=66.9^∘ (^11Li), 67.1^∘ (^14Be) and 77.4^∘ (^17B). The results for ^11Li and ^14Be are consistent to those presented in <cit.>, while the angle for ^17B is similar to that presented for ^6He. Since both nuclei have very little admixture of different-parity components, their opening angles should be comparable. From Fig. <ref>, one can extract the correlation between the average r_nn and r_c-nn. The minimum of r_nn corresponds to a dineutron configuration, and its position signals the region of the nucleus where the calculation predicts the dineutron correlation to be stronger. We can notice that this occurs for r_c-nn=3-4 fm, corresponding to the nuclear periphery, again supporting the results in <cit.> and generalizing them to ^14Be and ^17B. 
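The asymmetry measure χ = (P_d - P_c)/(P_d + P_c) introduced above can be evaluated from a gridded two-body density; the snippet below is a minimal illustration of that bookkeeping (ours, not the authors' three-body code). The boundary between the dineutron and cigar regions is left as a user-supplied predicate, since it corresponds to the symmetry lines drawn in the figure.

```python
import numpy as np

def dineutron_asymmetry(density, r_nn, r_cnn, in_dineutron_region):
    """chi = (P_d - P_c) / (P_d + P_c) for a density sampled on an (r_nn, r_cnn) grid.

    density[i, j] is the probability density at (r_nn[i], r_cnn[j]), and
    in_dineutron_region(R_nn, R_cnn) returns True on the dineutron side of the
    symmetry line (a placeholder for the boundary shown in the figure)."""
    R_nn, R_cnn = np.meshgrid(r_nn, r_cnn, indexing="ij")
    mask = in_dineutron_region(R_nn, R_cnn)
    w = np.outer(np.gradient(r_nn), np.gradient(r_cnn))  # simple quadrature weights
    P_d = np.sum(density * w * mask)
    P_c = np.sum(density * w * ~mask)
    return (P_d - P_c) / (P_d + P_c)
```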
As discussed in <cit.>, this behaviour can be interpreted as a transition from BCS-like correlations in the interior to a BEC-like one, the spatially compact dineutron, around the surface. The probability density for the two valence nucleons of ^11Li is also displayed with a grey area. One can notice that the probability maximum and the inter-nucleon distance minimum are attained around the same value of r_c-nn∼ 3 fm, which makes the dineutron configuration dominant for the two valence nucleons. Within the adopted theoretical framework, the maximum in r_c-nn corresponding to the nuclear surface can be associated to the maximum at low intrinsic momentum k_y, validating the use of k_y as a proxy for peripherality. § CONCLUSIONS We have presented a comparative study of dineutron correlation in three Borromean systems, ^11Li, ^14Be and ^17B, based on the average correlation angle as a function of the intrinsic momentum of the removed neutron. This work follows the seminal work of Kubota et al. <cit.> who first proposed to use this observable to probe the location of dineutron correlation inside the nucleus, and extends the study to ^14Be and ^17B. A dineutron correlation appears in the periphery of ^14Be and ^17B as well, but is damped compared to ^11Li. This study provides the first experimental hint of the universality of dineutron correlation in the low-density surface of Borromean nuclei. Even while fast nucleon removal induced by high-energy quasi-free scattering is the tool of choice to reduce the effect of final-state interactions, consistent measurements using different probes may help to confirm the universal character of our observation. The damping of dineutron correlation in ^14Be is interpreted as due to the presence of configurations with an excited core, that can be predicted within the three-body model. Higher statistics data incorporating gamma-ray coincidences, which enable core excitations to be probed, could be used to investigate this explanation. § ACKNOWLEDGEMENTS This work has been supported by the European Research Council through the ERC Starting Grant No. MINOS-258567. J.C., M.G.R. and A.M.M. acknowledge financial support by MCIN/AEI/10.13039/501100011033 under I+D+i project No. PID2020-114687GB-I00 and under grant IJC2020-043878-I (also funded by “European Union NextGenerationEU/PRTR”), by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101023609, by the Consejería de Economía, Conocimiento, Empresas y Universidad, Junta de Andalucía (Spain) and “ERDF-A Way of Making Europe” under PAIDI 2020 project No. P20_01247, and by the European Social Fund and Junta de Andalucía (PAIDI 2020) under grant number DOC-01006. J.G., F.M.M. and N.A.O. acknowledge partial support from the Franco-Japanese LIA-International Associated Laboratory for Nuclear Structure Problems as well as the French ANR14-CE33-0022-02 EXPAND. Z.K. and L.S. acknowledge partial support by the Institute for Basic Science (IBS-R031-D1). S.P. acknowledges the support of the UK STFC under contract numbers ST/L005727/1 and ST/P003885/1 and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 279384907 - SFB 1245. h-physrev
http://arxiv.org/abs/2307.05381v1
20230710143232
Reliable Devices Yield Stable Quantum Computations
[ "Samudra Dasgupta", "Travis S. Humble" ]
quant-ph
[ "quant-ph" ]
Reliable Devices Yield Stable Quantum Computations The manuscript is authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan: https://www.energy.gov/doe-public-access-plan. Samudra Dasgupta^1, 2^*, and Travis S. Humble^1,2^† ^1Quantum Science Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA ^2Bredesen Center, University of Tennessee, Knoxville, Tennessee, USA ^*[email protected], ORCID: 0000-0002-7831-745X ^†[email protected], ORCID: 0000-0002-9449-0498 February 2023 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Stable quantum computation requires noisy results to remain bounded even in the presence of noise fluctuations. Yet non-stationary noise processes lead to drift in the varying characteristics of a quantum device that can greatly influence the circuit outcomes. Here we address how temporal and spatial variations in noise relate device reliability to quantum computing stability. First, our approach quantifies the differences in statistical distributions of characterization metrics collected at different times and locations using Hellinger distance. We then validate an analytical bound that relates this distance directly to the stability of a computed expectation value. Our demonstration uses numerical simulations with models informed by the transmon device from IBM called washington. We find that the stability metric is consistently bounded from above by the corresponding Hellinger distance, which can be cast as a specified tolerance level. These results underscore the significance of reliable quantum computing devices and the impact for stable quantum computation. device reliability, program stability, spatio-temporal non-stationarity, time-varying quantum noise § INTRODUCTION Quantum devices are subject to non-stationary noise sources, e.g. non-uniform spontaneous decay, energy loss, cross-talk, sensitivity to imprecise control pulses, and fluctuations in thermodynamic controls, all of which affect the quality of the quantum register implementation. The field of quantum noise characterization focuses on measuring and tracking noise metrics (such as CNOT gate error) at various points in time. These characterizations inform calibration techniques for hardware engineers as well as error mitigation methods for programmers. However, quantum devices also exhibit temporal variations in their noise sources, which underlies the need for frequent calibration and adjustment of device metrics. 
Non-stationary noise processes can also stymie attempts at characterization, as the underlying noise models must adapt to new and often unpredictable behaviors. How can we monitor changes in the noise itself to better inform these efforts? Here, we address the concern that non-stationary noise processes pose to reliable quantum computation. Device reliability is presented as a measure of the statistical similarity of the underlying device metrics, such as gate fidelities and coherence times. This measure captures the similarity between device metrics considering both spatial and time-varying noise processes. We then recall how device reliability bounds the stability of expectation values derived from noisy quantum computation. Moreover, we validate this bound on stability using numerical simulations of a circuit modeled by a multi-dimensional correlated noise distribution.

§ THEORY

§.§ Stability

Stability in quantum computing refers to the boundedness of the output of a quantum circuit in the presence of noise fluctuations <cit.>. In this study, we focus on the mean value of a quantum observable O as a representative of program output. Let x represent the parameter characterizing the noisy realization of a quantum circuit 𝒞. The mean value of O obtained from repeated executions on the noisy circuit is denoted as ⟨O⟩_x. Considering the time-varying nature of device noise, we introduce f(x; t) as the probability distribution function of the quantum noise parameter x. We define ⟨O⟩_t as the average value of ⟨O⟩_x with respect to f(x; t), the probability distribution function for the noise parameter: ⟨O⟩_t = ∫⟨O⟩_x f(x; t) dx The stability of the quantum observable between two time points, t_1 and t_2, is then quantified by the absolute difference in the mean values of ⟨O⟩ obtained at those times, defined as s(t_1, t_2) = | ⟨O⟩_t_1 - ⟨O⟩_t_2 |

§.§ Reliability

We next quantify device reliability by comparing the statistical distributions of various characterization metrics collected at different times and register locations. When these metrics exhibit statistical similarity, the device behavior is considered to be reliable. The statistical distance between distributions is calculated using the Hellinger distance, which offers ease of calculation and interpretation: H( f(x; t_1), f(x; t_2) ) = √(1 - ∫√(f(x; t_1) f(x; t_2)) dx) The Hellinger distance above quantifies the statistical similarity of a device at different times, such that when the distance is small, the device behaves statistically similarly at both times. This is expected when the underlying noise process is stationary. However, larger values of the distance imply that noise processes within the device are non-stationary and lead to noticeable changes in device properties. The timescales on which such statistically significant changes are measured represent an important metric for evaluating the reliability of the device relative to a desired tolerance.

§.§ Stability Bounds

We now establish an analytical and intuitive connection between output stability and device reliability. Specifically, we show how device reliability constrains the outcomes of a quantum program executed on a NISQ device by examining the role of fluctuations in device metrics. Let s_tol denote a specified tolerance on the stability metric introduced earlier, and let the reliability of the quantum device between times t_1 and t_2 be quantified by the Hellinger distance H_X, as discussed previously. A small numerical sketch of estimating this distance from characterization data is given below.
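The following sketch is our own illustration, not the authors' analysis code: it estimates the Hellinger distance from two samples of a device metric (for example, daily CNOT error values at two times) using a common histogram binning, which is an assumed discretization of the integral above.

```python
import numpy as np

def hellinger_distance(samples_t1, samples_t2, bins=30):
    """Estimate H(f(x; t1), f(x; t2)) from two samples of a device metric."""
    lo = min(np.min(samples_t1), np.min(samples_t2))
    hi = max(np.max(samples_t1), np.max(samples_t2))
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(samples_t1, bins=edges)
    q, _ = np.histogram(samples_t2, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    # discrete analogue of H^2 = 1 - integral of sqrt(f1 * f2) dx
    return float(np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q)))))
```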
We determine the maximum bound H_X^max(t_1, t_2) on the Hellinger distance under the constraint s(t_1, t_2) < s_tol. We begin by noting that the stability satisfies s^2(t_1, t_2) ≤ ( ∫ | ⟨O⟩_x { f(x; t_1) - f(x; t_2) } | dx )^2 where the inequality stems from the absolute value on the integrand. Per Hölder's inequality, if m, n ∈ [1, ∞] and 1/m + 1/n = 1, then ∫ | f(x) g(x) | dx ≤ ( ∫ |f(x)|^m dx )^1/m ( ∫ |g(x)|^n dx )^1/n Thus, the right-hand side of Eq. (<ref>) becomes ( ∫ | ⟨O⟩_x { f(x; t_1) - f(x; t_2) } | dx )^2 ≤ ( ∫ | ⟨O⟩_x |^m dx )^2/m ( ∫ | f(x; t_1) - f(x; t_2) |^n dx )^2/n Choose m = ∞, n = 1 and define c = sup_x |⟨O⟩_x|. This circuit-specific constant satisfies lim_m→∞ ( ∫ |⟨O⟩_x|^m dx )^1/m ≤ lim_m→∞ ( ∫ c^m dx )^1/m = c Thus, we may then reduce Eq. (<ref>) as s(t_1, t_2)^2 ≤ lim_m→∞, n=1 ( ∫ | ⟨O⟩_x |^m dx )^2/m ( ∫ | f(x; t_1) - f(x; t_2) |^n dx )^2/n ≤ c^2 ( ∫ | √(f(x; t_1)) - √(f(x; t_2)) | ( √(f(x; t_1)) + √(f(x; t_2)) ) dx )^2 ≤ c^2 ∫ ( √(f(x; t_1)) - √(f(x; t_2)) )^2 dx ∫ ( √(f(x; t_1)) + √(f(x; t_2)) )^2 dx where the last step uses Hölder's inequality with m = n = 2. This yields s(t_1, t_2) ≤ 2cH√(2-H^2), which can be rearranged to yield the maximum H_max(t_1, t_2) = √(1-√(1-ϕ)) with ϕ = s_tol^2 / (4c^2). This sets an upper limit H_max on the Hellinger distance that ensures the desired stability criterion s_tol is met.

§ VALIDATION

§.§ Experimental data

We utilized data obtained from the transmon device called washington, a 127-qubit register with heavy-hexagonal connectivity developed by IBM. The publicly available characterization data for the washington device was used to create a dataset comprising specific device metrics (refer to Table 1). This dataset was constructed from a subset of the device characterization data spanning a 16-month period from January 1, 2022, to April 30, 2023. The Qiskit software library <cit.> was employed to access the collected characterization data online. These metrics correspond to the minimum requirements for the physical implementation of quantum computing <cit.> and fall into one of five classes: SPAM (state preparation and measurement) fidelity, single-qubit gate fidelity, two-qubit entangling gate fidelity, duty cycle (gate length to coherence time ratio), and addressability (ability to measure a register element without interference from other qubits). Specifically, these 16 metrics capture the noise processes of the five qubits employed in the test circuit illustrated in Fig. <ref>. Our simulation of the test circuit (described in the next section) relies on data pertaining to these 16 metrics, which enables us to estimate the time-varying joint distribution of circuit noise. Utilizing this estimated distribution, Monte Carlo sampling is performed to simulate the test circuit and validate the theory presented earlier.

§.§ Test circuit

We validate the stability bound using a numerical simulation of the Bernstein-Vazirani circuit <cit.> with a noise model built from the characterization data presented in the previous section. The Bernstein-Vazirani algorithm determines a secret n-bit string r encoded in an oracle. Our focus is on assessing the success probability of correctly computing the secret bit string using the fewest number of queries possible. In contrast to the classical approach, which requires n queries, the Bernstein-Vazirani algorithm achieves the same outcome with just one query. Fig. <ref> illustrates the quantum circuit corresponding to a 4-bit secret key. The observable for the problem is O = Π_r = |r⟩⟨r|, where |r⟩ = ⊗_i=1^n |r_i⟩ with r_i ∈{0,1}. A generic construction of this circuit is sketched below.
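For concreteness, a generic Bernstein-Vazirani construction in Qiskit is sketched below. This is our own minimal example, not the exact circuit of Fig. <ref>; the secret string, the bit-ordering convention, and the omission of the noise model are assumptions.

```python
from qiskit import QuantumCircuit

def bernstein_vazirani(secret: str) -> QuantumCircuit:
    """Generic Bernstein-Vazirani circuit recovering an n-bit secret string r."""
    n = len(secret)
    qc = QuantumCircuit(n + 1, n)              # n data qubits plus one ancilla
    qc.x(n)                                    # ancilla starts in |1>
    qc.h(range(n + 1))                         # ancilla -> |->, data -> uniform superposition
    for i, bit in enumerate(reversed(secret)): # oracle: CNOT for every bit of r equal to 1
        if bit == "1":
            qc.cx(i, n)
    qc.h(range(n))                             # interfere to reveal r
    qc.measure(range(n), range(n))             # ideally returns r with a single query
    return qc

qc = bernstein_vazirani("1011")                # a 4-bit secret key, as in the example above
```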
§.§ Method

We used numerical simulations to test the reliability of a model of a noisy quantum device and to investigate the boundedness of the stability metric as predicted by the theory above. We first mapped the 16 noise parameters necessary for simulating the 5-qubit Bernstein-Vazirani circuit shown in Fig. <ref> to distinct independent noise processes, selecting them based on the criteria outlined in <cit.> for the physical implementation of quantum computing. The parameters are mapped to gate- and register-specific noise models in Table <ref>. For example, the asymmetric binary channel for register 0 flips the measured output bit b_0 to b_0⊕1 with probability x_0, while the coherent phase error channel for the Hadamard gate H applied to register 0 transforms the underlying quantum state as CP(HρH) = R_z(θ) HρH R_z^†(θ). Thermal relaxation is modeled by an exponential dephasing process that depends on the T_2 time and the duration of the underlying gate (not shown here). While the 16 noise processes above act independently, the underlying noise parameters are assumed to be correlated. We construct a joint distribution over these parameters using the method of Gaussian copulas, cf. Fig. <ref>. The copula itself is defined as Θ(y) = exp( -1/2 (y-μ)^T Σ^-1 (y-μ) ) / ( (2π)^n/2 |Σ|^1/2 ) where the correlation elements Σ_i,j are derived from Pearson correlation coefficients computed using the daily metric values available from the washington data set. For a Gaussian copula, the corresponding 16-dimensional noise distribution takes the form f_X(x; t) = Θ[ F_X_1(x_1; t), ⋯, F_X_n(x_n; t) ] ∏_j=1^n f_X_j(x_j; t) where F_X_j(x_j; t) is the cumulative distribution function at time t and Θ(·) is the copula function. These generated distributions are then used to calculate the Hellinger distance in Eq. <ref>. Our numerical studies of quantum circuit stability generate an ensemble of noisy simulations by drawing samples from the multi-parameter noise distribution represented by Eq. <ref>. We initially establish a joint distribution from the daily data gathered in January 2022 for the washington device, utilizing copulas. Over the next 15 months, we introduce minor perturbations to this distribution, ensuring that the Hellinger distance between the perturbed and the original January 2022 distributions never exceeds H_max. In this perturbation scheme, the marginal distribution of the CNOT gate error between qubits 1 and 2 is modeled using a beta distribution, which is based on the aforementioned January 2022 daily data. Small, random perturbations to the beta distribution parameters are incorporated over the 15 months for the CNOT error, while maintaining the Hellinger distance constraint. For each perturbed distribution, we generate 100,000 noise metric samples and execute 100 Qiskit simulations (each with 8192 shots). The stability metric is then computed from the obtained output, as per Eqn. <ref>.

§.§ Results

Figure <ref> presents the simulation results illustrating the relationship between the stability metric (s) and the reliability of a quantum device characterized by the Hellinger distance (H). The results demonstrate that when H ≤ H_max the device is reliable, in the sense that the temporal difference of the observable (s) remains within the specified upper bound (s ≤ s_tol). In our simulations, we set the tolerance threshold s_tol = 20%, which limits the maximum acceptable deviation in the expectation value over time. According to Eqn. <ref>, this results in an upper limit of 7.1% for the device reliability metric H_max. The lower panel of Fig.
<ref> presents the calculated Hellinger distance between the multi-dimensional noise processes characterizing the device. These calculations show how the noise fluctuates on a monthly basis while still respecting the H_max constraint. While time-varying, these processes emulate the behavior of a reliable device. The upper panel of Fig. <ref> presents the corresponding stability metric, which never exceeds the 20% tolerance. Moreover, we find that the stability is nearly two orders of magnitude smaller than the specified tolerance, with an average of approximately 0.6%. By selecting a reliable device whose temporal noise variation remains within the defined bounds, we can ensure the stability of the program output within the desired tolerance.

§ CONCLUSIONS

Output stability is crucial in quantum computing research: non-stationary noise processes in quantum devices can produce results that fluctuate with the time-varying device noise characteristics, rendering them unsuitable for meaningful interpretation and for drawing scientific conclusions. The variations in superconducting qubits, attributed to certain oxides on the superconductors' surface that are modeled as fluctuating two-level systems (TLS) <cit.>, have been extensively studied. Ongoing research focuses on addressing the time-varying nature of quantum noise through modeling <cit.>, characterizing noise sources and tracking their temporal profile <cit.>, and integrating statistical knowledge into quantum architectures using innovative techniques <cit.>. This paper explores the relationship between device reliability and output stability by considering a user-defined upper bound on variations in expectation values. The goal is to assess the stability of program outputs by evaluating the reliability metric within a specified tolerance bound through simulations.

§ ACKNOWLEDGMENTS

This work is supported by the U.S. Department of Energy (DOE), Office of Science, Early Career Research Program. This research used computing resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
http://arxiv.org/abs/2307.04537v1
20230710130246
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception
[ "Chi-Chih Chang", "Wei-Cheng Lin", "Pei-Shuo Wang", "Sheng-Feng Yu", "Yu-Chen Lu", "Kuan-Cheng Lin", "Kai-Chiang Wu" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception Chi-Chih Chang11, Wei-Cheng Lin11, Pei-Shuo Wang11, Sheng-Feng Yu112, Yu-Chen Lu112, Kuan-Cheng Lin11 and Kai-Chiang Wu1 1 National Yang Ming Chiao Tung University 2 Macronix International Co., Ltd. August 12, 2023
================================================================
In this work, we present an efficient and quantization-aware panoptic driving perception model (Q-YOLOP) for object detection, drivable area segmentation, and lane line segmentation in the context of autonomous driving. Our model employs the Efficient Layer Aggregation Network (ELAN) as its backbone and task-specific heads for each task. We employ a four-stage training process that includes pretraining on the BDD100K dataset, finetuning on both the BDD100K and iVS datasets, and quantization-aware training (QAT) on BDD100K. During the training process, we use powerful data augmentation techniques, such as random perspective and mosaic, and train the model on a combination of the BDD100K and iVS datasets. Both strategies enhance the model's generalization capabilities. The proposed model achieves state-of-the-art performance with an mAP@0.5 of 0.622 for object detection and an mIoU of 0.612 for segmentation, while maintaining low computational and memory requirements. Object detection, semantic segmentation, quantization-aware training, autonomous driving

§ INTRODUCTION

Panoptic perception systems are critical components of autonomous cars, enabling them to perceive and understand their environment comprehensively. These systems solve multiple vision tasks simultaneously, including object detection, lane line segmentation, and drivable area segmentation, and thereby generate a rich understanding of the road scene. In order to solve the multi-task problem of panoptic driving perception, we develop a low-power, multi-task model tailored for traffic scenarios, addressing the challenges of object detection and semantic segmentation. The aim is to create efficient algorithms capable of accurately recognizing objects and segmenting both lane lines and drivable areas while maintaining minimal computational cost, rendering them ideal for deployment in resource-constrained environments such as mobile devices, IoT devices, and embedded systems. To achieve low power consumption, we adopt a neural network architecture optimized for energy efficiency. The development process involves reducing the size and complexity of the models used for object detection and segmentation, as well as quantizing the model to minimize energy consumption. Our panoptic driving perception system reaches 93.46 FPS on an NVIDIA V100 and 3.68 FPS on the MediaTek Dimensity 9200 Series platform. Meanwhile, it attains 0.622 mAP and 0.612 mIoU on the object detection and segmentation tasks of the competition iVS dataset.

§ METHOD

Our model, derived from YOLOPv2 <cit.> and YOLOv7 <cit.>, is specifically designed to address both object detection and segmentation tasks. It comprises five main components: the backbone, the neck, the detection head, the drivable area segmentation head, and the lane line segmentation head. The backbone is the Efficient Layer Aggregation Network (ELAN) <cit.>, optimized for rapid and efficient feature extraction.
The neck of our model is a Spatial Pyramid Pooling (SPP) network <cit.>, which facilitates the handling of objects with varying scales and sizes by pooling features at multiple resolutions. This enhancement improves the accuracy and robustness of object detection. The detection head is based on RepConv <cit.>, an innovative neural network architecture that merges the efficiency of mobile networks with the accuracy of more complex models. Subsequently, non-maximum suppression is applied to the output of the object detection process to generate the final predictions. Consequently, our model is capable of accurately detecting objects in images while managing computation and memory requirements. Furthermore, in addition to object detection, our neural network also encompasses task-specific heads for drivable area segmentation and lane line segmentation. These dedicated heads possess distinct network structures that are optimized for their respective tasks. As drivable area segmentation and lane line segmentation generate separate predictions, we allow the result of lane line segmentation to overlap with the result of drivable area segmentation. In summary, our model is engineered to optimize efficiency and accuracy while also addressing the challenges associated with multi-task learning. Its unique combination of components and specialized task heads makes it ideal for real-world applications such as autonomous driving and object recognition in resource-constrained environments. A visual representation of our model architecture is presented in Figure <ref>.

§.§ Loss Function

As we modify the head of YOLOPv2 <cit.> to support multi-label prediction, we introduce a loss function derived from HybridNets <cit.> to enhance the performance of our approach. The loss function for the object detection task consists of three components, L_det = α_1 L_class + α_2 L_obj + α_3 L_box Specifically, for L_det, focal loss is used in both L_class and L_obj. The classification loss, L_class, is responsible for penalizing classification errors, while L_obj is used for predicting object confidence. Both terms are implemented with focal loss <cit.>. The term L_box represents the similarity between the predicted results and the ground truth by considering the overlap rate, aspect ratio, and scale. We implement L_box using the smooth L1 loss function. The coefficients α_1, α_2, and α_3 are hyperparameters used to balance the detection losses. The objective for the lane line segmentation task combines three components, L_seg_ll = β_1 L_Tversky + β_2 L_Focal + β_3 L_Jaccard The first term, the Tversky loss <cit.> L_Tversky, is used to address the issue of data imbalance and to achieve a much better trade-off between precision and recall, while the second term, L_Focal, aims to minimize the per-pixel classification error and focuses on hard labels. The third term, L_Jaccard, is utilized to measure the similarity between the predicted and ground-truth segmentation masks. The coefficients β_1, β_2, and β_3 are hyperparameters used to balance the losses. On the other hand, the objective for the drivable area segmentation task combines only two components: L_seg_da = γ_1 L_Tversky + γ_2 L_Focal The coefficients γ_1 and γ_2 are hyperparameters used to balance the losses. A minimal sketch of these segmentation loss terms is given below.
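For illustration, a minimal PyTorch sketch of the Tversky and focal terms that enter the segmentation objectives above is shown below. It is not the authors' implementation; the Tversky parameters (alpha, beta), the focal exponent gamma, and the mean reduction are our assumptions.

```python
import torch
import torch.nn.functional as F

def tversky_loss(logits, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss; alpha and beta trade off false negatives against false positives."""
    prob = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    tp = (prob * onehot).sum(dim=(0, 2, 3))
    fn = ((1 - prob) * onehot).sum(dim=(0, 2, 3))
    fp = (prob * (1 - onehot)).sum(dim=(0, 2, 3))
    return (1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)).mean()

def focal_loss(logits, target, gamma=2.0):
    """Multi-class focal loss that down-weights easy pixels."""
    logp = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(logp, target, reduction="none")
    return (((1 - torch.exp(-ce)) ** gamma) * ce).mean()

def seg_da_loss(logits, target, gamma1=1.0, gamma2=1.0):
    """Weighted Tversky plus focal combination, as used for the drivable-area head."""
    return gamma1 * tversky_loss(logits, target) + gamma2 * focal_loss(logits, target)
```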
The overall objective, L_all, for our final model combines the object detection loss L_det and the segmentation losses to learn both tasks at the same time: L_all = δ_1 L_det + δ_2 L_seg_da + δ_3 L_seg_ll The coefficients δ_1, δ_2, and δ_3 are hyperparameters used to balance the detection loss and the segmentation losses. §.§ Quantization Quantization-Aware Training (QAT) is a technique aimed at making neural networks more amenable to quantization. During QAT, we introduce the quantization error during training by sequentially applying quantize and dequantize operations. This enables the network to learn more robust representations that can be efficiently quantized during inference. We employ the Straight-Through Estimator (STE) <cit.> algorithm for QAT, which offers a simple and efficient approach. With STE, we round the weights and activations to the nearest quantization level during forward propagation, while utilizing the gradients of the unquantized values during backward propagation. In this manner, the network can backpropagate the gradients through the quantization operation, which is not differentiable in its original form. By simulating the quantization error during training, we can ensure that the network learns robust features that are less sensitive to quantization. § IMPLEMENTATION DETAIL §.§ Data Preparation As the organizers of the contest provided only a portion of the BDD100K <cit.> dataset, we opted to use the complete BDD100K dataset to augment the training data. In previous works that used the BDD100K dataset for semantic segmentation, the focus was typically on segmenting only the drivable areas and lane lines. There were no attempts to further classify the drivable areas or lane lines into multiple categories. However, our semantic segmentation task involves categorizing images into six classes: background, main lane, alternative lane, single line, double line, and dashed line. This is different from previous works, which only segmented images into two classes: line and lane. Therefore, we re-generate the six classes of segmentation labels for the BDD100K dataset. For the object detection task, the objective is to detect four types of objects: pedestrian, vehicle, scooter, and bicycle. In the case of scooters and bicycles, both the rider and the respective vehicle are included within the bounding box. However, the BDD100K dataset labels riders, scooters, and bicycles as distinct entities. To comply with the task requirements, we employ the Hungarian algorithm <cit.> to pair riders with their corresponding scooters or bicycles and label them within the same bounding box. §.§ Training Process In our experiments, the training process consists of several stages: 1) initial pretraining on the BDD100K <cit.> dataset, 2) pretraining on BDD100K with mosaic augmentation <cit.>, 3) finetuning on both the BDD100K and iVS datasets, and 4) quantization-aware training (QAT) on the integrated iVS and BDD100K datasets. Initially, we train our model on the BDD100K dataset without mosaic for 300 epochs and then turn on mosaic augmentation for 150 epochs. Subsequently, we jointly train the model on both the BDD100K and iVS datasets for an additional 150 epochs. Finally, we apply QAT <cit.> for an extra 20 epochs for quantization. Data Augmentation Techniques. To enhance the model's generalization capabilities, we apply several data augmentation techniques during the training process.
These techniques include normalization, random perspective transformation, HSV color space augmentation, horizontal flipping, and mosaic. By simulating variations that may occur in real-world scenarios, these techniques improve the model's ability to adapt to new data. The mosaic technique is enabled in the second and third stages and is turned off for the last 10 epochs of the third stage. In detail, all images are normalized with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225), and random perspective transformation is applied with a scale factor of 0.25 and a translation factor of 0.1. For HSV color space augmentation, the hue, saturation, and value factors are 0.015, 0.7, and 0.4, respectively. Weight Initialization. The weights of the backbone and detection head of our model are initialized from the YOLOv7 <cit.> pretrained weights, while all other parameters are randomly initialized. Implementation Details. We resize all images of both the BDD100K <cit.> and iVS datasets to 384 × 640. The Adam optimizer is used for optimization. Different batch sizes are used for different stages: 32 during the first and second pretraining stages, 32 during finetuning, and 16 during quantization-aware training (QAT). The default anchor sizes are set as (12,16), (19,36), (40,28), (36,75), (76,55), (72,146), (142,110), (192,243), and (459,401). The learning rate scheduler employed is cosine annealing with a warm-up phase, and the initial learning rates are set to 1e-2 during the first pretraining, 5e-3 during the second pretraining, 5e-4 during finetuning, and 5e-5 during QAT. The minimum learning rates are set to 1e-5 during the first pretraining, 5e-6 during the second pretraining, 5e-7 during finetuning, and 5e-8 during QAT. The warm-up phase is set to 5 epochs during pretraining and 0 epochs during finetuning and QAT. The values of the coefficients for the losses are reported as follows: α_1 = 0.5, α_2 = 1.0, α_3 = 0.05, β_1 = 1.0, β_2 = 1.0, β_3 = 1.0, δ_1 = 1.0, δ_2 = 1.0, γ_1 = 0.2, γ_2 = 0.2, and γ_3 = 0.2. These coefficients are used in the computation of the loss function, which is a crucial component of our proposed method. §.§ Inference Process The inference process involves pre-processing the input images, which includes resizing them from 1080 × 1920 to 384 × 640. Following this, images are normalized with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). Post-processing is then carried out separately for the detection and segmentation outputs. In the detection part, the intersection over union (IoU) threshold of non-maximum suppression (NMS) is set to 0.25, and the confidence threshold is set to 0.05. In the segmentation part, the results from the two segmentation heads are merged, and the output is upsampled from 384 × 640 to 1080 × 1920. § EXPERIMENTAL RESULTS §.§ Environment Setup We conducted our experiments using 8 NVIDIA V100 GPUs for training. PyTorch 1.10 <cit.> and TensorFlow 2.8.0 <cit.> were used to implement our models and training pipeline, while OpenCV 4.6.0 <cit.> was used for image pre-processing. Our model architecture was based on the publicly available PyTorch implementations of YOLOP <cit.> and YOLOv7 <cit.>. To migrate the model from PyTorch to TensorFlow, we first translated the PyTorch model into the ONNX[https://onnx.ai/] format, and then used the onnx2tflite[https://github.com/MPolaris/onnx2tflite] toolkit to convert ONNX into a TensorFlow (.h5) and a TFLite (.tflite) model.
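A minimal sketch of the first step of this migration path, exporting a trained PyTorch model to ONNX, is shown below. The file name, opset version, and head names are our own assumptions for illustration; the subsequent ONNX-to-TFLite conversion depends on the onnx2tflite toolkit and its options, which we do not reproduce here.

```python
import torch

def export_to_onnx(model, onnx_path="q_yolop.onnx", height=384, width=640):
    """Export a trained PyTorch model to ONNX for later conversion to TensorFlow/TFLite."""
    model.eval()
    dummy = torch.randn(1, 3, height, width)           # one RGB frame at the training resolution
    torch.onnx.export(
        model, dummy, onnx_path,
        opset_version=12,                               # assumed opset; adjust to the converter's needs
        input_names=["image"],
        output_names=["det", "seg_da", "seg_ll"],       # hypothetical names for the three task heads
        dynamic_axes={"image": {0: "batch"}},           # allow a variable batch size
    )
    return onnx_path

# Usage (assuming `net` is the trained multi-task model):
# export_to_onnx(net)
# The exported .onnx file can then be fed to the onnx2tflite toolkit to obtain .h5 and .tflite models.
```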
§.§ Main Results We present the performance of our model on the final testing dataset provided by the contest organizer at different training stages. Initially, we trained the model only on the BDD100K <cit.> dataset. However, due to the variation in data distribution between BDD100K and the target task, the model may not generalize well to the target task. To address this issue, we added the iVS dataset to the training process and performed mixed-data finetuning (i.e., the third stage). This approach enabled the model to adapt itself to better fit the target task, as the iVS dataset provided additional data with a data distribution similar to that of the target task. By training on this diverse dataset, the model was able to learn more effectively from the data and improve its performance on the target task. The performance of our proposed model is evaluated through the various training stages. In the pretraining stage without mosaic, as depicted in Table <ref>, the model is trained on the BDD100K dataset, which effectively boosts the performance on all tasks. Following YOLOv4 <cit.>, we integrate the mosaic technique into our model training. However, in the pretraining stage with mosaic, shown in Table <ref>, we notice a decrease in performance across all tasks. The mosaic technique does not yield improved performance at this stage, which could potentially be attributed to the training being performed exclusively on the BDD100K dataset. As a result, the model may be more suited to the BDD100K dataset, leading to a slight decline in performance when applied to the iVS dataset. Nevertheless, further finetuning on the iVS dataset enables the model to achieve enhanced performance. In the third stage, the model is finetuned using a mix of the BDD100K and iVS datasets with mosaic augmentation, resulting in a significant improvement in object detection and lane line segmentation performance. Additionally, in the last 10 epochs, the mosaic augmentation is turned off to allow the model to recover its adaptability to normal images. §.§ Testing Results in the Competition Table <ref> shows the testing results on the public dataset of the competition provided by the contest organizer. Our approach is effective for both the object detection and segmentation tasks, achieving 0.495 mAP and 0.401 mIoU at the pretraining-with-mosaic stage. Finetuning the model on the mixed dataset improved the performance to 0.540 mAP and 0.615 mIoU, demonstrating the importance of the mixed dataset in overcoming domain shift. Applying QAT to the finetuned model not only maintained the model's performance but also improved the detection task, which achieved 0.622 mAP and 0.612 mIoU. The testing results on the private dataset of the competition provided by the contest organizer are shown in Table <ref>. Our approach achieves state-of-the-art performance in both object detection and segmentation tasks, with 0.421 mAP and 0.612 mIoU. Moreover, Table <ref> shows that our quantization strategy effectively reduced the model size by a factor of four and improved inference speed by a factor of three. These results demonstrate the effectiveness of our quantization strategy not only in improving model performance but also in reducing computational cost and memory footprint, which is important for the real-world deployment of deep learning models. §.§ Quantization Strategy The performance of the quantized network using different quantization paradigms is presented in Table <ref>.
We first observe that Post-Training Quantization led to a significant performance drop in the segmentation tasks, with only 0.285 and 0.248 mIoU achieved for drivable area and lane line segmentation, respectively. However, this performance drop can be mitigated by adopting a Quantization-Aware Training (QAT) strategy. Our experimental results demonstrate the effectiveness of QAT in mitigating the performance drop caused by quantization. Specifically, the quantized network achieved 0.569 mAP for object detection, 0.852 mIoU for drivable area segmentation, and 0.402 mIoU for lane line segmentation. These findings demonstrate the effectiveness of the QAT strategy in boosting the performance of the quantized network compared to the Post-Training Quantization strategy. § CONCLUSION In this work, we have successfully implemented a lightweight object detection and segmentation model. To improve its efficiency, we explored the effectiveness of two techniques: quantization-aware training and mixed-data finetuning (i.e., the third stage). Through extensive experimentation, we have demonstrated the effectiveness of these techniques in improving the accuracy and efficiency of our model. Our final model has achieved competitive results on the target dataset, demonstrating its potential for real-world applications.
http://arxiv.org/abs/2307.04428v1
20230710090716
Analysis of CN emission as a marker of organic compounds in meteoroids using laboratory simulated meteors
[ "Adriana Pisarčíková", "Pavol Matlovič", "Juraj Tóth", "Stefan Loehle", "Ludovic Ferrière", "David Leiser", "Felix Grigat", "Jérémie Vaubaillon" ]
astro-ph.EP
[ "astro-ph.EP", "physics.geo-ph" ]
Adriana Pisarčíková ([email protected]), Pavol Matlovič and Juraj Tóth (Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava, Mlynská dolina, 84248 Bratislava, Slovakia); Stefan Loehle, David Leiser and Felix Grigat (High Enthalpy Flow Diagnostics Group, Institute of Space Systems, University of Stuttgart, Pfaffenwaldring 29, 70569 Stuttgart, Germany); Ludovic Ferrière (Natural History Museum Vienna, Burgring 7, 1010 Vienna, Austria); Jérémie Vaubaillon (IMCCE, Observatoire de Paris, PSL, 77 Av Denfert Rochereau, 75014 Paris, France). Fragments of small solar system bodies entering Earth's atmosphere have possibly been important contributors of organic compounds to the early Earth. The cyano radical (CN) emission from meteors is considered as potentially one of the most suitable markers of organic compounds in meteoroids; however, its detection in meteor spectra has thus far been unsuccessful. With the aim to improve our abilities to identify CN emission in meteor observations and use its spectral features to characterize the composition of incoming asteroidal meteoroids, we present a detailed analysis of CN emission from high-resolution spectra of 22 laboratory simulated meteors including ordinary, carbonaceous, and enstatite chondrites, as well as a large diversity of achondrites (i.e., ureilite, aubrite, lunar, martian, howardite, eucrite, and diogenite), mesosiderite, and iron meteorites. We describe the variations of CN emission from different classes of asteroidal meteor analogues, and its correlation and time evolution relative to other major meteoroid components. We demonstrate that CN can be used as a diagnostic spectral feature of carbonaceous and carbon-rich meteoroids, while most ordinary chondrites show no signs of CN. Our results point out a strong correlation between CN and H emission and suggest that both volatile features are suitable to trace the contents of organic matter and water molecules present within meteoroids. For application in lower resolution meteor observations, we demonstrate that CN can be best recognized in the early stages of ablation and for carbon-rich materials by measuring the relative intensity ratio of the CN band peak to the nearby Fe I-4 lines. * First analysis of CN emission from various ablated meteorites * CN emission identified as a diagnostic spectral feature of carbon-rich meteorites * CN and H emission linked to organic matter and water content * Method for identification of CN in lower resolution meteor spectra proposed astrobiology, spectroscopy, meteorite § INTRODUCTION The various abundances of organic matter found in asteroids and comets originate from the formation processes of the interstellar medium <cit.>. Small solar system bodies are assumed to have been responsible for the emergence of prebiotic molecules necessary for the origin of life on Earth <cit.>. The impacting interplanetary material is considered to be one of the main contributors of organic molecules to early Earth <cit.>. While organic matter is abundantly present in all comets and spectral studies can focus on revealing the variations in the contents of different compounds, the abundance of organic matter in different types of asteroids remains an open question <cit.>.
Studies of the fragments of asteroids and comets – meteoroids – which continuously enter the Earth's atmosphere from various sources in the solar system can provide important spectral data to help tackle this issue. Meteoroids ablate in the Earth's atmosphere as meteors emitting strong radiation. Analyzing the emission spectra allows to gain detailed information about the atoms and molecules present in the meteoroid and interacting with the surrounding air <cit.>. A suitable trace feature for the detection of organic compounds in small solar system bodies captured through meteor observations appears to be the cyano radical (CN). CN has been detected by remote optical observations in several comets since the 19th century <cit.>. In recent years, even the first in-situ detection of CN in the coma of comet 67P/Churyumov–Gerasimenko has taken place <cit.>. The origin of the CN observed in the cometary coma has been long associated with hydrogen cyanide (HCN) as the sole source. However, in the last few decades, two main CN sources have been considered: CN-bearing refractories (HCN polymers, hexamethylenetetramine (HMT), tholin, CHON dust grains) and CN-bearing volatiles <cit.> as the dominant source of CN production (mainly HCN, cyanogen (C2N2), cyanoacetylene (HC3N), acetonitrile (CH3CN)). Refractories carrying CN are assumed to be released from the interior of the comet along with dust particles and could generate HCN or CN radicals. CN products derived from volatile CN-bearing species are formed during the photodissociation of these species. There has been several efforts to detect CN in meteor spectra in the past decades (see e.g., <cit.>). The detection was unsuccessful likely due to the typically insufficient resolution of meteor spectrographs, which are unable to resolve the CN band from the strong emission of surrounding Fe I lines. Because of its strong B → X transition of low excitation energy <cit.>, the CN emission is the most suitable tracer of organic compounds in the visible and near-UV range meteor spectra. At typical meteoric temperatures and instrumental resolutions, this vibrational band structure peaks at around 388.3 nm <cit.>. CN emission from meteor spectra is expected to originate either directly from the meteoroid composition and it may be generated from reactions with N bound in organic matter, or due to the interaction of the meteoric C atoms with molecular N2 originating in the atmosphere <cit.>. In this work we present an analysis of the CN emission in spectra of different types of meteorites tested in a plasma wind tunnel simulating meteoric conditions. The fitting of the CN band was previously done in the terrestrial rock argillite tested under the same laboratory conditions <cit.>. We provide the first overview of the presence, relative intensity, and time evolution of the CN emission in different meteorites representing a wide range of asteroidal materials. This way, we aim to indicate CN as a suitable tracer of organic matter in meteoroids, demonstrate the detectable variations of organic matter in different asteroidal materials, and help constrain the instrumental limits for an efficient detection of CN in meteors. First, in Section <ref>, we describe the laboratory conditions and instrumentation used for the meteorite ablation tests in the plasma wind tunnel and the data processing methodology. The following Section <ref> contains our results of a detailed study of the CN emission in spectra of different meteorites. 
In this section we focus on an analysis of the presence of CN and Hα in meteorite spectra and their mutual correlation, and an analysis of the relative intensity and time evolution of the CN band emission based on monochromatic light curves. The conclusions derived from the obtained results are summarized in Section <ref>. § LABORATORY EXPERIMENTS AND METHODS <cit.> established an experimental setup in an arc-jet wind tunnel facility suitable for the analysis of meteoroid entry physics. Overall, three experimental campaigns (2020-2022) were performed within a cooperation between the Comenius University in Bratislava, Slovakia (CUB) and the High Enthalpy Flow Diagnostics Group at the Institute of Space Systems, University of Stuttgart, Germany (HEFDiG). The following analysis is based on the measurement of spectra of 22 meteorite samples simultaneously captured by the high-resolution HEFDiG Echelle spectrograph and the spectrograph AMOS-Spec-HR from the CUB, which is used within the global AMOS (All-sky Meteor Orbit System) network <cit.> for observing spectra of meteors in the Earth's atmosphere <cit.>. Given the relatively low resolution of AMOS-Spec-HR, we only use these data to evaluate the possibility to recognize CN in corresponding low resolution spectra. The analysis of relative intensities and time evolution of CN emission from different meteorite types is based on the detailed Echelle spectra. The uniquely large dataset of tested meteorites allows us to examine the presence of CN in almost all major meteorite classes including different ordinary chondrites, enstatite chondrites, carbonaceous chondrites, achondrites and mesosiderite group of stony irons. These types represent the most abundant meteorite falls. §.§ Experiment conditions and instrumentation The Institute of Space Systems of the University of Stuttgart operates several plasma wind tunnels <cit.>, which were developed in the early 1980s for basic testing of thermal protection materials required for spacecraft to safely enter the atmosphere of planets. The meteorite experiments were carried out in the Plasma Wind Tunnel 1 (PWK1) with a plasma flow condition with local mass-specific enthalpy of 70 MJ kg-1 at a stagnation pressure of ∼24 hPa. This corresponds to the entry of a meteoroid with a diameter of ∼4 cm at an altitude of ∼80 km in the Earth's atmosphere, with an assumed meteoroid entry velocity of ∼12 km s-1. This plasma flow condition was used for the first tests with meteorite samples <cit.>. The Echelle spectrograph of HEFDiG is a fiber-fed system providing a wavelength range of 250–880 nm <cit.>. From the lower to the upper end of this spectral interval, the spectral dispersion varies from 43 pm px-1 to 143 pm px-1 (resolving power R ≈ 10 000). An Echelle high-order diffraction grating of 300 grooves per millimeter (gpmm) is utilized at orders 40–60. Another mounted diffraction grating with higher diffraction of about 1000 gpmm causes dispersion and alignment of obtained spectra. As a result, a high resolution and long wavelength interval is obtained. The AMOS-Spec-HR system provides an image resolution of 2048 x 1536 px (1.76 arcmin px-1) and a resulting field of view (FOV) of 60° x 45° and a frame rate of 15 fps. The essential components of this spectrograph are 6 mm f/1.4 lens and a digital camera. The setup of holographic diffraction grating with 1000 gpmm provides a dispersion of 0.5 nm px-1 (resolving power R ≈ 550). 
The spectral system allows to analyze spectral events in the visual spectrum range of approximately 370–900 nm. The large selection of tested meteorite samples, obtained from and in collaboration with the Natural History Museum Vienna, mainly consist of meteorite falls rather than finds to limit terrestrial contamination. Meteorite samples were cut into 1 cm diameter cylinders with lengths varying from ∼1 to 2 cm or into ∼1 cm diameter cubes depending on the availability and fragility of the samples. These dimensions are required for accurate experiment conditions with respect to the mentioned entry conditions. The meteorite samples were attached to a copper stick mounted on a standard ESA (European Space Agency) probe holder on a four-axis moving platform inside a 6 m long and 2 m wide vacuum chamber of the PWK1 plasma wind tunnel during experiment. The PWK1 plasma wind tunnel was evacuated, and subsequently, the magnetoplasmadynamic generator was turned on. The moving platform was used to transport the probe held outside the plasma flow to its direction after the air flow is stabilized. The duration of exposure of the sample to the air plasma flow ranged around 3-12 s depending on the meteorite composition and durability of the meteorite holder. For some smaller samples or samples with higher risk of fragmentation upon drilling for the copper stick holder, a high-temperature ceramic glue Resbond 940HT was used to attach the sample to the copper holder. The spectrum of a pure glue sample was obtained to ensure that the contamination from the glue to the obtained spectra was negligible, as was confirmed. An example of a melting Chelyabinsk meteorite sample in the PWK1 plasma wind tunnel is shown in Fig. <ref>. §.§ Data processing The calibration of the Echelle spectra of the ablated meteorites was performed after each laboratory experiment using a calibration lamp located at the position where the meteorite sample was previously placed. The radiation of the calibration lamp measured in the laboratory environment is used to convert the ADU camera units (Analog-to-Digital Units) to spectral radiance. Considering the typical case of the meteorite ablation of ∼4 s duration and the effort to obtain the highest possible camera gain, 15-70 frames of the meteorite emission spectrum are recorded. The resulting emission spectrum was obtained by summing the intensity profiles of the individual calibrated frames. The last step before calculating line intensities is subtracting spectral baselines using the Fityk program <cit.>. Within this program, a synthetic spectrum consisting of the main emission multiplets of the meteor was modeled and then fitted to the calibrated spectrum using the damped least-squares method (the Levenberg–Marquardt algorithm) to measure the relative intensities of spectral emission lines. For the shape of all modeled lines in the synthetic spectrum, Gaussian line profiles were used with appropriate full width at half maximum (FWHM) adjusted by automatic fit, typically ∼0.1 nm. The error bars calculation of the measured line intensities was estimated based on the signal to noise ratio (SNR) in each meteor spectrum. The multiplet numbers used in this work are taken from <cit.>. The spectral data analysis of AMOS data was carried out according to the procedure described in <cit.>. Each meteorite spectrum was corrected for noise, other sources of illumination and spectral sensitivity of the system and later manually scanned in individual frames of the video recording. 
The resulting meteorite spectrum was obtained by summing all intensity profiles and scaled using well-known lines and a polynomial fit of the third order. In this work, we analyzed the presence and relative intensity of the CN band in emission spectra of tested meteorites. The strongest peak of this band is located near 388.3 nm (studied in our analysis), followed by weaker CN peaks near 387.1 nm and 385.0 nm. Due to the high-resolution (R ≈ 10 000) of the Echelle data, the measurement of CN intensity in the spectra of ablated meteorites is straightforward, while in the case of the lower resolution (R ≈ 550) of the AMOS data, intensity of the CN band is affected by contributions of surrounding Fe lines. The comparison of the emission spectrum of Murchison meteorite in B → X CN band region in lower resolution data of AMOS and higher resolution Echelle data is displayed in Fig. <ref>. An illustration of the model of the CN (B → X) Δν = 0 band fitted to an observed spectrum of the Murchison meteorite is displayed in Fig. <ref>. The CN model was obtained using the line-by-line emission code PARADE <cit.>. Equilibrium was assumed between the translational, rotational, vibrational and electronic temperatures, which were manually varied to fit the simulated spectrum to the data recorded with the Echelle spectrometer. In order to reduce the effect of noise on the fit, the spectra of five successive frames were averaged for each temperature estimate. The fit of the CN band displayed in Fig. <ref> was obtained at the resulting rotational Trot = 6500 K and vibrational temperatures Tvib = 6500 K. § RESULTS §.§ The detection of CN in meteorite spectra and correlation with H We have studied high-resolution Echelle spectra of 22 different meteorites obtained during their simulated ablation in plasma wind tunnel facility. The primary focus of this section is on the study of the occurrence and relative intensity of the CN band measured relative to the main meteor emission multiplets of Fe I-15 and Mg I-2 representing silicate and metallic components in meteoroids. These multiplets were selected as they are among the most universally observed features in visible-range meteor spectra. Additionally, we have studied the correlation between CN and Hα near 656.3 nm since both features originate from volatiles embedded in meteorites and are potentially the best candidates for tracing water molecules and organic compounds in small solar system bodies <cit.>. The generated free plasma flow at the beginning of each experiment enabled us to observe the plasma spectrum before moving the meteorite sample to the plasma flow, i.e. before the meteorite ablation started, allowing us to identify plasma lines and possible contribution to H emission from outside source. All spectra were thoroughly examined for possible H contamination, and four meteorite spectra with CN emission (Bilanga, Eagle, Lancé, and NWA 11303 meteorites) were confirmed with additional source of H emission. In the case of the Bilanga, Eagle and Lancé meteorite spectra, H emission was already observed before the meteorite insertion and increased after ablation started, indicating that some fraction of water molecules and organic compounds originate in these meteorites. The additional source of H was also detected in the spectrum of the NWA 11303 meteorite but without significant intensity change after meteorite ablation starts, pointing out the absence of the original H source from the meteorite. 
The external H source probably originated in the evaporated water of the internal cooling system of the plasma wind tunnel facility. No contamination of the detected H emission was found in the majority of the presented meteorite spectra. In addition, out of the meteorites used in the analysis, four finds (Dhofar 1575, Mincy, NWA 13303, and Ragland meteorite) may be to some degree affected by terrestrial weathering, whereas all the other tested meteorites are falls and, thus, much more pristine. The effects of the terrestrial weathering will be further examined based on the time evolution of meteorite spectra from individual frames. In a previous work by <cit.>, based on a limited number of samples, a correlation was found between the intensity of the Hα line and the CN band, which we here confirm based on an extended set of samples (namely Eagle [EH5], Bilanga [DIO], Lancé [CO3.5], Mincy [Mesosiderite], and Northwest Africa (NWA) 11303 [LUN]). Fig. <ref> shows that meteorites with increased CN content also exhibit higher volatile H content. The strongest H and CN emissions were detected in the CM2 carbonaceous chondrite Murchison, which is, in fact, rich in hydrocarbons, amino-acids and water content ∼ 10 wt.% <cit.>. We have also found stronger CN and H emissions in other carbonaceous chondrite meteorites, namely the CO3.5 Lancé and the CV3 Allende meteorites. The Allende meteorite contains on average < 1 wt.% water content <cit.>, which was manifested by a significantly lower Hα line intensity compared to Murchison. Moreover, among all three tested carbonaceous chondrites, the Murchison meteorite (CM2) has the highest carbon content of 2.7 wt.% (mean elemental abundance), followed by Lancé (CO3.5) and Allende (CV3) meteorites with 0.65 wt.% and 0.27 wt.% carbon content, respectively <cit.>. Our results well reflect the real bulk elemental composition of these meteorites, as the Murchison meteorite exhibits the strongest CN/Fe I-15 ratio, followed by Lancé and Allende meteorites, as shown in Fig. <ref> (upper panel). To account for the differences in the bulk composition of the individual meteorites, we display the intensities of Hα line and CN relative to both Fe I and Mg I emission (Table <ref>). Within the group of achondrites, the strongest CN and H emissions were detected for the meteorite Dhofar 1575, belonging to the achondrite carbon-rich ureilite group. In ureilites, carbon is bound in the form of tiny grains of graphite and (nano)diamonds. Since this meteorite is a find, it is necessary to consider the potential influence of terrestrial weathering, although according to <cit.>, the weathering grade for this meteorite is low. The time evolution of the CN emission observed on monochromatic light curves (<ref>) revealed continuous release of CN during the ablation, also supporting embedded source of CN within the meteorite sample. Among other tested achondrites, relatively strong CN emission was detected for the lunar meteorite NWA 11303, mostly originating from the early stages of the meteorite ablation (see further discussion in Section <ref>). On the contrary, the martian meteorite Tissint (Shergottite) did not exhibit any CN or H emission. Moderate CN and H emission was detected in the aubrite meteorite Norton County. Here we have found the most significant difference in the intensity of CN and Hα relative to Fe I-15 and Mg I-2. 
The reason is its composition consisting of Mg-rich silicates and depleted in iron <cit.>, which is reflected in low CN/Mg and H/Mg ratios and relatively high CN/Fe and H/Fe ratios, respectively (Fig. <ref>). The Norton County aubrite contains ∼ 0.3 wt.% water content <cit.>. Moderate CN emission was also detected in the diogenite meteorite Bilanga. Interestingly, out of all the tested HED (howardite-eucrite-diogenite) meteorites, Bilanga is the only meteorite with detected CN as no CN was found in the eucrite Stannern or the howardite Sariçiçek. However, measurements of carbon isotopes which differentiate the presence of C caused by terrestrial contamination from indigenous content, confirmed the presence of indigenous carbon content in HED meteorites, including in the howardite Sariçiçek <cit.>. This level of carbon content however did not produce a detectable CN emission during the simulated ablation of the eucrite Stannern or the howardite Sariçiçek. To our knowledge, the carbon content of the Bilanga meteorite was not measured by previous authors, thus, we cannot compare with the other tested HED meteorites. The H and CN line intensities are below the detection limit (log10(H/Fe I) < -1.6 and log10(CN/Fe I) < -1.3, respectively) in most of the tested ordinary chondrites, including Košice (H5), Pultusk (H5), Buzzard Coulee (H4), Mocs (L5-6), NWA 869 (L3-6), Knyahinya (L/LL5), Chelyabinsk (LL5), and Kheneg Ljouâd (LL5/6). Therefore, CN content was considered absent or unreliable in these tested meteorites. While CN and H emissions were absent in most of the tested ordinary chondrites, they were surprisingly clearly detected in the LL3.4 ordinary chondrite Ragland. It has been reported that the terrestrial weathering altered metallic Fe, Ni and troilite to iron oxides and hydroxides <cit.> in the Ragland meteorite, and therefore we can assume a slight modification of its composition. However, Ragland has unusual mineralogical and chemical composition features for an LL ordinary chondrite. It has relatively high water content of 2.45 wt.% and it is the least metamorphosed ordinary chondrite investigated in this study <cit.>. Therefore, the observed spectral features may also represents its original, atypical composition <cit.>. We have found very faint CN band peak in the EH5 enstatite chondrite Eagle, which also corresponds with detected faint Hα emission. Most of the detected CN emission originated from early stages of the meteorite ablation, implying source from the outer layers of the meteorite. Influence of the terrestrial weathering of the sample therefore cannot be excluded, although the sample originates from an observed fall. The water and carbon content of the Eagle meteorite are ∼ 0.5 wt.% and ∼ 0.3 wt.%, respectively <cit.>. It is believed that besides carbonaceous chondrites formed in the outer solar system as the main source of hydrated minerals delivered to Earth, enstatite chondrites from the inner solar system also contributed to the origin of Earth´s water <cit.>. Moreover, enstatite chondrites are considered to be the material from which the proto-Earth was formed, as they have identical isotopic abundances to terrestrial rocks <cit.>. No CN emission was detected from the mesosiderite Mincy or the iron meteorite Mount Joy. Interestingly, we observed an onset of H emission in the early stages of the Mincy meteorite ablation with a gradual decrease and disappearance of the Hα line. 
Since significant hydration is not assumed to be present in mesosiderites, the detection of H at the beginning of the ablation may reflect an effect of terrestrial weathering on the outer layers of the sample. We note that Mincy is a meteorite find. §.§ CN/Fe I-4 intensity ratio measurements and detection in lower resolution spectra We have found that one of the most straightforward methods to recognize the presence of the CN band, which is also applicable to the lower resolution data, is by measuring the relative intensity ratio of the CN peak at 388.3 nm to the one line from the Fe I-4 multiplet positioned near 386.0 nm, as shown in Fig. <ref> and Table <ref>. Meteorites without CN present exhibit only very faint Fe I peak at the 388.3 nm. At 386.0 nm, all tested meteorites show a strong Fe I - 4 line peak. Without a considerable contribution of CN emission, the ratio between the Fe I lines at 388.3 nm/386.0 nm should remain relatively constant for different meteorite types, given that the ablation behavior of the sample is steady. Fig. <ref> shows the distinction of meteorites in which this intensity ratio was increased compared to meteorites with no detected CN emission. The boundary for recognition of CN emission in our data seems to be the value of 388.3 nm/386.0 nm ≈ 0.1. In general, we did not find CN in meteorites with the intensity ratio below this value. However, we note that the distinction of CN contribution based on the 388.3 nm/386.0 nm intensity ratio must be considered carefully. In this work, the presence of CN was confirmed by also taking into account the overall intensity of the CN band relative to other element lines and studying the time evolution of its emission. The majority of all meteorites tested in the wind tunnel with apparent detection of CN emission belong to the group of carbonaceous chondrites (Murchison, Allende, Lancé) and distinct achondrites (Dhofar 1575, Bilanga, Norton County, and NWA 11303 meteorites). The surprising detection of CN in the spectrum of the ordinary chondrite Pultusk is assumed to be due to a contaminating source, as we only observed CN emission at the beginning of the ablation (see Fig. <ref> and the discussion in Section <ref>). The measurement of the 388.3 nm/386.0 nm intensity ratio presents a suitable method of identifying the presence of CN emission in lower resolution data. To validate this method, we measured the intensity ratio in the meteorite ablation spectra captured by the AMOS-Spec-HR spectrograph routinely used for meteor observations. Our results show that the presence of the CN band is accompanied by increasing the 388.3/386.0 nm peak intensity ratio and decreasing line width ratio of these two peaks due to the contribution of the surrounding CN peaks near 387.1 nm and 385.0 nm (Table <ref>, Fig. <ref>). However, in the case of meteors, it is necessary to study these distinguishing features carefully, as the line emission depends on the flow enthalpy, i.e., on the entry speed (line intensity ratios may differ at different temperatures). §.§ Monochromatic light curves Next, we studied the time evolution of the CN emission in the spectra of tested meteorites by analyzing their monochromatic light curves. We have found that the early sharp increase of CN intensity can be observed in the earliest stages of the meteorite ablation, along with the onset of the emission of Na, due to the volatility of its atoms and the low excitation potential of the Na lines. 
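As an illustration of how such diagnostics could be computed from calibrated per-frame spectra, the sketch below measures the 388.3 nm/386.0 nm peak ratio with the 0.1 decision boundary quoted above and assembles a simple monochromatic light curve. The window half-width, the synthetic data layout (one calibrated spectrum per frame), and the function names are our own simplifying assumptions, not the pipeline actually used for the Echelle or AMOS data.

```python
import numpy as np

def peak_intensity(wavelength, radiance, center, half_width=0.15):
    """Maximum radiance inside a small window around `center` (wavelengths in nm)."""
    mask = np.abs(wavelength - center) <= half_width
    return float(radiance[mask].max()) if mask.any() else 0.0

def cn_diagnostic(wavelength, radiance, boundary=0.1):
    """Return the 388.3/386.0 nm peak ratio and a crude CN flag based on the boundary."""
    ratio = peak_intensity(wavelength, radiance, 388.3) / (peak_intensity(wavelength, radiance, 386.0) + 1e-12)
    return ratio, ratio > boundary

def monochromatic_light_curve(frames, center):
    """Peak intensity of one spectral feature in every frame; `frames` is a list of (wavelength, radiance) arrays."""
    return [peak_intensity(wl, rad, center) for wl, rad in frames]

# Example with synthetic frames standing in for calibrated Echelle exposures.
wl = np.linspace(380.0, 395.0, 1500)
frames = [(wl, np.random.rand(wl.size)) for _ in range(5)]
cn_curve = monochromatic_light_curve(frames, 388.3)                       # CN band peak per frame
ratio_curve = [cn_diagnostic(w, r)[0] for w, r in frames]                  # 388.3/386.0 ratio per frame
```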
An example of the monochromatic light curve of the CM2 carbonaceous chondrite Murchison with the most notable CN emission is displayed in Fig. <ref>. Five frames before and after the meteorite ablation showing spectrum noise level are displayed for a better distinction of the onset of the CN emission. The value of 0.0 s on the x-axis corresponds to the start of the meteorite ablation, i.e. to the onset of Na lines which are the first to begin to radiate. The intensity of Fe I-4 multiplet is only represented by the intensity of one line at 388.6 nm, close to the strongest CN peak. One can note a different behavior in the time-dependent CN emission, which peaks in the early stages of ablation compared to other elements detected in the meteorite spectrum showing slow onset of emission. In the later stages of the meteorite ablation, the CN generally radiates in a similar trend as the other elements, including very short and subtle flares. Similar behavior of early strong radiation to that of CN emission was observed for the low excitation Na lines. Due to the saturation of Na lines in most frames of the Echelle spectra, the relative intensity of the Na I lines was not investigated in this work. An interesting behavior was observed in the monochromatic light curve of the Murchison meteorite as a bright flare after the final stages of the meteorite ablation with increased Cr I-7 and Mn I-2 intensity (near 107 W/m^2/sr/nm and 170 W/m^2/sr/nm, respectively). This flare may be associated with a sudden release of a droplet of molten material with a specific composition consisting of chromium and manganese-bearing minerals. Monochromatic light curves of other ablated meteorites plotted from the highest to the lowest CN intensity can be found in the <ref>, demonstrating different content of organic compounds and the time evolution of the CN emission. Slight variations are observed in the onset of CN emission among spectra of the different ablated meteorites. Meteorite spectra of Murchison, Dhofar 1575 and Eagle exhibit CN emission simultaneously with Na emission. In the spectra of Allende, Bilanga, Norton County, NWA 11303 and Ragland meteorites, the onset of CN emission was observed from the second frame (around 0.08 s of ablation) and from the third frame (about 0.17 s of ablation) in the case of Lancé meteorite. In addition, the light curve shape of CN emission for the Dhofar 1575 meteorite (<ref>) is not characterized by a typically very steep initial increase of brightness but rather by a very slow, gradual increase and decrease in brightness, which does not indicate an effect of terrestrial weathering, but may rather reflect a heterogeneous distribution of carbon within the ureilite meteorite. However, a steep increase in CN intensity in the early stages of ablation and a subsequent steep decrease towards the noise level of the recording in the case of the lunar NWA 11303 meteorite (<ref>) points to a source from the surface layer of the meteorite, which may also result from terrestrial weathering. We note that the weathering grade of this meteorite is low <cit.>. In this case, the origin of the detected CN emission was not clearly resolved. Further laboratory analysis of the bulk composition of this meteorite could help resolve this issue. For further insight into the differential ablation of the studied meteorites, we measured the CN band peak intensity relative to the emission of other major atoms in individual frames. Fig. 
<ref> displays the time-dependent CN/Mg I-2 intensity ratio for all meteorites with detected CN content. Similarly to Fig. <ref>, five frames are plotted before the meteorite ablation starts. Fig. <ref> presents the evolution of CN emission relative to Mg I in the first 3.2 s of ablation for better visualization. However, as can be seen, the ablation duration of the Dhofar 1575, Eagle, Lancé, and NWA 11303 meteorite was shorter than for the other investigated meteorite samples. The analyzed relative CN intensity ratios can be affected by the specific composition of the meteorite fragments, as observed by the high Mg content in the Norton County meteorite or the high Fe content in the Ragland meteorite (Fig. <ref>). The CN line intensity measured relative to Fe I-15 in individual frames of meteorite ablation can be found in the <ref> for comparison. Regardless of the specific meteorite composition, a clear feature of the CN band is its strong peak in the early stages of the ablation and the subsequent sharp decrease in the CN/Mg and CN/Fe intensity ratio due to the gradual release of Mg and Fe dominating in the later stages. This result implies that, in lower-resolution spectra of real meteor observations, CN can be best detected in the early stages of the ablation in the upper atmosphere, before the strong emission of the surrounding Fe I begins. Sufficient instrument sensitivity and high frame rate may therefore play a key role for the detection of the early CN emission from meteors. As already mentioned, the most notable CN emission was detected from carbonaceous chondrites. The light curve shapes of three ablated carbonaceous meteorites do not show significant variations with the exception of CN content (<ref>). We observed the strongest CN peak in the CM2 Murchison meteorite, followed by the CV3 Allende and CO3.5 Lancé meteorites. In the case of the Lancé meteorite, it should be mentioned that the CN emission was observed starting from 0.17 s after the first detection of the meteorite spectrum (from the third frame) and the slight decrease of CN/Mg I-2 up to 0.5 s of ablation is caused by the significantly stronger Mg line (see also the monochromatic light curve of Lancé in <ref>). The time evolution of CN emission varies slightly between the achondrite meteorites (<ref>). The strongest peak of the CN band in the diogenite Bilanga was observed at around 0.08 s of the ablation process while in the lunar NWA 11303 and ureilite Dhofar 1575 meteorites, slightly later at 0.16 s and 0.3 s, respectively. The measured CN/Mg I-2 intensity ratio in the aubrite Norton County is continuously low (from the onset of CN emission at 0.08 s), but this fact is slightly affected by the relatively Mg-rich composition of this meteorite. When analyzed relative to the Fe I emission, the aubrite Norton County exhibits relative CN intensities closer to some other achondrites with notable CN presence (Fig. <ref>). The apparent increased trend of CN/Mg I-2 intensity ratio in Dhofar 1575 meteorite in the last stages of the meteorite ablation is the result of a significant decrease of Mg lines and relatively steadily slow CN release at the same time (see also the monochromatic light curve of Dhofar 1575 in <ref>). Our results suggest that ordinary and enstatite chondrites do not exhibit CN emission or only faint unrecognizable contribution near the background noise (<ref>). 
As discussed earlier, the ablation of LL3.4-type Ragland meteorite was accompanied by strong CN emission with the most atypical time-depended behavior. The monochromatic light curve of Ragland is characterized by a gradually increasing CN/Mg I-15 intensity ratio in the first few frames and a subsequent very short and strong flare for the longest period of time compared to other meteorites with CN emission (see also the monochromatic light curve of Ragland in <ref>). This behavior can potentially be related to terrestrial weathering of this meteorite. Although the CN emission was not reliably resolved from the summed spectral profile of the H5 ordinary chondrite Pultusk (Fig <ref>), we detected CN emission from the first second of the ablation (Fig. <ref>). The observed distinct light curve of Pultusk likely reflects the presence of a surface layer rich in CN content. In this specific case, it is difficult to advocate the terrestrial weathering as this meteorite is a fall that was recovered quickly after its landing on Earth. § CONCLUSIONS We present here the first in-depth analysis of CN emission from a wide range of laboratory tested meteorites serving as asteroidal meteor analogues. The simulated ablation conditions correspond to an atmospheric flight of a slow meteoroid (∼12 km s-1) at an altitude of approximately 80 km. The observed variations of CN emission in various meteorite types demonstrate that CN can be used as a diagnostic spectral feature of carbonaceous and relatively carbon-rich meteoroids. The strongest CN emission was found in carbonaceous chondrites (CM2, CV3 and CO3.5) and a C-rich ureilite. Moderate CN emission was found for diogenite, aubrite and lunar meteorite samples. Low CN contribution in the early stages of the ablation was found in an enstatite chondrite. In general, the CN band was either absent or not clearly detected in most of the ablated ordinary chondrites with the exception of the Ragland meteorite (LL3.4), consisting of moderately weathered material with originally atypical composition for an ordinary chondrite. CN was not detected in the tested eucrite, howardite, martian shergottite, mesosiderite and iron meteorites. Our results point out strong correlation between CN and H emission and suggest that both volatile features are suitable to trace contents of organic matter and water molecules present in meteoroids. While this study only focus on analogues of asteroidal meteors, our previous survey <cit.> pointed out strong H emission as a marker of the high volatile contents in cometary meteoroids. The analysis of monochromatic light curves of ablated meteorites has shown that CN emission can be best recognized in the early stages of the meteorite ablation, before the onset of surrounding Fe I lines. For application in lower resolution meteor observations, we therefore suggest that efficient detection of CN can be achieved during the early stages of meteor ablation in the upper atmosphere. Additionally, using lower resolution data from the meteor spectrograph AMOS we found that the measurement of the intensity ratio and line width ratio of the CN band peak near 388.3 nm to the Fe I-4 line peak near 386.0 nm can indicate the contribution of CN in lower resolution spectra dominated by surrounding iron lines. Our results suggest that terrestrial weathering of meteorites can affect their spectral signature including potentially affecting the tracers of water molecules and organic content. 
Such effects, typically detected in the more weathered meteorite finds, were resolved in specific monochromatic light curves showing CN or H emission only in the early stages of the ablation. This may be explained by a contaminant source of carbon on the surface layers of the meteorite. Nevertheless, the strong spectral distinction between the carbon-rich materials, specific achondrites, and ordinary chondrites confirms that the studied diagnostic spectral features correlate with their bulk composition and thus can be used to trace the original contents of organic compounds in meteoroids. § ACKNOWLEDGEMENTS We are thankful to the High Enthalpy Flow Diagnostics Group (HEFDiG) team of the Institute of Space Systems, University of Stuttgart for carrying out the meteorite ablation experiments. This work was supported by ESA grants under contracts No. 4000128930/19/NL/SC and No. 4000140012/22/NL/SC/rp, the Slovak Research and Development Agency grant APVV-16-0148, the Slovak Grant Agency for Science grant VEGA 1/0218/22, and the Comenius University Grants G-21-193-00 and G-22-145-00. J. Vaubaillon was supported by CNES, the French space agency, in the framework of the MALBEC project. We particularly thank all colleagues from HEFDiG in Stuttgart who supported and inspired the MetSpec campaigns. G. Batic (NHMW) is thanked for the preparation of the meteorite samples. § DATA AVAILABILITY The spectral data of the presented ablated meteorites will be made available upon reasonable request. § ADDITIONAL MONOCHROMATIC LIGHT CURVES
http://arxiv.org/abs/2307.04368v2
20230710064918
ECS -- an Interactive Tool for Data Quality Assurance
[ "Christian Sieberichs", "Simon Geerkens", "Alexander Braun", "Thomas Waschulzik" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.SY", "eess.SY" ]
ECS – an Interactive Tool for Data Quality Assurance. Christian Sieberichs, Simon Geerkens, Alexander Braun and Thomas Waschulzik. With the increasing capabilities of machine learning systems and their potential use in safety-critical systems, ensuring high-quality data is becoming increasingly important. In this paper we present a novel approach for the assurance of data quality. For this purpose, the mathematical basics are first discussed and the approach is presented using multiple examples. This results in the detection of data points with potentially harmful properties for use in safety-critical systems. § INTRODUCTION The development of machine learning (ML) based systems has led to their widespread use in research, industry, and everyday life. Even though ML systems show great performance in solving complex tasks, their use is mostly limited to domains where wrong decisions have only minor consequences. The application of ML systems in high-risk domains is currently problematic due to the required quality, the lack of trustworthiness, and the expected legal basis. To provide a legal framework for the application of ML systems, the European AI Act <cit.> is currently under development. Simultaneously, multiple projects from research and industry are dealing with the topic of ML systems in high-risk areas, such as "KI-Absicherung" <cit.> and "safetrAIn" <cit.>. All of these projects highlight the high requirements that are needed to protect humans from errors made by ML systems. High-risk ML systems have to fulfill the requirements according to <cit.> Chapter 2 "REQUIREMENTS FOR HIGH-RISK AI SYSTEMS" Article 10 "Data and data governance" Point 3: "Training, validation and testing data sets shall be relevant, representative, free of errors and complete". In this paper we introduce a new approach that will contribute to the future fulfillment of this requirement. It is showcased how different relevant aspects of the data can be analysed and how relations within the given data can be used for quality assurance. The presented approach is part of the QUEEN-method (Qualitätsgesicherte effiziente Entwicklung vorwärtsgerichteter künstlicher Neuronaler Netze, quality-assured efficient development of neural networks) <cit.>, which is a comprehensive approach for the development of quality-assured neural networks. In the scope of the QUEEN-method, two data quality assurance methods were developed, namely the integrated quality indicator <cit.> and the ECS (equivalent classes sets) <cit.>. These methods were developed simultaneously in close cooperation. In this paper we present the mathematical basis of the ECS and its use for quality assurance. The abilities and usage of the integrated quality indicator are covered in another submission <cit.>. The ECS is particularly used to analyse the local and global composition of data sets. Based on this, a wide variety of data quality properties is addressed, ranging from the identification of single data points such as outliers, false annotations or isolated data, to the identification of groups of data points such as decision boundaries and local groups of data points with identical output. The ECS makes it possible to identify all data points which do not match specifiable conditions. The method itself is thereby created in such a way that interactions between the user and the data are supported in order to simplify and speed up the quality assurance process.
§ RELATED WORK/STATE OF THE ART Despite the fact that data quality and quality assurance are widely necessary and researched, there exists no single generally accepted definition. Instead, there are several attempts to define data quality based on current developments. One example is given by <cit.>, who define data quality with respect to the intended use of the data. It is argued that data quality has to be a context-dependent term to be appropriately used in the context of a given task. In addition, the term "data quality" is split into multiple properties like accuracy, consistency, completeness, safety and more. In <cit.> many of these properties are listed and defined separately. In <cit.> data quality is additionally split into subjective and objective assessments of data quality. A general definition of data quality can thereby not be given. Instead, data quality is considered high when the data are fit for their intended purpose <cit.>. When data quality is addressed in standards, it is typically split into different properties which have to be analysed separately, as in <cit.>. A first step to assure data quality is the use of descriptive statistics <cit.>. Herein, statistical methods are used to gain greater insights into the given data. Common methods are the visualization via scatter plots and histograms, often combined with the measurement of central tendencies, dispersion and location parameters. Our proposed method extends the descriptive statistical methods, enables the visualization of multiple quality assurance aspects in one plot and enables a direct interaction between the quality indicator visualization and the data. When trying to assure the quality of data, another possible approach is the representation of the given data points in a lower-dimensional space using methods of dimensionality reduction. Commonly used methods are PCA <cit.>, tSNE <cit.> or UMAP <cit.>. These methods often produce representations interpretable by humans if the output dimensionality is chosen to be low enough. However, such methods often result in a considerable loss of information. The ECS, on the other hand, is computed on the original values and takes all the given information into account. Some approaches try to cover as many dimensions of the data quality as possible. One way to do this is by testing the data against predefined rules and assumptions. An example of such an approach is the pointblank R package <cit.>, which was created for agent-based data quality assurance. In this package, specific elements of the data are tested against predefined functions. As part of this, it can be tested whether the data are greater than, equal to, or lower than given values, and so on. Another method is given by DEEQU, published in <cit.> and <cit.>. This package allows for assumption-based unit tests which can be defined by the user. Tests on specific parameters of the data, similar to those already mentioned with regard to the pointblank package, are possible as well. A last method that should be mentioned here is shown in <cit.>. This approach showcases a probability-based method which calculates a value representing the probability that a data set is free of internal errors with respect to entered rules. The entered rules are based on the presence of data of certain values, comparable to the pointblank package. The main problem in using the mentioned approaches is the large amount of knowledge about the data that is required to create accurate assumptions.
On top of this, assumptions can only be created efficiently if the user is already aware that the data quality may be affected in some regard. Due to the reliance on relationships between data points, our method does not need any assumptions or rules to be specified by a user. Instead, our approach can be used without any prior knowledge about the data.

A different approach is to focus on just a single dimension of data quality. On the topic of outlier detection there are, for example, density-based algorithms like <cit.>. In this approach, the number of local neighbouring data points is calculated and the resulting local density is compared with that of the nearest neighbours. Another approach uses the DBSCAN algorithm <cit.> to cluster the given data. Based on this clustering, the method proposed by <cit.> calculates values to identify clusters of minimal size. These clusters are then regarded as possible anomalies. Another data quality property is the detection of possible outliers, which can also be addressed by density-based clustering. One example of such an algorithm is given by <cit.>. This algorithm uses a fixed clustering to identify clusters, followed by the computation of cluster distances. The clusters are classified as anomalous based on the inter-cluster distances and the deviation from the mean inter-cluster distance. Two quite similar approaches are <cit.> and <cit.>. Both approaches cluster the given data in a first step. The first one uses the previously mentioned DBSCAN, the second one uses a clustering algorithm named OPTICS <cit.>. In a second step, anomalous clusters are identified, once based on inverse distance weighting (IDW) and once using the kriging method. The main advantage of all of the mentioned methods is the reliable calculation of their respective data quality property. However, due to each method's focus on one specific property, they are only useful if it is already suspected that this property could contain errors. The advantage of our proposed method is that multiple data quality properties can be analyzed with one approach.

§ METHOD

The ECS is based on the idea that a data set can be split into input data and output data. The input data defines the dimensions of the data, henceforth called features, which can be used to predict the output features. The set of all possible inputs forms the input space I; accordingly, the output space O is formed by all possible outputs. To use the ECS properly, all feature values have to be numbers. Features which are not numeric have to be represented in some way as a number or a combination of numbers. To start the calculation of the ECS, two metrics are needed. These metrics should be chosen in such a way that data points which are "similar" according to the semantics of the task to be solved have a relatively small distance to each other, while "dissimilar" data points have a relatively large distance. The distances between two data points can be calculated in the input space and in the output space independently of each other. By doing so, it is possible to use different metrics for the distances in I and in O. Which metric is best suited depends on the given type of data and the task to be solved. In the following, the difference between data points in the input space is called the input distance d_RI. Accordingly, the difference between data points in the output space is called the output distance d_RO.
To differentiate between "similar" and "dissimilar" data points, the distances can be separated into different groups. The minimal approach is to create two groups: one for relatively small distances and another one for relatively large distances. Doing so requires a threshold, which is called δ_in for distances in the input space and δ_out for distances in the output space. These δ can be absolute distance values or a percentage of the maximum known distance between data points. They are set based on the data quality properties that should be identified and the data type used. By comparing two data points with each other, four possible scenarios can be distinguished:

* small input distance - small output distance
* small input distance - large output distance
* large input distance - small output distance
* large input distance - large output distance

Each of these scenarios shows a relation between the data points. If, for example, both distances are small, then the data points may represent a common use case with a typical output. A small input distance in combination with a large output distance, on the other hand, could indicate a complex area of the input space or an outlier. Either way, the identification of data properties based on two data points alone is not enough. For this reason, the following four ECS-sets are calculated. In these sets, the compared data points belonging to one of the above scenarios are stored. For a data set D, with data point comparisons d_c ∈ D^2:

ECS_EE(D) = {d_c | d_c ∈ D^2 ∧ d_RI(d_c) ≤ δ_in ∧ d_RO(d_c) ≤ δ_out}
ECS_EU(D) = {d_c | d_c ∈ D^2 ∧ d_RI(d_c) ≤ δ_in ∧ d_RO(d_c) > δ_out}
ECS_UE(D) = {d_c | d_c ∈ D^2 ∧ d_RI(d_c) > δ_in ∧ d_RO(d_c) ≤ δ_out}
ECS_UU(D) = {d_c | d_c ∈ D^2 ∧ d_RI(d_c) > δ_in ∧ d_RO(d_c) > δ_out}

Each of the four ECS-sets represents all comparisons between data points which result in one of the four scenarios. An E denotes a small distance, whereas a U denotes a large one. The first of the two letters represents the input distance and the second the output distance. Following this, ECS_EU contains all data point comparisons which result in a small input and a large output distance; ECS_UE, on the other hand, contains comparisons which result in a large input and a small output distance.

The information in the ECS-sets can be used to analyse the data points for each of the four scenarios. This way, it is possible to identify data points with specific properties. It would, for example, be possible to identify all data points which have many dissimilar data points in close proximity. It would also be possible to identify data points which show small distances in both the input and the output space. By doing so, certain areas of the input space can be identified which correlate with certain outputs in the output space. It could also be possible to identify features which differentiate certain data points from each other. The ECS-sets contain all the information of the data set which could be used for quality assurance. However, the raw sets are difficult for humans to read, especially when entire data sets should be analysed and not just a small subset of the data. The solution is a comprehensive representation of the ECS-sets in such a way that interesting data points can easily be identified. Before this can be done, it has to be determined which combinations of data points are the most interesting ones. The expectation here is that similar input data should create output data that is related in some way.
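To make this construction concrete, the following is a minimal sketch (not the authors' reference implementation) of how the four ECS-sets could be computed for a small numeric data set. It assumes Euclidean metrics for both d_RI and d_RO, which are free choices of the method, and represents each set as a boolean membership matrix over all ordered comparisons; the thresholds are given as fractions of the maximum observed distance, as in the examples below.

```python
# Schematic computation of the four ECS-sets; assumes numeric arrays and
# Euclidean metrics for d_RI and d_RO (both are assumptions, not requirements).
import numpy as np
from scipy.spatial.distance import cdist

def _as_2d(A):
    A = np.asarray(A, dtype=float)
    return A.reshape(-1, 1) if A.ndim == 1 else A

def ecs_sets(X, Y, delta_in_frac=0.3, delta_out_frac=0.0):
    X, Y = _as_2d(X), _as_2d(Y)
    d_in = cdist(X, X)                        # input distances d_RI
    d_out = cdist(Y, Y)                       # output distances d_RO
    delta_in = delta_in_frac * d_in.max()
    delta_out = delta_out_frac * d_out.max()  # 0 keeps only identical outputs
    small_in = d_in <= delta_in
    small_out = d_out <= delta_out
    # Boolean (n, n) membership matrices; entry [i, j] states whether the
    # comparison of data point i with data point j falls into the set.
    # The diagonal (self-comparisons) should be ignored when counting.
    return {
        "EE": small_in & small_out,
        "EU": small_in & ~small_out,
        "UE": ~small_in & small_out,
        "UU": ~small_in & ~small_out,
    }
```

A call such as `sets = ecs_sets(X, Y)` then allows the comparisons of each data point, e.g. `sets["EU"][i]`, to be inspected, which is exactly what the histogram representation described next does in aggregated form.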
Based on this expectation, it can be assumed that a combination of data points with a small input distance also has a small output distance. On the other hand, it would not be expected that data points with large input distances to each other show similar output data. The most interesting combinations of data points are therefore those which result in a small input distance. These comparisons can be displayed in particular by sorting the data point comparisons by input distance. An example of the sorted representation of the ECS_EE is shown on the right side of figure <ref>. Listed on the x-axis is the comparison between data points. This comparison is data point based and shows, for any data point, the comparison with its kth smallest distance in the input space. On the y-axis, it is displayed how many of these comparisons are part of the current ECS-set, in this case the ECS_EE. In this process, a function is created for every data point. To represent the entire data set, these functions are superimposed over each other. Every function visually displays whether, and which of, the nearest data points are part of the ECS_EE. The data set which was used to create the displayed ECS_EE is shown on the left side of figure <ref>. It is a simple data set created from two input features (a, b) and one output feature (color and shape). Increasing functions indicate that most of the data points with the kth smallest distance are part of the current ECS-set. Functions which do not increase indicate that the comparisons are part of another ECS-set. It should be emphasized here that, at the kth position, a function increases for exactly one of the four ECS-sets.

The resulting ECS-histograms consist of a large number of functions. Areas of the ECS-histogram in which large numbers of functions show the same behaviour are displayed darker; accordingly, smaller numbers of functions are displayed brighter. For the representation of the number of functions, gamma correction is used. This way, even single functions should stay visible. As stated before, it would be expected that a small input distance influences the output distance. An ideal data set would have a strong correlation between the position in the input space and in the output space. The resulting data point combinations would have small input and output distances for all the nearest neighbouring data points. This would result in a steep increase of all functions in the ECS_EE until all possible similar data points have been combined with each other; from this point on, the functions in the ECS_EE do not increase any further. The ECS_EE function created by a single data point in such an ideal data set is shown schematically in figure <ref>. The main diagonal thereby displays the maximum speed at which a function is able to increase. Data sets or individual data points which are not ideal create different functions. One extreme example would be a function which does not increase at all in the ECS_EE. This would be the case because there are no possible combinations with a small input distance, a small output distance, or both. Which of these possibilities is actually the case can be tested by using the other ECS-histograms. The benefit of representing the information of the data set in the form of the ECS-histogram is that the information is presented per data point: the neighbours of each data point and their relation to it are shown and can be compared against expectations.
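The per-data-point functions that make up an ECS-histogram can be sketched in the same spirit; again, this is an illustration of the construction described above, not the original implementation. For every data point, its comparisons are ordered by input distance and the membership of a chosen ECS-set is accumulated over the k nearest neighbours.

```python
# Cumulative ECS-set membership over the k nearest input-space neighbours.
import numpy as np

def ecs_functions(d_in, membership, k_max):
    """d_in: (n, n) input distances; membership: boolean (n, n) matrix for one
    ECS-set (e.g. the "EE" matrix from ecs_sets); returns an (n, k_max) array
    whose row i gives, for k = 1..k_max, how many of the k nearest neighbours
    of data point i are part of the chosen set. Assumes k_max <= n - 1."""
    n = d_in.shape[0]
    funcs = np.zeros((n, k_max), dtype=int)
    for i in range(n):
        order = np.argsort(d_in[i], kind="stable")
        order = order[order != i][:k_max]       # drop the self-comparison
        funcs[i] = np.cumsum(membership[i, order])
    return funcs
```

Superimposing the rows of this array (for example with a two-dimensional histogram and gamma correction) reproduces the darker/brighter rendering described above; a row that hugs the main diagonal corresponds to the ideal case.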
This representation makes it fairly easy to identify functions which do not live up to expectations. The reason why a function does not behave as it should is intrinsically given by the combination of its behaviour and the ECS-set it belongs to. Additionally, it would be possible to define limits in the ECS-histogram which can be tested autonomously.

§ APPLICATION

We want to showcase the abilities of the ECS by applying it to two different data sets. At the same time, we want to show how the ECS can be used explicitly to detect certain data quality properties. With this in mind, we created a data set which is used as an example. The data set is constructed in such a way that the properties detected by the ECS can be verified by displaying the critical data points. In addition, the ECS is used on the MNIST data set <cit.> to demonstrate the detection of data quality properties on a commonly known example. The application on both data sets is focused on the data quality properties created by outliers, isolated data points and local groups of data points with identical output values.

§.§ ECS on point cloud

To demonstrate the usage of the ECS, an artificial data set is created which is displayed on the left side of figure <ref>. This data set is similar to the one displayed in figure <ref>. The most important difference is that the clustered data points cannot be clearly separated from each other, because the clusters partially overlap. The data set contains 1000 data points which are grouped into four clusters. As in figure <ref>, each cluster has a different number and a different density of data points. The data set was created this way because it demonstrates a simple classification task. At the same time, properties like outliers and local groups with identical output are present and can be visualized. In the following, it is shown how these properties can be identified by using the ECS. All ECS-histograms are created using a δ_in of 0.3 times the maximum distance in the input space and a δ_out of 0 to differentiate between all differing outputs.

§.§.§ outliers

An outlier is a data point which has an unexpected output for the given input. This output is typically very different from the output that would be expected. Here, an outlier is only considered to vary in the output space; unwanted variations in the input space are treated in the following section. The reasons for an outlier can differ: the output may, for example, be wrong, or the data point may represent a rare but correct input. Due to their character, outliers appear in areas which are dominated by data points with a different output. Given this information, it can be stated that an outlier has close neighbours with large output distances. The ECS_EU is used to identify these cases. Functions in the ECS_EU increase if there are data point combinations with small input and large output distances. Functions that already increase for the nearest neighbours can thus be regarded as belonging to outliers. By targeting these functions, the corresponding outliers can be identified. How many combinations, for how many nearest neighbours, should be part of the ECS_EU depends on the given data set. In the given point cloud example, 100 nearest neighbours were considered enough to represent the local data points. If, out of these 100 combinations, more than 70 have a large output distance, then the data point is regarded as an outlier. The ECS_EU is shown in figure <ref>.
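Expressed as a rule on the per-point functions introduced earlier, the outlier criterion just described might look as follows; the values 100 and 70 are the example thresholds from the text, not fixed parts of the method.

```python
# Flag a data point as an outlier when more than `max_allowed` of its `k`
# nearest input-space neighbours fall into ECS_EU (small input distance,
# large output distance). Builds on the earlier illustrative sketches.
import numpy as np

def flag_outliers(d_in, eu_membership, k=100, max_allowed=70):
    n = d_in.shape[0]
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        order = np.argsort(d_in[i], kind="stable")
        order = order[order != i][:k]
        flags[i] = int(eu_membership[i, order].sum()) > max_allowed
    return flags
```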
The area of importance in which the functions of outliers appear is highlighted by a rectangle in the figure. It can be noticed that some data points in the point cloud are highlighted, which means that they are considered to be outliers. It can also be noticed that most of the functions do not increase by much.

§.§.§ isolated data points

Isolated data points are data points which have a large input distance to many or all of their nearest neighbours. This means that the data point represents an input which is rare or possibly wrong. In the literature, this type of data point is often referred to as "out-of-distribution data". The ECS_UE and the ECS_UU are used to identify data points which have large distances to their nearest neighbours. Both ECS-sets store combinations which have large input distances to each other. The difference between these two ECS-histograms is the size of the output distance, which is not considered for isolated data points. The corresponding functions of isolated data points increase very early in the ECS-histograms. Most of the time, the functions increase in the ECS_UE as well as in the ECS_UU. This is the case because the nearest neighbours themselves may have large input distances to each other and thereby show very different outputs. The sooner a function increases, the fewer data points exist in the local area of an isolated data point. The number of neighbouring data points that should exist depends on the given task and data set. Typically, this means that every data point should have at least a few neighbouring data points with a small input distance. If many data points have no near neighbours, an adjustment of the parameter δ_out can be considered. Using the ECS_UE and the ECS_UU, it can be stated that there are no isolated data points in the current data set with fewer than 50 close neighbours. This is consistent with the fact that the sample data set used here was created from clusters of data points.

§.§.§ local groups of identical output

A local group with identical output is a structure created by multiple data points. All data points in such a group have small distances to each other with regard to both the input and the output distance. There are no larger numbers of data points which show a large output distance, apart from possible outliers or false data points. The identification of these groups showcases the ability of the used metric to differentiate between different outputs on the basis of the corresponding inputs. This means that the input data of a group share similar features, which in turn enables the differentiation. It would be possible to solve the given task, at least for these groups, based on these similar features. The combination of small input distances and small output distances can be identified using the ECS_EE. The functions corresponding to data points that are part of a local group with identical output increase strongly. The functions will increase as long as there exist data points with small distances in the input and output space in the data set. These strongly increasing functions reveal every data point which is part of such a group. Using the ECS_EE, it is also possible to identify groups of different sizes. This can be done by selecting the most strongly increasing functions for different numbers of neighbours. If a function has increased up to the chosen amount, it means that there is a minimum of this number of data points in the group.
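The isolated-data-point and local-group criteria described in this subsection follow the same counting pattern as the outlier rule; the sketch below states them under the same assumptions, with the neighbourhood sizes and tolerances used in the point cloud example as illustrative defaults.

```python
# Isolated data points: fewer than `min_close` neighbours within delta_in.
# Local groups of identical output: at least k - tolerance of the k nearest
# neighbours fall into ECS_EE. Both build on the earlier illustrative sketches.
import numpy as np

def flag_isolated(d_in, delta_in, min_close=50):
    close_counts = (d_in <= delta_in).sum(axis=1) - 1   # exclude self-comparison
    return close_counts < min_close

def flag_group_members(d_in, ee_membership, k=100, tolerance=5):
    n = d_in.shape[0]
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        order = np.argsort(d_in[i], kind="stable")
        order = order[order != i][:k]
        flags[i] = int(ee_membership[i, order].sum()) >= k - tolerance
    return flags
```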
In the given case in figure <ref>, groups with 100 data points and identical output should be identified. The area of importance in the ECS_EE is marked by a rectangle in the upper right corner. It should be noticed that not just the most strongly increasing functions were marked, but also some functions which increase a little more slowly. This has been done to make the identified groups more robust against false data points and outliers. In the given case, this means that functions reaching 95 out of 100 data points are also regarded as local groups with identical output. In addition, it can be noticed that most of the functions in figure <ref> are increasing. This indicates that many data points are arranged in local groups, which is to be expected because the point cloud was created as four groups of clustered data points. The detected data points which are part of a local group are marked on the left side of figure <ref>.

§.§ ECS on MNIST

In contrast to the previous example, MNIST is a data set which was created to represent the specific task of classifying handwritten digits. The data set consists of 60000 images of size 28*28 serving as the input, and as many numbers between zero and nine classifying the input. The most important difference between the previously used point cloud and the MNIST data set is the amount of data and the number of input features. The much greater amount leads to many more functions in the ECS-histograms, which thereby become more complicated. This can be counteracted by applying more specific metrics to the data. Here, the pixel-wise Euclidean distance is chosen as the metric. The Euclidean distance is typically not used on images due to its poor performance; in the case of MNIST, however, this metric is applicable, as the image pixels are given as centered grayscale values. It can be shown that the abilities of the ECS are retained using the Euclidean metric. Another problem which appears when using data with many features is the curse of dimensionality, through which all distances move closer to each other. As a result, a larger δ_in of 0.75 times the maximum distance in the input space is used in the following. The δ_out is still 0 to differentiate between all differing outputs. To visualize the input data of the MNIST data set, a low-dimensional representation is used in the following sections. This representation is created using UMAP <cit.>, a dimensionality reduction method. The clusters which are created this way are marked with a number to indicate the corresponding output. The ECS itself is computed on the original MNIST input and output data.

§.§.§ outliers

As shown in the section "ECS on point cloud - outliers", the ECS_EU is used to identify outliers. The ECS_EU of the MNIST data set for the nearest 200 neighbours is shown in figure <ref>. As mentioned before, this representation contains many more functions. These are too dense for any single function to be identified without interaction, but it can be noticed that most functions do not increase by a lot, as indicated by the darker visualization. The number of functions (|F|) which increase to a specific value of fulfillment (v_f) up to the 200th neighbour is shown in the following table <ref>:

v_f      |F|
101-200  6021
51-100   7337
11-50    14914
0-10     31728
0        16813

Number of data point combinations which are part of the ECS_EU for the 200 nearest neighbours.
It must be noted that more than half of the given data points have at most 10 combinations showing a small input distance and a large output distance. On the other hand, there are more than 6000 data points for which half of the nearest 200 neighbours have a large output distance. Not all of these are outliers; some may be positioned between classes, others may have badly assigned distances. To identify outliers, only functions performing worse than a random assignment of distances are used. This means that all data points having more than 180 data points with a large output distance among the nearest 200 neighbours are interpreted as outliers. By choosing these functions, 804 data points were identified as outliers. In figure <ref>, a random sample of nine of these outliers is displayed. It is noticeable that all of these data points look unusual, and most could be mistaken for a different digit. It would, for example, be possible to remove these data points from the MNIST data set to achieve a higher data quality.

§.§.§ isolated data points

The identification of isolated data points in MNIST is identical to the identification in the point cloud. The ECS_UE and ECS_UU used are shown in figure <ref>. The ECS_UE contains 129 and the ECS_UU 132 data points with fewer than 200 neighbouring data points. Most of the corresponding data points appear in the ECS_UE as well as in the ECS_UU. Due to the relatively small number of increasing functions, the histogram is rendered darker. The earliest functions start increasing in the ECS_UE and ECS_UU at fewer than 10 neighbours. The input data of the earliest increasing functions is shown in figure <ref>. One function in the ECS_UE is noticeable due to its steep and early increase. The corresponding input data is shown in figure <ref> in the third image from the left. This data point has a large distance to its closest neighbours; at the same time, most of these neighbours have the same output "4". This leads to the conclusion that the data point still has some of the most important features correlated with the output "4", even though it is very isolated. Overall, it is noticeable that most of the isolated data points shown use many input pixels to display the digit, which is not often the case in the MNIST data set. In addition, the pixel-wise Euclidean distance used reacts especially to pixel-wise differences by assigning higher distances in the input space.

§.§.§ local groups of identical output

The ECS_EE, which is used for the identification of local groups of data points with identical output, is shown on the right side of figure <ref> for the nearest 500 neighbours. As is the case for the detection of outliers, the number of functions is much larger than in the point cloud example. It is again not possible to identify single functions, but only overall trends of the functions. It can be noticed that most of the functions increase very steeply. This means that most data points have small input as well as output distances in combination with their nearest neighbours, which in turn means that most data points are located in local groups with identical output. The number of data points which should be part of the groups can be changed by using different numbers of neighbours in the ECS_EE. In table <ref>, the number of data points (|dp|) which belong to local groups of different sizes (gs) is shown.
These counts were obtained by allowing a maximum of 5 data points with a different output, which could be present due to outliers. Noticeably, most of the data points are located in groups of a few hundred data points, but more than 4000 data points could still be detected which are part of local groups with more than 1500 data points.

gs    |dp|
100   38383
200   27351
500   14745
1000  7851
1500  4329

Number of data points which are part of local groups of identical output of different sizes. The entire data set contains 60000 data points.

The position of these data points is highlighted in dark in the UMAP representation in figure <ref>. It can be noticed that especially data points with an output of 1, but also of 0 and 6, form local groups of identical output. This means that the used metric has the ability to differentiate these data points from each other. The local groups which can be identified can then be used to solve the given task, based on their location in the input space.

§ CONCLUSION

In this paper we presented a novel approach for data quality assurance based on local similarities. It was shown how the ECS is calculated and how it can be used on an artificial example. The presented procedure was then used to detect data quality properties on the MNIST data set. Besides the possibility to detect outliers, isolated data points and local groups of similar output, the versatile applicability of the ECS has been shown. The ECS could also be used to validate quantitative data set requirements for data quality properties. These can state the minimum number of elements per group, the number of accepted outliers or a maximum number of local groups. Some of these properties, like the number of accepted outliers in local groups, may depend on associated safety requirements and the required safety integrity level.
http://arxiv.org/abs/2307.05599v1
20230710194737
AlephZero and Mathematical Experience
[ "Simon DeDeo" ]
math.HO
[ "math.HO" ]
AlephZero and Mathematical Experience

Simon DeDeo
Department of Social & Decision Sciences, Carnegie Mellon University, Pittsburgh PA 15123 USA & the Santa Fe Institute, Santa Fe NM 87501 USA
[email protected]

Contribution to a special issue of the Bulletin of the American Mathematical Society, “Will machines change mathematics?”. I thank Cris Moore, David Kinney, and John Bova for helpful discussions. This work was supported in part by the Survival and Flourishing Fund.

August 12, 2023
===================

This essay explores the impact of automated proof construction on three key areas of mathematical cognition: on how we judge the role one piece of mathematics plays in another, on how we make mistakes in reasoning about mathematical objects, and on how we understand what our theorems are truly about. It concludes by speculating on a new form of mathematical experience that these methods could make possible: “glitching”, a game-like search for uncanny consequences of our definitions.

§ INTRODUCTION

The advent of proof assistants such as Lean and Coq, combined with progress in Large Language Models and self-play systems such as AlphaZero, raises the question of what happens, to the practice of mathematics, when they are combined. In a recent paper that formed the basis of a 2022 Fields Institute Symposium <cit.>, Akshay Venkatesh even asks us to imagine an “AlephZero”, trained not on the rules of Go, but on those of mathematical deduction, and that gains, in turn, human or post-human capacities, and is integrated into the mathematical community. This essay draws on basic ideas in cognitive science to predict three consequences of the contemporary turn to automated methods. First, it predicts a shift in mathematical judgement, as automation eliminates or blurs out experiences of impasse, crucial to anchoring judgements of value. Second, a shift in how we grasp mathematical objects, as automated systems prevent us from believing, even temporarily, false things about them. Third, a shift in how we relate to mathematical ideas, as our ability to create truths outpaces our ability to know what they might be about. To bring these changes into relief, I will talk in terms of loss: to the extent that we rely on automation in certain ways, we will no longer have certain kinds of mathematical experiences associated with impasse, error, and aboutness. Some of these experiences will be fewer in number, or lower in resolution—vaguer, briefer, less precise. Parts of my discussion might, as a consequence, remind readers of a dystopian future by the writer Ted Chiang <cit.>, where humans are outcompeted by oracular “metahumans”, and cease seeking knowledge altogether. It can be useful to imagine such futures, because they can serve as intuition pumps <cit.>. However, the nature of human curiosity suggests that now is not the time to expect them. Impasse, error, and aboutness will not disappear, and mathematicians at the cutting edge of automated methods (e.g., Ref. <cit.>) provide vivid accounts of all three. Those same accounts, however, also emphasise the ways in which their experiences are fundamentally different from what has come before. In the final section, “Glitching, Clipping, and Logical Exploits”, I speculate on the future evolution of this process.
§ VALUE AND IMPASSE Venkatesh's account of mathematical value emphasizes the importance of a conjecture being central: a conjecture acquires value when it is “linked with many other questions of (prior) importance” <cit.>; Jeremy Avigad <cit.> uses similar language. Mathematicians seem to follow a heuristic familiar to both the sciences and day-to-day reasoning: just as we often value, even over-value, explanations that link together an apparent diversity of prior observations <cit.>, mathematicians value conjectures that link together prior questions. On the surface, such a criterion is both intuitive and clear. Consider the Langlands Program, a celebrated example of value-through-linkage in modern mathematics. Even I, a non-mathematician, have heard of Langlands—for example in Michael Harris' admirable Mathematics without Apologies <cit.>. What I hear makes it sound like something that ought to be valued very highly indeed. If pressed, however, on what the Langlands Program actually is, I would say that it is an attempt to prove a set of theorems that link together high-value but hard-to-prove facts in number theory (on the one hand) and geometry (on the other). There are definitely curves involved, and integers, but what characterizes the theorems that make it into the Program and why the particular correspondences they govern, rather than others, are the object of such deep fascination, remains mysterious to me. My knowledge of the value of the Langlands Program is partial. This is not (just) because I don't know the theorems implicated in its correspondences, but also because my acquaintance with how this or that correspondence plays out when trying to prove things is at (at best) second-hand. My beliefs about the Program's value-relevant properties (“centrality”) come to me through the testimony of people who have tried to prove things when a Langlands correspondence is relevant, and not through the mathematical experience of trying to do so myself. Even if my beliefs about that value are correct, in other words, there is something suspect about my holding them without, at least silently, adding, “according to those who know”. Intuitively, the situation is analogous to the phenomenon of testimony in aesthetic matters <cit.>: for me to wax enthusiastic about the “deep importance” of Langlands, and the “true centrality” of its aims, is akin to someone praising the prose style of a book he has never read or the transporting delights of a cathedral he has never visited. If we transpose Ref. <cit.>'s account of aesthetic testimony to the judgement of mathematical value, we might say that Harris' book can transmit the correct beliefs about the value of the Langlands Program, but not the necessary understanding of those values. This can only come from the experience of working with Langlands itself. We need not restrict ourselves to something as exalted as Langlands. Similar concerns apply to any proof or conjecture, and the judgements of the depth to which it connects to the rest of mathematics. The validity of a judgement that Theorem A's involvement in Theorem B is important, depends, in the final analysis, on someone trying to prove Theorem B, and experiencing, directly, both the difficulty of the impasses that A resolves, and the ways in which A resolves them.[In the usual process, these experiences are shared socially, where they become testimony—not just to outsiders, but also to other mathematicians without the time or technical training to experience the process directly. 
There is nothing intrinsically wrong with this: just as in the case of aesthetic judgement, it is not always wrong to rely on testimony, for example, in the awarding of grants and prizes. Ref. <cit.> argues for an asymmetry in the aesthetic case: testimony can establish that something ought to be found beautiful upon acquaintance, but not that it is, indeed, so.] It seems difficult to eliminate the importance, for mathematical judgement, of working something through. In the sciences, by contrast, I can discover the key relationships between propositions simply by varying their likelihoods of being true and seeing how one affects another <cit.>: no matter how complex the underlying theory for why, say, X affects Y, I can determine the importance of a binary variable Z by turning it on and off (or, in probabilistic theories, by making it more or less likely to be on). No similar analysis works in the mathematical case, because I don't know how mathematics (or even logic) works in a world where (say) there were only a finite number of primes. One can imagine varying a mathematician's confidence in the validity of a step in a proof—this is what my colleague Scott Viteri and I do in Ref. <cit.>—but what this leads to is an (approximate) cognitive account of mathematical experience, not a paraconsistent theory of mathematics itself. If we see mathematics as a computer does—as a formal, “timeless” deductive system—we can say at best that, in mathematics, everything depends on everything: every (proven) theorem depends on every other simply because, if it were false, mathematics itself would be inconsistent. Automated methods seem to pose a challenge to how these judgements get made because testimony about value no longer necessarily “bottoms out” in human experience. The very goal of these tools is to take some part of the reasoning process out of human experience, and into a more reliable, mechanical realm.[The transposition is—we hope—truth preserving, i.e., we are just as justified, if not more justified, than before in believing in the truth of the result. The argument here concerns testimony about value, not (as in Ref. <cit.>) judgements about truth.] These methods might tell us that an efficient proof of Theorem B has, in its syntax tree, a crucial lemma of Theorem A. By their very nature, however, these trees are extraordinarily complicated, and simply knowing which theorems are cited is not decisive: Theorem B will depend upon many other theorems, including a host of more or less trivial things, and it may also depend on deep things but only in what a human would consider a trivial fashion. (This is the best-case scenario: a truly efficient proof of Theorem B might, in fact, make the role of Theorem A's lemma harder to notice, distributing it in fragments across the entire tree.) In response to this challenge, we might conduct a post-hoc analysis of the syntax tree of the proof of Theorem B and show how some graph-theoretic property shows that subtrees, associated with Theorem A, are especially “central”; my colleagues in network science, for example, might use “betweenness centrality” <cit.>. What we gain, however, is not the relevant experience of value, but only knowledge about a hypothesised proxy for value, an operationalization. Such a proxy may correlate with value judgements we already believe from experience (“the role of Theorem A is graph-theoretically similar, at p<0.01, to the celebrated role of...”). 
Proxies, however, are not the thing itself: at best, they are another form of testimony, one that lacks even the backing of experience. Deployed widely enough, the reliance on such proxies—even if they correlated perfectly with ideal judgement—would lead to a strange scenario: a kind of zombie mathematics, where mathematicians celebrate a theorem not for how it untangles and reorders their reasoning, or the reasoning of their colleagues, but because it has a high centrality score. Such a dystopian fantasy is unlikely, of course, to actually occur. We can expect automated methods to hide, behind mechanical search, certain experiences of impasse and resolution. But mathematicians, like all humans, have an insatiable drive for experiences and we should not expect the new experiences to be any less characterized by this dialectic. We should, however, expect them to transform; we return to how they might in Section <ref>. § MODELS AND ERRORS As alluded to in Section <ref>, mathematical errors are particularly hard to understand in the formalist picture. If mathematics is solely a matter of logical deduction from axioms, then errors are nothing more than “illogical thoughts”—on a par with claims about “square triangles” or “the third even prime”, forms of nonsense that are not truly thoughts about anything at all <cit.>. Such errors may be of interest to psychologists and cognitive scientists, perhaps, as physical causes of belief, but have no relevance for the subject of mathematics itself beyond the bare fact of their ungrammaticality. Putting “AlephZero” systems to the side for a moment, automated proof assistants, such as Coq and Lean, are only too happy to accommodate this point of view. Assuming that the underlying code implements the type system correctly, a proof assistant will never allow a mathematician to introduce a falsehood into the text. Because formalism excludes error as a meaningful component of mathematics, it is particularly interesting to see how mathematicians themselves make sense of what happens when they make errors. Some accounts are purely psychological, in a trivial sense; the eight errors listed by Ref. <cit.>, for example, point to standard human failures such as hubris and the Dunning-Kruger effect. In these cases, error is no more relevant to mathematical experience than sleep is: we have limits on our ability to be reasonable, and these limits can stop us doing mathematics. Other accounts are more cognitive; they see error as something that emerges organically from the process of mathematics itself. Ref. <cit.>, for example, describes the difference between “local” and “global” errors in a proof, and how the former have mathematical properties that make them easier to fix; statistical study of proofs <cit.> suggests that their logical structure is indeed modular in ways that help make sense of Tao's local-global distinction. In discussions with mathematicians, one learns that they have a variety of heuristics they use to identify errors in their reasoning, including particular signatures and traces that error leaves on downstream steps. One sign of an error, as I learned during the 2022 Symposium, is that subsequent results become “too easy” to obtain, in a kind of limited principle of explosion that temporarily levels the hierarchy of value. 
The extraordinary difficulty of mathematics, one imagines, makes these kinds of accounts all but inevitable: if mathematics only happened when one was reasoning correctly, then the majority of mathematicians would be spending the majority of their time speaking nonsense, no mathematics at all, and, as Lear says to Cordelia, “nothing can come of nothing”. If, by contrast, we believe we can experience errors of logic as having mathematical meaning—if we are willing to say, for example, that one can assert falsehoods while making progress—we must also grant that some essential aspect of the activity goes beyond the formalist picture. In as much as automated methods help us be wrong less often, they foreclose an aspect of the mathematical experience. For many mathematicians, of course, the prospect of spending less time being wrong is delightful: one wants the maximum amount of truth per unit time. Foreclose away! Is anything truly lost along with the experience of error? From the cognitive point of view: very possibly, yes. This is because mathematicians, just as much as any other human, are expected to rely in part on the construction of mental models <cit.>; in particular, reduced and partial mental models of the mathematical objects in play. In day-to-day life mental models are tuned by interaction with the world, through a process of learning and feedback; to develop our mental models, we use them to generate predictions about the world that we then compare to reality <cit.>. If our model fails to predict correctly, we update it. This update is far from instantaneous: sustained model failures help direct our attention—we attend more closely to aspects of our experience that our models failed to predict <cit.>, and prediction failures seem to be not only a core feature of low-level cognition <cit.> but useful guides to high-level decision-making processes such as scientific exploration <cit.>. The mathematical parallel is, most naturally, the use of mental models to be wrong about mathematics; without the possibility of being wrong, the mental model can not change and the process of attention is diffused. This leads to a second dystopian fantasy, one where automated methods lead to a world in which our mental models become increasingly vague and low-resolution. Such a loss would be more than simply epistemic. It is not just that mathematicians will have more impoverished representations of what they are doing, but also that they would be deprived of particular experiences. Mathematicians may have a horror of error, but wandering into error is an unavoidable consequence of taking deliberate risks with our mental models—an experience inseparable from the act of exploring the world with curiosity. Just as in the previous section, I suggest that this is unlikely to happen: mathematicians are simply too curious about their objects not to want to explore and reason about them in ways that allow them to be wrong. The most obvious way to continue doing that is in trying—and, naturally, sometimes failing—to predict what an automated theorem prover will do given an input. This is only a partial compensation, however, because “cyborg” errors last only as long as it takes to phrase and type the first step of the erroneous intuition. Automated systems such as Lean can defer a subgoal in a proof with a keyword such as sorry, but they are not (yet) able to humor their human counterparts by suggesting that a subgoal has been achieved when the assumption is false. 
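For readers who have not seen the mechanism in question, the following is a minimal Lean 4 sketch (my own illustration, not drawn from the essay): `sorry` lets the user defer a subgoal, and Lean will accept the remainder of the proof while flagging the gap, but it will not silently treat the deferred claim as established.

```lean
-- `sorry` defers a subgoal: the file still elaborates, but Lean issues a
-- warning that the declaration depends on `sorry`.
example (n : Nat) : n + 0 = n ∧ 0 + n = n := by
  constructor
  · exact Nat.add_zero n  -- this half is proved honestly
  · sorry                 -- this half is deferred, not asserted as true
```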
§ DEFINITIONS AND ABOUTNESS In his talk at the 2022 Fields Institute Symposium about the Liquid Tensor Experiment <cit.> and its challenge to formalize, in Lean, a key theorem of Peter Scholtze's work in Condensed Mathematics, Johan Commelin draws attention to an illuminating risk of formalization: the risk that, through malice or accident, one's definitions may make a proof trivial. A casual observer of the automated proof can look at the Lean statement of the main theorem, and see how it parallels, more or less, Peter Scholtze's original human text. There are references to profinite sets, p-Banach spaces, and so forth, in the expected places. In the end, however, when we glance back and forth between Scholtze's LaTeX statement and Lean's monospaced font, ...as Magritte told us “Ceci n'est pas une pipe”. We could have done something very evil[Commelin's reference to “evil” may seem extreme, but it is not entirely out of bounds; consider, for example, the recent (Spring 2023) scandal of the “Space Zoom” feature on Samsung phones: unknown to the public, the internal code was able to recognize when a user was taking a picture of the moon, and filled in details of the image that would have been impossible for the detector itself to have captured given atmospheric conditions.] or we could have done something very stupid—what if we had just defined X groups to be zero to begin with, because that's what the main statement is about. We need to prove that for all i some X group is zero, well, if we just define it to be zero then we're done ... even if we're not evil we could have done something stupid that would have completely trivialized the proof. [Johan Commelin, “Abstract Formalities”, 2022 Fields Institute Symposium, minute 59] The deeper concern, as Commelin articulates it, is that automated methods often place us in a position where we have thousands of definitions, and even in the presence of abstraction boundaries, documentation, and test cases, one is, in the final analysis, thrown back on abductive reasoning to figure out what, exactly, the proof is about. We might say, in turn, that we have two forms of aboutness in play. On the one hand, we have the way in which Scholtze's proof is an attempt to reason about ideas[I follow Scholtze's use of the word “idea”; see <https://bit.ly/scholze_harris>.] that Scholtze had in mind, ideas created through a (collaborative, iterative) process of forming intuitions, making definitions that answered to them, and attempting to prove things. On the other hand, we have the way in which a set of code fragments, labelled “definitions” and input into Lean's axiomatic system, are mutually constrained by the laws of type theory to output some sets of code fragments downstream, and not others. Commelin is in the business of mediating between these forms: poetically speaking, asking if the “shape” of the code fragments that the Lean proof enables (the representation of the pipe) matches the “shape” of Scholtze's ideas (the pipe itself). The need to square competing forms of aboutness is reminiscent of the dialogues in Lakatos' Proofs and Refutations <cit.>, which provide an extended example of how students, challenged to prove something, go back and forth exchanging claims and counterexamples, gradually realizing the limits of their intuitions—that what they think they are proving things about is not quite what they think it is—and correcting the terms of the proof as they go. 
Catarina Dutilh Novaes <cit.> formalizes this idea as a game between “prover” and “skeptic”; the end point—or the asymptote—of this dialectic is the justified belief that the proof (in the end) truly is about the ideas in question (in the end). The machine version of this, however, seems distinct: it is unclear what it would mean to achieve intersubjective agreement with a machine. It is, certainly, possible for two human mathematicians to come to agreement about the “aboutness” of a Lean proof, but this is a distinctly different task—more analogous to two scientists attempting, through experiments, to determine the most efficient account of the causal structure of a black box, and perhaps the “stochastic mathematical systems” of Ref. <cit.>. Commelin's reference to the need for abductive reasoning about a Lean proof may point in this direction, but it is important to distinguish this from the students in Lakatos's dialogues. The Lakatos students are engaged in dialectical logic, not empirical modeling: they argue with, rather than about, each other. It is this “arguing with” experience, and the errors and vagueness it involves, that automated methods seem to exclude.

§ ℵ(0) IN THE LOOP: GLITCHING, CLIPPING, AND LOGICAL EXPLOITS

Much of what we know about the impact of automated methods comes from proof verification systems. These somewhat constraining tools, however, are already being combined with systems like OpenAI's “Codex” code-completion technology, which can propose next steps, respond dynamically to criticism and coaching, and which maintains hidden states analogous to mental models of the task at hand. Mathematicians who become familiar with the syntax of these tools may soon be engaged in quite extensive forms of co-construction, where the machine not only “hammers”—fills in small gaps—but proposes new definitions, attempts to prove lemmas of its own devising, and even enlists the human partner in new goals that may be only partially transparent. One analogy to this experience, sometimes made by those on the cutting edge of this process, is to the iterative feedback found in a video game. If the dialectic of traditional mathematics is somewhat like the human-on-human play of Dungeons and Dragons—with the prover serving as “Dungeon Master”, and the skeptics attempting to play within, or subvert, the boundaries of the prover's vision—the new era heralded by projects such as the Liquid Tensor Experiment might be thought of as a massively multiplayer online role-playing game (MMORPG), with an engine behind the screen that responds, in sometimes counterintuitive ways, to the community's inputs. Once moves in a proof system are allowed to produce side-effects (i.e., once we move beyond the functional programming paradigm of a system such as Lean to more general Codex-like systems), the analogy to video games is tighter than it might at first appear. As long as they do not crash, violate memory boundaries, or fall into an infinite loop, proof assistants and video games are just ordinary computer programs, complex systems of inter-related states and state-to-state transition rules. Even within the functional programming paradigm, game-like experiences seem to be common for those who engage at sufficient depth. Video games are inherently pleasurable creations, and the 21st Century may well be defined by this novel form <cit.>.
Unlike other forms of art, they give agency a central role <cit.>, which parallels key aspects of mathematics: the experience of mathematics is an agentic one, one of dialectic, choice-making, puzzle-solving, backtracking, and error—far removed from the passive contemplation of timeless structures urged by early versions of formalism. Even if automated systems take on a video game quality, however, we should not expect mathematicians to be content with “playing” an automated proof system the way an ordinary person plays, say, Super Mario Bros. In “Mathematics and the Formal Turn”, for example, Jeremy Avigad notes how some Lean users talk in terms of “golfing a proof”, a practice analogous to “code bumming” described in early histories of the computer revolution <cit.>—and reminiscent of the practice today of the “speed run”, where someone attempts (say) to finish all the levels of Super Mario Bros in an absolutely minimal amount of time. One phenomenon not yet explored (to my knowledge) is the attempt to discover, and make use of, glitches. Glitches are a particularly uncanny source of interest, which emerge out of how ordinary video games attempt to simulate a physical reality for the user to explore. Misjudgements in the designers' vision sometimes mean that particular objects or locations, despite following the prescriptions of the “physics engine” perfectly, have exceptional properties that violate our mental models of the intended reality and produce novel game logics. In one game, for example, the chain on a swing set in a playground—a minor background object intended mostly as scenery—serves as a fount of energy that can launch a player into the sky; in another, a player can walk through walls (“clipping”) if he holds a plate (or other dining utensil) in front of him as he moves. Glitches arise at the intersection of the technological and the human <cit.>. They are semantic errors (i.e., they reveal that the physics engine in question is not actually mirroring the world the player expects), but not syntactic ones (i.e., the code has, indeed, compiled correctly, and there is no crash, buffer overflow, or subversion). Following the discussion of Section <ref>, it seems likely that such glitches should appear in automated systems even when—indeed, precisely because—the type-checker is working as expected. They occupy a space between the intuitively true and the logically false—not true antinomies or logical errors, but uncanny experiences of what ought not to be. Video game glitches are difficult to find by examination of the code. They tend to be discovered by communities engaged in obsessive forms of play well beyond ordinary use <cit.>, and similar levels of obsessiveness may be needed for the mathematical case. Glitches might also be found by automated means: for example, fuzzing <cit.>, a technique for finding security vulnerabilities in code by trying trillions of randomly-chosen, but syntactically valid, inputs. Fuzzing could be applied to a system like Lean and checked against basic human intuitions to seek out the “security vulnerabilities” in our axioms. The day when we discover unexpected, uncanny glitches in the definitions we have to hand, or the ones we will co-create with our machines, may be near. Bertrand Russell once wrote, partly in jest, that “mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true” <cit.>.
With the advent of AlephZero it seems likely that, while we will continue to not-know these things, we will come to not-know them in new and unexpected ways.
http://arxiv.org/abs/2307.04357v1
20230710055929
Survey-scale discovery-based research processes: Evaluating a bespoke visualisation environment for astronomical survey data
[ "C. J. Fluke", "D. Vohl", "V. A. Kilborn", "C. Murugeshan" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.GA" ]
Next generation astronomical surveys naturally pose challenges for human-centred visualisation and analysis workflows that currently rely on the use of standard desktop display environments. While a significant fraction of the data preparation and analysis will be taken care of by automated pipelines, crucial steps of knowledge discovery can still only be achieved through various levels of human interpretation. As the number of sources in a survey grows, there is a need to both modify and simplify repetitive visualisation processes that need to be completed for each source. As tasks such as per-source quality control, candidate rejection, and morphological classification all share a single instruction, multiple data (SIMD) work pattern, they are amenable to a parallel solution. Selecting extragalactic neutral hydrogen (Hi) surveys as a representative example, we use system performance benchmarking and the visual data analysis and reasoning (VDAR) methodology from the field of information visualisation to evaluate a bespoke comparative visualisation environment: the encube visual analytics framework deployed on the 83 Megapixel Swinburne Discovery Wall. Through benchmarking using spectral cube data from existing Hi surveys, we are able to perform interactive comparative visualisation via texture-based volume rendering of 180 three-dimensional (3D) data cubes at a time. The time to load a configuration of spectral cubes scales linearly with the number of voxels, with independent samples of 180 cubes (8.4 Gigavoxels or 34 Gigabytes) each loading in under 5 minutes. We show that parallel comparative inspection is a productive and time-saving technique which can reduce the time taken to complete SIMD-style visual tasks currently performed at the desktop by at least two orders of magnitude, potentially rendering some labour-intensive desktop-based workflows obsolete.

§ INTRODUCTION

Next generation astronomical surveys will pose challenges for a range of human-centred visualisation and analysis workflows that currently rely on the use of standard desktop display environments. Knowledge discovery activities that were, or perhaps still are, feasible for a human to perform when the quantity (i.e. volume) or rate (i.e. velocity) of data available was low are becoming more reliant on automated or autonomous solutions. While desktop computing has already been augmented through the adoption of supercomputing and cloud-style remote services, the visualisation and display of astronomical data is still strongly dependent on the utilisation of laptop screens or monitors located in the astronomer's office. To address the specific needs of individual astronomers, and astronomical research teams, a collection of data analysis and visualisation tools is required. This includes continuing to take full advantage of existing, well-established options that are able to be scaled up effectively, along with developing and assessing the potential of novel solutions or systems that either provide extra functionalities, or that can be connected into extensible workflows (e.g. the virtual observatory model).

§.§ Comparative visualisation

Seeing many sources together – comparative visualisation – is an approach that naturally supports pattern-finding (“those galaxies all show similar kinematic properties”) and anomaly detection (“why is that one source so different to everything else?”). Such multi-object comparisons might include quality control activities (e.g.
assessing whether a source finder or automated calibration pipeline is functioning as expected by selecting a sample of sources for assessment, which might include fine-tuning to check or verify a machine learning algorithm), investigating outcomes of model-fitting (e.g. examining the residual signal once different types of kinematic models are applied), or any of a range of standard analysis tasks that can be performed based on morphological or environmental selection criteria (e.g. field compared with cluster galaxies, dwarf galaxies versus grand design spirals, or the discovery of novel classes of objects when a new discovery space is opened). We will refer to all such activities as survey-scale discovery-based research processes, as the purpose is to explore data in order to make sense of it [see the model of “sensemaking” presented by <cit.>, and applied in Section <ref>]. Limited scope for comparative visualisation can occur by either loading data into several independent instances of a visualisation tool (usually on the same computing platform) or by switching between individual views of multiple objects, requiring loading and unloading of data. When working with large-scale survey data, desktop-based visualisation strategies may lead to a reduction in the ability for an individual to see patterns across a sizeable portion of the survey. In practice, effective comparative visualisation cannot be achieved by moving between visualisations of one or two objects at a time. At each stage, there is a loss of time to input/output, and a strong reliance on the visual recall abilities of the astronomer [see <cit.> for a related discussion]. Individual instances are unlikely to have linked camera actions (e.g. panning, rotation, zoom, scaling), requiring the use of repetitive interaction processes. Moreover, if performed at the desktop, the small physical display space of a standard monitor is not always conducive to real-time, collaborative inspection for those researchers who prefer, or find it more productive, to work this way. §.§ Single instruction, multiple data work patterns Survey-scale discovery-based research processes, such as those described above, are all highly repetitive, and may need to be completed for each individual source. Many repetitive research processes share a single instruction, multiple data (SIMD) work pattern, and so are amenable to a parallel solution. One approach to the parallelisation of human-centred visualisation and analysis tasks is to share the work out amongst multiple team members [e.g. as occurred while preparing catalogues for the Hi Parkes All Sky Survey – see <cit.> and <cit.>], or further afield via crowd-sourcing of citizen scientists <cit.>. A limitation to these distributed processes is one of consistency in decision-making between team members with diverse skill levels [see, for example, <cit.>]. An investment in training may be required, or a complex task must be abstracted to one of group-consensus classification. Furthermore, while serendipitous discoveries do occur in citizen science activities, that is not the norm. An alternative is to change the viewing paradigm, so that a more suitable mode of parallel inspection by a single researcher, or co-located team, can be achieved. 
This is the approach we investigate in this work using encube[Long term access to open source software described by <cit.>.]: a visual analytics framework for collaborative and comparative visualisation, designed to work on a multi-monitor tiled display wall and dedicated compute nodes <cit.>. Figure <ref> shows encube operating on the Swinburne Discovery Wall (see Section <ref>), providing simultaneous display of 80 spectral cubes sampled from three extragalactic neutral hydrogen (Hi) surveys (described in more detail in Section <ref>). §.§ The visual data analysis and reasoning methodology In order to best utilise non-standard or novel visualisation systems, it is important to understand their strengths and weaknesses. The suitability of any visualisation approach or environment – software or hardware, standard or bespoke – should be examined or evaluated using appropriate methodologies. Looking to the broader field of information visualisation, such evaluations can include investigation of either the process of visualisation or the nature of visualisation systems and algorithms <cit.>. For our investigation of survey-scale discovery-based research processes, we select the empirical visual data analysis and reasoning (VDAR) methodology. A VDAR evaluation is usually approached via a case study: a cohort of experts assess their ability to derive knowledge about a relevant dataset while using a new visualisation system, software or strategy to perform domain-specific tasks <cit.>. As our relevant dataset, we utilise existing extragalactic Hi survey data (see Section <ref>), available as an ensemble of spectral cubes (two spatial dimensions and one spectral dimension). We consider three representative survey-scale discovery-based research processes that can occur in the preparation and analysis of large-scale extragalactic Hi surveys: * Quality control of individual sources, ensuring that calibrations have been applied correctly and bad channels (e.g. impacted by interference or instrumental features) have been flagged or removed; * Candidate rejection, whereby false-positive detections from automated source finders are identified and removed from the catalogue. This can also help to improve training sets of “non-source” examples for use with machine learning and related automated methods; and * Morphological classification, identifying and sorting sources into categories based on observed structural, kinematic or environmental properties. The classification process may also include anomaly detection, wherein unexpected discoveries are made based on the observed structural properties. Through a mix of visual analytic functionalities, including interactive three-dimensional (3D) volume rendering methods, encube provides ways to explore both spatial and spectral features, which can be matched to other observed or derived parameters. A 3D approach can help to reveal complex kinematic structures or system artefacts that might otherwise appear only in projection using moment maps or position-velocity diagrams. We choose to perform our evaluation with 3D methods as they: (1) are the current defaults within the public encube code; (2) present an upper bound in terms of the computation required for benchmarking purposes; and (3) provide the VDAR user cohort with access to novel comparative sensemaking strategies via the Swinburne Discovery Wall. 
For other applications, alternative data visualisation modes such as moment maps[A camera projection parallel to any axis of a spectral cube can be used to generate a two-dimensional (2D) projection of the data <cit.>, and hence can be used to generate 2D solution space representations while still retaining access to the full representation of the data in memory for fast calculations using graphics shaders.] or scatter plots could be utilised as they are supported by the underlying visualisation framework. §.§ Overview In this paper, we consider a specific visualisation problem that is not feasible to address using a desktop-based visualisation solution: interactive, comparative visualisation of ≥100 data instances. We evaluate the practicality of using a bespoke visualisation environment (viz. encube and the Swinburne Discovery Wall) for survey-scale discovery-based research processes through: (1) system benchmarking, which provides quantitative information on system performance and scalability; and (2) a visual data analysis and reasoning study. For five different display configurations, supporting simultaneous visualisation of 20, 40, 80, 120 or 180 spectral cubes, selected from representative extragalactic Hi survey datasets, we report benchmarking in terms of the two most critical factors: (1) the time taken to load an ensemble of spectral cubes; and (2) the typical minimum interactive frame rate. Together, these values allow us to estimate the visualisation throughput, V_ tp (sources/hour), that might be achieved by a single user when undertaking SIMD tasks such as quality control, candidate rejection or morphological classification. Compared to the serial case of viewing one data instance at a time on a standard desktop monitor, encube and the Swinburne Discovery Wall could decrease the time taken to complete survey-scale comparative visualisation workflows by a factor of 100 or more. In Section <ref>, we explain the main technical elements of the bespoke visualisation environment. In Section <ref>, we provide background on the extragalactic Hi case study. We evaluate the visualisation environment through system benchmarking (Section <ref>) and via the VDAR evaluation (Section <ref>), which considers three typical discovery-based SIMD activities: quality control, candidate rejection and morphological classification. We present a discussion of our finding in Section <ref>, and present our conclusions in Section <ref>. Further technical and implementation notes can be found in <ref>. Our approach can be generalised to any survey datasets comprising more individual observations or instances than can be comfortably analysed or scrutinised by one investigator on a standard desktop display. This might include two-dimensional images or moment-map projections, optical/infrared spectral cubes (e.g. from integral field spectroscopy), or simulation data products. The comparative visualisation strategies demonstrated here are applicable to any similar SIMD-style activity, and are not restricted to the specific use of encube with the Swinburne Discovery Wall. As an open source solution, users are encouraged to modify the functionality of encube (e.g. in order to provide alternative 2D or 3D visualisation modes or to handle domain-specific data formats) or reconfigure the arrangement of the display environment to suit their own survey-scale discovery-based research needs. 
§ A BESPOKE COMPARATIVE VISUALISATION ENVIRONMENT In this section, we provide a technical overview of the two main components of the bespoke comparative visualisation environment used in this work: (1) the encube framework, which enables visualisation of multiple data instances (in the form of spectral cubes for our case study); and (2) the Swinburne Discovery Wall, a specific instance of a large-area tiled display wall. Encube was conceptualised and developed specifically to support SIMD visualisation and analysis tasks, with an aim to accelerate data-intensive comparative visualisation and discovery workflows. Encube displays multiple individual data visualisations across single or multiple display devices, with interaction coordinated through a user interface on the master node. For related approaches, see the virtual reality implementation of BentoBox <cit.> and the “shelves” metaphor for small-multiples that considers utilisation of immersive space <cit.>. §.§ The encube framework The encube framework <cit.> supports comparative visualisation and analysis of survey data (also referred to as an ensemble in other domains). The primary development emphasis was for structured 3D data: spectral cube data from astronomy and magnetic resonance imaging data from medical imaging. Encube provides an interactive data exploration and analysis experience, employing a strategic mixture of software (data processing, management, visualisation, analysis) and hardware (graphics processing units, computer cluster, displays). Encube is a modular and open-source code base <cit.>, where each module targets a specific set of tasks within a visual analytics workflow: (1) processing and visualisation of data; (2) workflow and communication management; and (3) user interactions. Similar to a microservices-style architecture, the modular design allows individual components to be connected, enhanced or replaced as required, so that encube can be kept compatible with, and scalable to, the requirements of future science operations. For instance, customisable code for 3D visualisation is currently created using the C/C++ languages for good performance with the S2PLOT interactive programming library <cit.>, which builds on the OpenGL[<http://www.opengl.org>] graphics library. From a system architecture standpoint, encube comprises a process layer and an input/output (I/O) layer. The process layer performs data processing tasks (load data, compute statistics, render visualisation), and the I/O layer responds to user inputs and generates visual outputs. Each layer contains units where specified tasks are performed. Depending on the task, a unit can be instantiated once, or multiple times for parallel operation (generally on different compute hardware). In its current form, the encube process layer comprises a single manager unit and one or more process and render units, while the I/O layer contains an interaction unit and one or more display units. Units can communicate between each other in order to pass workflow information across the architecture. The communication pathway between units can be represented as a directed graph [see Figures 2 and 4 of <cit.>]: interaction unit ↕ manager unit ↕ process and render unit ↓ display unit, where the arrows indicate the information flow direction between two unit vertices on the graph. Based on the number of instances of a unit, communication can include serial or parallel messages. We note that peer-to-peer communication within a unit type is not currently implemented (e.g. direct message passing between two interaction units). 
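As a purely schematic illustration of the directed graph just described (this is not code from the encube repository), the allowed message routes between unit types can be written down in a few lines of C:

```c
/* Schematic sketch of the encube unit graph described in the text.
 * Illustrative only; it is not taken from the encube code base. */
#include <stdio.h>

typedef enum { INTERACTION, MANAGER, PROCESS_RENDER, DISPLAY } Unit;

/* Allowed information flow: interaction <-> manager <-> process/render -> display.
 * Peer-to-peer routes within a unit type are deliberately absent. */
static int can_send(Unit from, Unit to)
{
    return (from == INTERACTION    && to == MANAGER)        ||
           (from == MANAGER        && to == INTERACTION)    ||
           (from == MANAGER        && to == PROCESS_RENDER) ||
           (from == PROCESS_RENDER && to == MANAGER)        ||
           (from == PROCESS_RENDER && to == DISPLAY);
}

int main(void)
{
    printf("interaction -> manager     : %d\n", can_send(INTERACTION, MANAGER));
    printf("interaction -> interaction : %d\n", can_send(INTERACTION, INTERACTION));
    printf("process/render -> display  : %d\n", can_send(PROCESS_RENDER, DISPLAY));
    printf("display -> process/render  : %d\n", can_send(DISPLAY, PROCESS_RENDER));
    return 0;
}
```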
The manager unit orchestrates the overall software workflow. It first reads a configuration file containing network information about the available compute nodes, characteristics of the tiled visualisation output, along with system metadata and the location of the dataset. This unit also schedules and synchronises the workflow, sharing metadata as well as commands with other neighbouring units. Here, the manager unit acts as a messenger between an interaction unit and a process and render unit. Moreover, given that all commands pass through the manager unit, the workflow history and system state can be recorded (if requested) so that actions can be revised, replicated, or continued later. The interaction unit is where a user interacts with the dataset. In particular, the user can specify which data files to load and visualise, change visualisation parameters (e.g. ray-tracing method), select and organise individual visualisations, and request diagnostic plots. The interaction unit provides a “world in miniature” view of the display setup, mapping regions within the user interface to the physical display. Metadata is presented in a table, which can be sorted by categories. Visualisations are generated after selecting rows of the table, either individually or by ordered batch (e.g. sorted by parameters such as distance, size, etc.). Once data is loaded into memory on a process and render unit, visualisation parameters (e.g. histogram thresholds, spatial cropping, colourmap selection) can be updated in real time to modify one or more visualisations. Global or partial statistical values can also be computed on request for selected data files and gathered to summarise properties of a subset. The process and render unit provides functionalities such as loading data files to GPU memory, computing statistics (e.g. mean, standard deviation, histogram), creating visualisation callbacks (e.g. including responses to input via keyboard, mouse, or the remote user interface), and generating the visualisations through texture-based volume rendering. Finally, a visualisation rendered by a process and render unit is displayed on screen via the display unit. A display unit provides a mapping to one or more physical screens via the configuration file read by the manager unit. §.§ The Swinburne Discovery Wall From its inception, encube was designed for use in high-end visualisation environments comprising multiple off-the-shelf displays, i.e. a tiled display wall (TDW). See <cit.> and <cit.> for detailed investigations of the role of TDWs in astronomy. A TDW provides several advantages over a standalone workstation monitor: many more pixels, a greater display area, and, in some cases, access to additional co-located computing power. Initial deployment and testing of encube was undertaken with the CAVE2^ TM hybrid high-performance computing and visualisation space at Monash University [as reported in <cit.>]. The Monash CAVE2^ TM <cit.> comprised 80 stereoscopic-capable displays, with a cylindrical configuration (330 degrees to allow entry and exit from the physical space) of four rows and 20 columns. Collectively, the environment provided 84 million pixels for two-dimensional display and 42 million pixels in stereoscopic mode. The Monash CAVE2^ TM was linked to a real-time compute cluster with a peak of 100 Tflop/s and 240 GB of GPU memory. Additional development, and the activities presented in this work, utilised the Discovery Wall (Figure <ref>) operated at Swinburne University of Technology. 
The Swinburne Discovery Wall is a TDW comprising ten Philips BDM4350UC 4K ultra high-definition (4K-UHD) monitors arranged in a matrix of two rows and five columns. The total pixel count is approximately 83 Megapixels and the accessible screen area is just under 5.0 m^2 (see Table <ref>). Each column of the Discovery Wall is connected to a Lenovo ThinkStation P410 Mini Tower (2.8 GHz, 16 GB RAM) with an NVIDIA GTX1080 graphics card (8 GB). The workstations operate with the CentOS[<http://www.centos.org>] Linux operating system (Version 7.4.1708), noting that we use the version of CentOS that was installed on the Discovery Wall when it was commissioned in 2018. The original iteration of the Swinburne Discovery Wall, which operated until November 2021, had one additional column of two 4K-UHD monitors such that the total screen area was 6.0 m^2 and a pixel count closer to 100 Megapixels. In December 2021, the Discovery Wall hardware was transferred to a new location, but with insufficient wall-space to accommodate all six columns. Reconfiguration of encube to work on the relocated and reduced-scale Discovery Wall in February 2022 required approximately two minutes to remove references to the sixth Lenovo MiniTower workstation from the encube source and scripts. § CASE STUDY: EXTRAGALACTIC HI ASTRONOMY Consider the specific case of extragalactic Hi astronomy, which is based on observations of the 21 cm (1420.40576 MHz) hyperfine spin flip transition of the hydrogen atom. Theoretically predicted by <cit.>, and first detected by <cit.>, <cit.> and <cit.>, the 21 cm line provides a valuable signature of the neutral gas content of galaxies. Apart from being the primary component from which stars are eventually formed, the Hi gas in galaxies is also typically much more extended than their stellar discs [see <cit.>], making it an important tracer of the effects of both internal properties of galaxies, such as feedback and angular momentum <cit.>, as well as environmental processes such as ram pressure and tidal stripping to name a few [see <cit.> and <cit.>]. For these reasons, high spatial and spectral resolution studies of the Hi gas distribution in galaxies are paramount for our understanding of galaxy evolution. Historically, extragalactic Hi surveys fall into three broad categories: (1) spectral line observations, using single-dish radio telescopes; (2) spatial mapping with multi-beam receivers <cit.>, whereby it became feasible to undertake spectral-line surveys at a large scale <cit.>; and (3) high-resolution spectral cube observations, utilising aperture synthesis. §.§ Extragalactic neutral hydrogen surveys The number of sources available from Hi surveys is undergoing a step-change. New wide-field and deep surveys have been enabled through instruments and facilities including: * The APERture Tile In Focus (APERTIF) upgrade to the Westerbork Synthesis Radio Telescope (WSRT) – see <cit.>, with Hi survey descriptions in <cit.>, <cit.> and <cit.>; * The Australian Square Kilometre Array Pathfinder (ASKAP) – see <cit.> and Hi survey descriptions for the Widefield ASKAP L-band Legacy All-sky Blind SurveY (WALLABY) in <cit.> and <cit.>; and * MeerKAT <cit.>, with local <cit.> and ultra-deep <cit.> Hi surveys planned. The scale and rate of data collection from these programs provide a first opportunity to prepare for the future of Hi astronomy that will occur with the Square Kilometre Array (SKA). 
Using WALLABY as an example, these surveys will produce three main categories of data: * Large-scale survey cubes. Over a period of five years, WALLABY is expected to cover up to 1.4π sr of the sky with ∼ 550 full-resolution spectral cubes. Each cube is anticipated to have 4200 × 4200 spatial pixels and 7776 spectral channels, requiring ∼ 600 Gigabytes (GB) per cube. The total data storage required for WALLABY will exceed 1 Petabyte. * Small-scale source cubelets. By running the Source Finding Application <cit.> on the survey cubes, candidate source cubelets can be extracted and stored separately, or simply have the coordinates of their bounding boxes within the survey cubes stored [see <cit.> for an overview, and <cit.> for a comparison of Hi source finders]. As source cubelets take up only a small fraction of the survey cubes, this is a much more manageable data volume to work with. Estimates of the number of Hi detections from WALLABY exceed 200,000 sources. Approximately 15–20 % of these sources are expected to be spatially resolved (i.e. where the spatial distribution of Hi is visible, which is anticipated to require at least 3-4 resolution elements or synthesised beams across the source). * Catalogues of derived data products. Along with the key parameters (e.g. position, velocity dispersion, Hi flux) generated by source finders such as SoFiA and Selavy <cit.>, further automated processing and analysis tasks can provide additional data. This includes activities such as disk-based model fitting [e.g. TiRiFiC <cit.>, ^3DBAROLO <cit.>, or 2DBAT <cit.>, and see also the description of the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) in <cit.>], computation of integral properties (e.g. total Hi mass, star formation rates), or cross-matching with optical/infrared catalogues. Each of these data products will aid the development of insight and improved understanding of Hi's role in galaxy formation and evolution. §.§ Visualisation-dominated workflows The data-intensive demands of new Hi surveys have motivated the development of a number of customised tools for interactive qualitative and quantitative spectral cube visualisation <cit.>. Moving beyond the well-established and widely-utilised solutions such as Karma[https://www.atnf.csiro.au/computing/software/karma] <cit.> and CASA[https://casa.nrao.edu] [the Common Astronomy Software Applications package; <cit.>], alternatives for desktop-based visualisation and analysis include AstroVis <cit.>, SlicerAstro <cit.>, FRELLED [<cit.> using the free, open-source Blender animation software], FITS3D <cit.>, Shwirl <cit.>, and CARTA[https://cartavis.org/] <cit.>. <cit.> prototyped a solution using the Unity[https://unity.com] real-time 3D engine, which can be deployed on a desktop or operate with a variety of advanced display technologies. With their iDAVIE solution, <cit.> have successfully moved spectral cube visualisation and analysis into interactive and immersive virtual reality environments. Finally, targeting data products that greatly exceed the processing capabilities of standard desktop computers, <cit.> achieved real-time interactive visualisation of Terabyte-scale spectral cubes using a high-performance solution with graphics processing units (GPUs) and the GraphTIVA framework. For most of these examples, the workflow for visualisation and analysis of the gas in galaxies emphasises the study of one galaxy at a time. 
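As a rough cross-check of the WALLABY cube dimensions and the ∼ 600 GB per-cube estimate quoted above, the arithmetic is simple; the sketch below assumes four bytes per voxel, as for 32-bit floating-point FITS data.

```c
/* Back-of-envelope data volume for a full-resolution WALLABY survey cube,
 * assuming 32-bit (4 byte) voxels. Illustrative only. */
#include <stdio.h>

int main(void)
{
    const double nx = 4200.0, ny = 4200.0, nchan = 7776.0; /* anticipated dimensions */
    const double bytes_per_voxel = 4.0;

    double voxels = nx * ny * nchan;                 /* ~1.4e11 voxels */
    double gigavoxels = voxels / 1e9;
    double storage_gb = voxels * bytes_per_voxel / 1e9;

    printf("Voxels per cube  : %.1f Gigavoxels\n", gigavoxels);   /* ~137 Gvox */
    printf("Storage per cube : %.0f GB (approx)\n", storage_gb);  /* ~550-600 GB */
    return 0;
}
```

At these volumes, any per-cube handling cost accumulates quickly in a one-object-at-a-time workflow.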
When the data volume is low and the data rate is slow, a great deal of human time can be dedicated to examining individual data cubes or source cubelets. While highly appropriate in an era of small surveys, this serial processing presents a bottleneck for knowledge discovery once the ASKAP and MeerKAT surveys scale up to include many thousands of spatially resolved sources. The transformation of a survey cube to a subset of source cubelets, and ultimately, a reliable, science-ready catalogue of data products can be encapsulated as a workflow. Parts of the workflow are expected to be fully automated [e.g. the Apercal calibration pipeline for Apertif surveys <cit.> or ASKAPSoft for ASKAP <cit.>]. Other stages will rely on some level of human intervention, either through computational steering (selecting parameters for the workflow, setting thresholds on source finders, etc.) or data visualisation for analysis and discovery. §.§ Survey data While future applications of the comparative visualisation strategies examined here may include the Hi surveys to be conducted with ASKAP and MeerKAT, we perform the benchmarking and VDAR evaluations using data from three extant Hi surveys that targetted nearby spiral and irregular galaxies: * WHISP: Westerbork Observations of Neutral Hydrogen in Irregular and Spiral Galaxies[http://wow.astron.nl], undertaken with the Westerbork Synthesis Radio Telescope <cit.>; * THINGS: The Hi Nearby Galaxy Survey[https://www2.mpia-hd.mpg.de/THINGS/Data.html] comprising high-spectral and high-spatial resolution data from the National Radio Astronomy Observatory Very Large Array <cit.>; and * LVHIS: The Local Volume Hi Survey[https://www.atnf.csiro.au/research/LVHIS/LVHIS-database.html], which obtained deep Hi line and 20-cm radio continuum observations with the Australia Telescope Compact Array <cit.>. We categorise the survey data products in terms of: (1) the number of sources (N_ s) in each survey catalogue; (2) the typical dimensionality of the data cubes (measured as spatial or spectral pixels); (3) the number of voxels (in Megavoxels or Mvox); and (4) the storage size (in Megabytes or MB) for an individual cube. For all three datasets, the spectral cubes were stored (and loaded into encube) using the Flexible Image Transport System (FITS) format <cit.>. See Table <ref> for further details, where we present the minimum, maximum and median values for the dimensions, voxel counts and storage sizes for the WHISP, THINGS and LVHIS catalogues. To simplify both the benchmarking investigation and VDAR evaluation, we make several minor modifications to the datasets in their published forms: * WHISP: Initial inspection of a sub-set of WHISP galaxies revealed that many of the spectral cubes have high levels of flux (relative to the peak source flux) at either end of the spectral band. Rapid identification of such systematic effects is an example of the type of SIMD quality control activity that comparative visualisation can address (see Section <ref>). For all of the WHISP cubes, we created new FITS files where we set the data values in the first eight and last eight spectral channels to zero. This does not change the load times for the mock surveys but does improve the default visualisation via texture-based volume rendering. * THINGS: We did not use the spectral cube for NGC 3031 (M81) in our benchmarking. 
As NGC 3031 is a nearby grand design spiral in Ursa Major, the spectral cube is much larger than other galaxies in the sample, with 2201 × 2201 spatial pixels and 178 spectral channels. The file size of 3.45 GB is approximately half of the available memory on a GTX1080 GPU. Such a large source would not be typical of new extragalactic sources discovered with blind surveys. * LVHIS: A spectral cube for NGC 5128 (LVHIS 048) was not available from the survey website, and we note a replication of data between sources LVHIS 014 and LVHIS 016, which are both identified as the dwarf irregular galaxy AM 0319-662. Removing LVHIS 016 and LVHIS 048 from the samples leaves us with N_ s = 80. § BENCHMARKING COMPARATIVE WORKFLOWS In this section, we report on benchmarking activities undertaken with the implementation of encube on the Swinburne Discovery Wall. §.§ Benchmarks Previous system benchmarks reported in <cit.> were performed with the Monash CAVE2^ TM. For deployment on the Swinburne Discovery Wall, we report: (1) the total (i.e. parallel) load time, T_ Load, for a configuration displaying N_ cube spectral cubes; and (2) the steady-state minimum frame rate, F_ rate, in frames/second. We consider both the frame rate per column, looking for variations in performance, along with the overall mean, standard deviation, and median of F_ rate. Frame rate quantities are calculated from the S2PLOT displays on columns 2 to 5 (see Figure <ref>). Column 1 is used for additional management and coordination tasks, and in order to access the user interface in the web browser, the S2PLOT display is not resized over both 4K-UHD monitors. The higher F_ rate values reported for column 1 show the overall reduced graphics workload when data is visualised on one 4K-UHD monitor instead of two. We obtained a total of 54 independent benchmarks for five different configurations (Sets A–E), displaying N_ cube = 20, 40, 80, 120 or 180 spectral cubes in total using the per-column configurations summarised in Table <ref>. The main limiting factors on N_ cube are the available GPU memory (8 GB/GPU for each of the five NVIDIA GTX1080 GPUs of the Swinburne Discovery Wall) and the number of columns of monitors. A simple upgrade path to improve performance is to replace these five older-generation GPUs with higher-memory alternatives. The benchmark configurations were generated comprising either spectral cubes from a single survey (denoted as [W]HISP, [T]HINGS or [L]VHIS) or from the combination of the three input surveys (denoted as [C]ombination). For scenarios where N_ cube exceeds the survey size, N_ s (see Table <ref>), random sampling with replacement is used to generate an appropriately-sized data set. For the combination survey, random sampling with replacement is used to generate a mock survey that is roughly equally split between the three input catalogues. Figure <ref> demonstrates the use of the two different colour-mapping methods for a mock LVHIS survey with 180 spectral cubes. The top panel uses a heat-style colour map, while the bottom panel is coloured based on the relative velocity with respect to the middle spectral channel, which is assumed to be the kinematic centre. To mitigate the impact of memory caching on measurements of T_ Load, we generated three independent combinations of spectral cubes for each of the W, T, L and C configurations. A single benchmark value of T_ Load was obtained for each of the three alternatives, along with the measurements of F_ rate. 
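The construction of these mock configurations (sampling without replacement when the catalogue is large enough, and with replacement otherwise) can be sketched as follows; this is illustrative C, not the scripts used to prepare the benchmarks.

```c
/* Sketch of mock-survey selection: draw cube indices from a catalogue,
 * with replacement when the requested configuration exceeds the catalogue
 * size. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Fill 'picked' with n_cube indices drawn from a catalogue of n_src cubes. */
static void sample_catalogue(int n_src, int n_cube, int *picked)
{
    if (n_cube <= n_src) {
        /* Sampling without replacement: Fisher-Yates style shuffle. */
        int *idx = malloc(n_src * sizeof(int));
        for (int i = 0; i < n_src; i++) idx[i] = i;
        for (int i = n_src - 1; i > 0; i--) {
            int j = rand() % (i + 1);
            int t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }
        for (int i = 0; i < n_cube; i++) picked[i] = idx[i];
        free(idx);
    } else {
        /* Sampling with replacement when the request exceeds the catalogue. */
        for (int i = 0; i < n_cube; i++) picked[i] = rand() % n_src;
    }
}

int main(void)
{
    srand((unsigned) time(NULL));
    int picked[180];

    /* e.g. a 180-cube LVHIS-only configuration drawn from an 80-source catalogue */
    sample_catalogue(80, 180, picked);
    printf("First five picks: %d %d %d %d %d\n",
           picked[0], picked[1], picked[2], picked[3], picked[4]);
    return 0;
}
```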
For the 80-cube instance, we note that all LVHIS cubes are used, but they are randomly assigned between the five columns of the Discovery Wall for each benchmark instance. We did not generate configurations with N_ T > 80 as these data volumes exceed the memory capacity of the GPUs. The THINGS galaxies are the highest-resolution spectral cubes considered in this study, and are not as representative of the typical resolved or partially-resolved new detections that will arise from ASKAP or MeerKAT Hi surveys. Due to the presence of differing numbers of key-value pairs in the FITS headers, there is slight variation (see Table <ref>) in the ratio between V_ Store (the total data volume in GB) and N_ vox (the total number of voxels in Gigavoxels) for the 54 independent survey configurations. The result of a least-squares fit to these two quantities was: V_ Store = 4.07 N_ vox - 0.084 , with the mean and sample standard deviation between measured and modelled values for V_ Store calculated to be -9.4 × 10^-6 GB and 0.13 GB respectively. For simplicity, we can approximate V_ Store ∼ 4 N_ vox as expected for a data format using four bytes per voxel. §.§ Procedure All of the spectral cubes are stored on the workstation associated with column 1 of the Swinburne Discovery Wall (the Master Node – see Figure <ref>), and the other workstations access this data through a network file system (NFS) mount (see <ref>). Consequently, we expect that the limiting factors on T_ Load are: (1) the network bandwidth between each Process and Render workstation and the Master; (2) the read time from the NFS-mounted drive; and (3) the processing overheads due to pre-computation of statistical parameters, as noted at the end of <ref>. The following procedure was used to conduct each of the benchmark trials: * The set of spectral cubes is randomly selected either without replacement (when N_ cube≤ N_ s) or with replacement, and a database file is generated in the comma-separated values (CSV) format required by encube. * Symbolic links are generated to each of the N_ cube spectral cubes, to minimise the duplication of data on the Master workstation. * Modifications to the encube configuration file (keyword-value pairs using JavaScript Object Notation[JSON: https://www.json.org/json-en.html]) are made, specifically the number of rows and columns of S2PLOT panels per column of the Discovery Wall, the total number of panels per workstation, and the names of the workstations. * Encube is launched from the Master workstation using the JSON configuration file, with calls to start the software on the Process and Render nodes. Socket connections are established between the Master and the Process and Render nodes, and a port is opened for connection to the user interface (UI). * The encube UI is activated as a web-page in the Firefox browser on the Master machine. The UI displays the database of spectral cube files. The required files are selected and timing for T_ Load commences on mouse-clicking the Load button. * Timing ends when all spectral cubes are displayed. As timing is performed by hand, all times are rounded up to the nearest whole second to account for the timekeeper's reaction time. * For the subset of configurations where frame rates are also recorded on a per-column basis, an autospin signal is triggered from the UI which causes all of the spectral cubes to rotate around the vertical axis. 
At each of the five keyboards attached to the columns (see Figure <ref>), the d key is pressed, activating the S2PLOT graphics debug mode, which reports the instantaneous frame rate (measured over a moving window of 5 seconds duration). After each spectral cube has completed several complete rotations, the lowest measured frame rate is recorded. This presents the worst-case scenario, as the frame rate is a strong function of both the viewing angle of a spectral cube and the fraction of the screen that is mapped to data voxels. * Once benchmark quantities have been recorded, a signal to stop the encube instances is initiated from the UI, and all of the processes are stopped from the Master workstation. It takes approximately 60 seconds for all nodes to release their socket connections ready for the next full iteration of the procedure. The outcomes of the benchmarks are reported as follows: * A statistical summary (mean, sample standard deviation, and median) of T_ Load for the three independent instances of each survey configuration is presented in the final two columns of Table <ref>. * The survey load time is plotted as a function of the storage volume in the left-hand panel of Figure <ref>. All 54 independent benchmarks for T_ Load are presented, with symbols for WHISP (squares), THINGS (circles), LVHIS (triangles) and the Combination survey (diamonds). * Individual values, and statistical characterisation of F_ rate is presented in Table <ref>. A subset of 21 configurations was considered here: Set A, with N_ cube = 20 and Set E, with N_ cube = 180. * The minimum frame rates for each of columns 2-5 for Set A (circles) and Set E (triangles) is plotted in the right-hand panel of Figure <ref> as a function of the mean memory per GPU on the Discovery Wall. A linear relationship exists between T_ Load (s) and V_ Store (GB), with a least squares fit result: T_ Load = 8.07 V_ Store + 4.58 . The mean and sample standard deviation between measured and modelled values for T_ Load were calculated to be 5.6 × 10^-4 seconds and 13.9 seconds respectively. The Pearson correlation coefficient between T_ Load and V_ Store was r = 0.98. For completeness, we find: T_ Load = 32.83 N_ vox + 4.063 with N_ vox in Gigavoxels. We discuss the implications of our benchmarking activities in Sections <ref> to <ref>. In the next section, we provide details of our VDAR evaluation. § VISUAL DATA ANALYSIS AND REASONING STUDY <cit.> <cit.> proposed a taxonomy for understanding and evaluating visualisation methods. We select the VDAR approach to examine typical survey-scale discovery-based research processes, relevant for current and future extragalactic Hi surveys. VDAR includes methodologies for evaluating the effectiveness or efficacy by which a visualisation tool helps to generate domain-specific actionable knowledge or understanding. VDAR methods, which often are based on case studies, investigate “the tool used in its intended environment with realistic tasks undertaken by domain experts” <cit.>, with an emphasis on the process rather than measurements of outcomes. Our user group for the VDAR study comprises only the authors of this work. This cohort includes domain experts (i.e. Hi astronomers with relevant experience in the observation, analysis and visualisation of spectral cubes), as required with the VDAR methodology. We assert that these experiences are representative of the broader Hi research community. 
Alternative evaluation methodologies for visualisations and visualisation systems <cit.> that we did not pursue include Evaluating Collaborative Data Analysis (CDA), which focuses on the process of collaboration and how it is supported by a visualisation solution, and User Performance (UP), which uses controlled experiments to measure, for example, the time taken for different users to complete tasks. As a point of comparison, <cit.> used the UP methodology to measure task performance when novice and expert participants completed an object identification activity using either a standard desktop monitor or a TDW. To provide relevant scenarios for the VDAR study, we consider three important SIMD processes that may be required when analysing extragalactic Hi survey data: (1) quality control of individual candidate spectral cubes; (2) candidate rejection, whereby false-positive detections from automated source finders are rejected; and (3) morphological classification, identifying and sorting sources into categories based on observed structural or kinematic properties. These three processes currently require some level of visual inspection [which may include the use of either projected moment maps or 3D visualisation methods, depending on the workflow preferences of the researcher(s) involved] in order to produce reliable, science-ready catalogues from large-scale, next-generation surveys. It is important to note that our VDAR study does not intend to demonstrate new knowledge about any of the three input Hi surveys – WHISP, THINGS, and LVHIS – as all have been well-studied in many other contexts. They stand in as proxies for future Hi survey data products that are, potentially, being viewed for the very first time by members of the research team. As such, there may be unexpected, or unexplained, features that are present in the data products, necessitating appropriate follow-up actions once they have been identified. Alternatively, the comparative visualisation stage may reveal that all is well with automated calibration or processing steps (e.g. model-fitting) at an early stage of science operations, thus serving its purpose. For a related example where the use of an alternative display technology evolves throughout the lifetime of an astronomical research project, see Section <ref>. §.§ Quality control When an Hi source finding pipeline is applied to a large-scale survey cube, the output is a set of individual source cubelets. Prior to their use in further analysis, there is value in performing by-eye quality control, to ensure that there are no significant issues with the data quality. This step would be expected to include looking for: (1) bad channels; (2) calibration errors such as poor continuum subtraction; (3) objects that have not been correctly extracted, such as extended sources that exceed the boundaries of the extracted cubelet; and (4) radio frequency interference. The VDAR study we performed to understand the quality control process relates to our observation when first visualising a sub-set of WHISP galaxies with encube. As noted in Section <ref>, spectral channels at both ends of the band-pass contain excess flux. We illustrate this issue in the top panel of Figure <ref>, using an 80-cube configuration. The excess flux is visible in 77 of the cubes displayed. This is seen as the strong blue and red features in each cube, making it difficult to see the WHISP galaxies themselves. 
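The remedial action, described in the following paragraph and already noted in the survey data preparation, is to blank the affected edge channels. A minimal CFITSIO sketch of such a channel-blanking program is given below; it assumes a simple three-dimensional floating-point image in the primary HDU and is not the exact program used to prepare the survey data.

```c
/* Blank the first and last N_EDGE spectral channels of a 3D FITS cube,
 * writing the result to a new file. Minimal sketch (assumes a simple
 * image in the primary HDU); not the exact program used. */
#include <stdio.h>
#include <stdlib.h>
#include "fitsio.h"

#define N_EDGE 8

int main(int argc, char *argv[])
{
    fitsfile *in = NULL, *out = NULL;
    int status = 0, bitpix, naxis;
    long naxes[3] = {1, 1, 1};

    if (argc != 3) {
        fprintf(stderr, "usage: %s input.fits output.fits\n", argv[0]);
        return 1;
    }

    fits_open_file(&in, argv[1], READONLY, &status);
    fits_create_file(&out, argv[2], &status);
    fits_copy_file(in, out, 1, 1, 1, &status);          /* copy all HDUs */
    fits_movabs_hdu(out, 1, NULL, &status);             /* back to primary HDU */
    fits_get_img_param(out, 3, &bitpix, &naxis, naxes, &status);

    if (!status && naxis == 3) {
        long nplane = naxes[0] * naxes[1];
        float *zeros = calloc(nplane, sizeof(float));   /* one blanked plane */
        for (long c = 1; c <= naxes[2]; c++) {
            if (c <= N_EDGE || c > naxes[2] - N_EDGE) {
                long fpixel[3] = {1, 1, c};
                fits_write_pix(out, TFLOAT, fpixel, nplane, zeros, &status);
            }
        }
        free(zeros);
    }

    fits_close_file(in, &status);
    fits_close_file(out, &status);
    if (status) fits_report_error(stderr, status);
    return status;
}
```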
With encube, it is immediately clear that a quality control issue is present and is impacting a sizeable portion of the survey. From Table <ref>, it takes less than 90 seconds to load the 80 WHISP cubes, and then less than 60 seconds to identify the 3 cases that do not appear to be affected. Performing this task in a serial fashion would require individual loading and inspection of spectral cubes: it would take much longer than 150 seconds to determine the extent of the quality control issue in order to take an appropriate action. Our solution was to replace data values in the first eight and last eight channels of each WHISP spectral cube. This has the desired effect, revealing the kinematic structures of the sources (see the lower panel of Figure <ref>). There will be an additional quantity of time required to resolve any quality control issue. In this case, we needed to write and execute a C-language program using the CFITSIO[https://heasarc.gsfc.nasa.gov/fitsio/] <cit.> library to create modified FITS-format data cubes for the WHISP galaxies. For a future Hi survey, it may require modification or re-tuning of an automated calibration pipeline. However, this time is independent of whether the quality control visualisation is approached in a serial or parallel fashion. Indeed, comparative visualisation provides a more rapid demonstration that the intervention had the desired effect. Our approach to comparative quality control with encube is consistent with the model of sensemaking presented by <cit.>. Here, our use of the Discovery Wall has two dimensions: (1) a foraging loop, organising data, searching for relations, and gathering evidence; and (2) a sensemaking loop, where alternative hypotheses are posed and examined, leading to a presentation of the outcomes. In the foraging loop, we determine that a quality control issue exists, as the initial volume renderings are not consistent with the expected profiles of Hi-detected sources. This issue impacts a significant number of spectral cubes in the sample (77 out of 80). Through physical navigation (i.e. moving to different locations near the Discovery Wall), the viewer can change their attention from a single object to an ensemble in order to gather evidence regarding the possible cause of the failed visualisations. In the sensemaking phase, we decide that a first course of action is to remove the impact of the excess flux in all spectral cubes, and visualise the outcomes. Further investigation could include selecting the subset of those spectral cubes most strongly impacted, in order to determine the cause(s) of the excess flux. §.§ Candidate rejection An unwanted outcome of automated source finders is the generation of false-positive detections. This is particularly true in their early phase of operation of new survey programs, when source finders may not have been tuned optimally to the specific characteristics of the data. But false-positives may persist throughout the lifetime of a survey. One way to improve the accuracy of source-finders is to raise the acceptance threshold, so that fewer candidates make it through the processing pipeline for further inspection and analysis. This approach reduces the discovery space, with many interesting objects remaining undetected. By lowering the acceptance criteria, more false candidates will need to be reviewed and ultimately rejected. This can be a particularly labour intensive phase. 
Visual inspection is the simplest way to distinguish between true sources and false detections, but may require an appropriate level of expertise. Here, again, quality control processes will be crucial, as individual cubelets may suffer from anomalies from processing, calibration, or interference. Our bespoke visualisation environment permits rapid inspection and comparison of many sources at the same time, improving the way that decisions are made regarding the nature of candidates. The VDAR study we performed to understand the candidate rejection process was to: * Load one of the 80-cube combination surveys (Set C), with T_ Load ∼ 150 seconds. The combination survey includes a high proportion of spatially resolved galaxies from the THINGS and LVHIS catalogues. * Visually inspect every source, looking for the spatially resolved galaxies, and then identifying which of these did not immediately match the expected template of a grand design spiral galaxy. It took less than three minutes to visually inspect all 80 cubes. While some resolved, non-spiral galaxies were very easy to identify, others required additional time in order to reach a decision. Here, the use of the volume rendering technique allows for individual sources, or sets of sources, to be rotated such that either the spatial or kinematic structure can be used to reach a decision. Figure <ref> shows columns 2–5 of the Swinburne Discovery Wall, with labels under the image used to identify five sources of interest (A–E): * Source A (THINGS, NGC 3077) is spatially resolved, but shows a disrupted Hi structure. NGC 3077 is connected to a larger neighbouring spiral galaxy, M81, by an Hi bridge <cit.>; * Source B (LVHIS, ESO 245-G007) shows a “tube-like” feature (readily apparent when rotating the spectral cube) surrounding a central, somewhat spatially unresolved object; * For source C (WHISP, UGC01178), there is no visible flux, which is likely due to a poor choice of the default visualisation parameters; * Source D (LVHIS, AM 0319-662) comprises two Hi detections, with the more prominent source offset from the centre of the cube. The central LVHIS source is a dwarf irregular galaxy, a companion to NGC 1313 at the lower right of the cube <cit.>; and * Source E (THINGS, NGC 5236) is a spiral galaxy, but the overall blue feature extending across the source indicates some additional processing may be required. In particular, this feature can be explained: this source, Messier 83, is known to have an Hi diameter much larger than the VLA primary beam with which it was observed in the THINGS project. The overview provided by many small-multiples rapidly highlights this source's distinctive feature, which was not present in any of the other 79 sources in this sample. Identification of these five “anomalous” cases occurs rapidly, when the viewer is able to both see a large sample (i.e. comparative visualisation, by stepping back from the Discovery Wall) and investigate an individual object in more detail (by moving closer to view, or interact with, an object of interest). To close the loop on candidate rejection, a minor modification to encube would allow each spectral cube to be tagged in real time as a true or false detection, which would then be fed back to the source finder to improve the true detection rate. §.§ Morphological classification Once a catalogue of robust detections has been gathered, the nature of the sources must be considered. For previously known objects, a morphological classification has likely already occurred. 
For new discoveries, an initial classification can be provided. For future Hi surveys conducted with wide-field interferometric imaging, the extended structure of many sources will be visible. This includes detecting the presence of low column density features such as bridges, tails, etc. Consequently, visual morphological classification of complete, unbiased, sub-populations of sources will be possible. Indeed, with a statistically significant population of Hi galaxies, selected in an unbiased (i.e. blind survey) fashion, it becomes possible to develop new morphological categories – beyond the standard Hubble classification – that may correlate with the local or global environment or integral properties, such as the Hi mass. The morphological classification process shares many similarities with the candidate rejection phase, and we appeal to the same VDAR study as in Section <ref>. The two features of our bespoke visualisation environment that provide an alternative approach to morphological classification, at scale, are: (1) the use of volume rendering, which allows each spectral cube to be rotated around any axis, providing immediate access to both spatial and kinematic information; and (2) the comparative nature of the display configuration, which makes it easy to go back-and-forth between specific objects in order to reach a decision regarding the classification. This might mean a change in the outcome of an initial or even pre-existing classification, or the recognition that a new sub-class of objects had been identified. § DISCUSSION In this Section, we interpret the benchmarking results obtained with encube on the Swinburne Discovery Wall. By considering survey sizes, data load times, visualisation configurations and interaction frame rates, we estimate the visualisation throughput, which we present in terms of the number of sources that could be examined in a given period of time. As a reflection on the role for bespoke visualisation environments in astronomy, we also discuss the evolution of advanced visualisation systems when used in astronomical research projects. §.§ Load times In order to be a useful adjunct to desktop-based visualisation methods, an alternative display solution needs to provide an appropriate level of computational performance. Regardless of whether a single spectral cube or multiple cubes are to be visualised, there is an unavoidable overhead while the data is transferred from its storage location into the computer memory. While this latency may not be as noticeable when working with a single cube, there is a cumulative loss of time when working with large surveys. This effect increases if individual cubes are loaded multiple times for comparative tasks. The most important factors in the load time are the network and internal transfer bandwidths and the volume of data. Our benchmarking results revealed a strong positive correlation between T_ Load and V_ Store across a range of storage volumes from 1.17 GB to 34.73 GB. This is consistent with our expectation that each of: (1) the data access and load phase, where each Process and Render node must transfer data via the NFS mount from the Master node; (2) the pre-computation performed for each spectral cube; and (3) the initial transfer of data to the GPU for texture-based volume rendering has O(N) algorithmic behaviour. If any one of these processes imposed a bottleneck for the increasing total data volume, we would expect to see deviations away from the linear scaling. 
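For reference, the fitted relations reported earlier (T_ Load = 8.07 V_ Store + 4.58 s and V_ Store ∼ 4 N_ vox) can be evaluated directly; the sketch below simply reproduces that arithmetic, and the coefficients are specific to this hardware and dataset.

```c
/* Evaluate the fitted load-time model from the benchmarks:
 *   T_Load = 8.07 * V_Store + 4.58   (seconds, V_Store in GB)
 *   V_Store ~ 4 * N_vox              (GB, N_vox in Gigavoxels)
 * Coefficients are specific to the Swinburne Discovery Wall set-up. */
#include <stdio.h>

static double t_load_from_gb(double v_store_gb) { return 8.07 * v_store_gb + 4.58; }
static double v_store_from_gvox(double n_gvox)  { return 4.0 * n_gvox; }

int main(void)
{
    /* Example: a 180-cube combination configuration of ~8.4 Gigavoxels (~34 GB). */
    double gvox = 8.4;
    double gb   = v_store_from_gvox(gvox);
    double secs = t_load_from_gb(gb);

    printf("V_Store ~ %.1f GB, predicted T_Load ~ %.0f s (%.1f min)\n",
           gb, secs, secs / 60.0);   /* ~276 s, i.e. under 5 minutes */
    return 0;
}
```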
With the Swinburne Discovery Wall hardware, we can load 180 spectral cubes drawn from: (1) the LVHIS survey in under 2 minutes; (2) the WHISP survey in under 3 minutes; and (3) combinations of WHISP, THINGS and LVHIS cubes in under 5 minutes. Using the median T_ Load for WHISP-only surveys in Table <ref>, we can consider alternative configurations that reach the same total number of data cubes, but through multiple loads of smaller quantities at a time. An additional overhead here is that we need to wait T_ Socket = 60 seconds for the Process and Render nodes to release their socket connections before the next configuration can be loaded. Expected total load times (rounded up to the nearest half minute) are as follows: * Nine sets of 20 WHISP cubes will load in 11.5 minutes (9 × 21 + 8 × T_ Socket = 669 s); * Four sets of 40 WHISP cubes plus one set of 20 WHISP cubes will load in 7.0 minutes (4 × 38 + 1 × 21 + 4 × T_ Socket = 413 s); and * Two sets of 80 WHISP cubes plus one set of 20 WHISP cubes will load in 5.0 minutes (2 × 73 + 1 × 21 + 2 × T_ Socket = 287 s). By increasing the total number of cubes displayed on the Discovery Wall, we benefit from parallelisation across the Process and Render nodes during the pre-computation phase and we do not experience the system latency imposed by T_ Socket. The advantage of using the 4K-UHD monitors is that we retain a reasonable image resolution per source even when there are 18 spectral cubes per individual monitor (36 cubes per column) of the Discovery Wall. §.§ Frame rates Once a configuration of spectral cubes has been loaded and displayed on the Discovery Wall, the most important metric is the frame rate. The higher the frame rate, the smoother the interaction experience when modifying the location of the camera (e.g. when controlling the visualisation of all the spectral cubes simultaneously via the user interface). For encube, there are several key observations that we make: * The frame rate depends on the size of the S2PLOT window, such that expanding over both 4K-UHD monitors per Process and Render node decreases the frame rate. This is seen in the per-column frame rates in Table <ref>, where F_1 values (the Master node) are generally higher than those of the other four columns (F_2 to F_5). In order to display the user interface in the web browser on the Master node, we do not extend the S2PLOT window across both monitors. * There are variations in the frame rate as a function of viewing angle, which depends on the relative number of voxels along each axis of a cube [see, for comparison, Figure 5 of <cit.>]. By reporting the lowest measured frame rates after each cube has undergone several complete rotations, we are presenting worst-case outcomes on interactivity. * Frame rates can decrease when zooming in on details. The amount of processing work performed by the GPU depends on the fraction of screen pixels that contain visible data. When zoomed out, a larger percentage of each panel comprises non-data (i.e. background) pixels. We did not record the effect on frame rates as the default configurations for 180 cubes present a comparable ratio of data to total pixels as occurs when zooming in with one of the lower N_ cube configurations. Setting a target of 10 frames/s as an indicator of reasonable interactivity with the data cubes, we exceed this for all of the 20-cube mock surveys (mean and median frame rates in Table <ref>), and for configurations of 180 sources selected entirely from the WHISP and LVHIS surveys. 
For the 180-cube combination configuration, which includes a randomly-selected sample of 60 THINGS cubes, the mean and median frame rates fall below 5 frames/s. Here, the higher frame rates measured for spectral cubes assigned to the fifth column of the Discovery Wall (column F_5 in Table <ref>) occur as only 5-6 out of 36 spectral cubes were randomly selected from the THINGS survey. If we had “perfect” randomness in the construction of the mock survey samples, we would expect 12 THINGS galaxies assigned to each column. Instead, columns two to four are required to perform much more processing than column five per screen refresh (more memory or total voxels per GPU), resulting in the lower frame rates for (F_2 – F_4) when a single GPU is driving two 4K UHD monitors. §.§ Throughput One of the key metrics we wish to ascertain is the visualisation throughput, V_ tp, which is the number of source cubelets that can be inspected in a given period of time, measured in units of sources/hour. For a single user, it is not expected that a peak V_ tp could be sustained throughout an entire day, but it is reasonable to assume that rates of 25-50% of V_ tp might be achievable for extended periods of time. This is compatible with a work pattern for quality control or source-finding candidate rejection where the candidates from the latest large-scale survey cube(s) are assessed daily. §.§.§ Multi-object workflows To estimate the throughput for a multi-object workflow, we consider two scenarios using the combination mock survey: * An 80-cube configuration. The full dataset loads in around T_ Load = 160 seconds (mean load time plus one standard deviation). An initial inspection can occur in T_ Inspect = 180 seconds (see Section <ref>). If we assume 25% of sources require additional action, and the recording of that action takes 60 seconds, then T_ Action = 1200 seconds. * A 180-cube configuration. The full dataset loads in T_ Load = 300 seconds. The time required for the initial inspection is assumed to scale linearly with the number of sources, such that T_ Inspect∼ 405 seconds. With 25% of sources requiring a 60-second action to be recorded, then T_ Action = 2700 seconds. The total time required for the completion of a SIMD process with encube is then: T_ SIMD = T_ Load + T_ Inspect + T_ Action + T_ Socket where T_ Socket, introduced in Section <ref>, is a system latency. Using the values proposed for these four quantities, we suggest that T_ SIMD(80 ) = 1600 seconds (26.7 minutes) and T_ SIMD(180 ) = 3465 seconds (58 minutes). Taken together, we estimate that V_ tp = 160-180 sources/hour seems reasonable for the completion of one of the three SIMD tasks we have considered in our VDAR study. Moreover, we have assumed only a single astronomer completing the task, whereas the large-format workspace of the Discovery Wall comfortably accommodates a small group working together. §.§.§ Comparison with single-object workflows As a point of comparison, we consider a single-object workflow, i.e. one source is loaded and visualised at a time with encube and using the Swinburne Discovery Wall hardware. A relationship between the single object load time and the FITS filesize was determined using a minimal sample of representative spectral cubes from each of the WHISP, THINGS and LVHIS datasets. We select the cubes with the smallest and largest filesizes, along with a cube that had the median file size (see Table <ref>). 
We measure load times for visualisation with encube running only on the head node, where the data is stored, and on a remote machine over the network via the NFS mount. We used a manual timing method with a reaction time error of 0.5 seconds. As shown in Figure <ref>, we find minimal differences in load times from the local disk (filled circles) or via the remote NFS mount (open circles). Performing a least squares fit to the combined data, we obtain: T_ Load = 37.71 V_ Store - 1.04 seconds, with a Pearson correlation coefficient between T_ Load and V_ Store calculated to be r = 0.997. Using the average and median sample survey file sizes from Table <ref>, we compare the single-object and multi-object load times for the 80-cube WHISP, THINGS, LVHIS and combination configurations – see Table <ref>. The ratio of the single-to-multi object load times was calculated for each configuration, showing a 4-5 times speed-up in load times using the five compute nodes of the Swinburne Discovery Wall. This is not surprising given the nearly-perfect parallelism expected in this stage of the workflow, but with a slight input/output bottleneck at the head node where all of the data is stored. §.§.§ Estimates for future extragalactic Hi surveys In Figure <ref>, we estimate and compare the throughput for multi-object and single-object SIMD workflows. In addition to the LVHIS and WHISP extragalactic Hi surveys, we obtain preliminary results for the APERTIF and WALLABY surveys; these values are indicative only, as the underlying analysis is yet to be completed. We base our throughput predictions on 10,000 APERTIF sources (in the velocity range 1,000 to 10,000 km/s) with a mean storage volume of 0.62 MB/source cubelet[K.Hess, private communication] and 210,000 sources in WALLABY with a mean storage volume of 3 MB/source cubelet.[Analysis by author CM] The time to inspect each source is highly dependent on the SIMD task. For the candidate rejection VDAR activity (Section <ref>), we performed an initial visual scan across 80 spectral data cubes displayed on the Swinburne Discovery Wall in three minutes, or 2.25 seconds/cube. This is achievable once all cubes have been loaded, using physical navigation to rapidly move around the display space. With the continual cognitive set-shifting required for a lone astronomer to load and inspect one cube at a time, regardless of the display and visualisation software used, it may take 10-30 seconds per cube even at peak performance. Moreover, the single-object workflow removes the opportunity to perform comparisons, or rapid revisits to double check that a previously-viewed source had been inspected adequately. For each survey, we consider three scenarios with different follow-up action times: (1) T_ Action = 0, such that inspection occurs but no additional actions are required for any source; (2) T_ Action = 30 s/source for 10% of sources; and (3) T_ Action = 60 s/source for 25% of sources. Symbols are used in Figure <ref> to differentiate between the inspection times, with T_ Inspect = 3 s/source for a multi-object workflow (filled circle), and T_ Inspect = 10 s/source (open triangle) or T_ Inspect = 30 s/source (plus symbol) for single-object workflows. For large survey sizes, N_S, these components of T_ SIMD dominate over T_ Load regardless of whether a single-object or multi-object workflow is used. The minor contribution from T_ Socket has been omitted. 
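The comparison in Figure <ref> can likewise be summarised in a few lines. The sketch below estimates survey-scale throughput for the multi-object and single-object workflows using the per-source inspection and action times assumed above; it neglects T_ Load and T_ Socket, which, as noted, are minor contributions at large N_S, and is intended to illustrate the scaling rather than reproduce the figure.

# Sketch: estimated throughput (sources/hour) for multi-object versus
# single-object workflows, using the per-source times assumed in the text.
# Load and socket overheads are neglected (minor for large surveys).

def throughput(t_inspect, action_fraction, t_action):
    # Sources/hour when each source takes t_inspect seconds, plus t_action
    # seconds of follow-up for a given fraction of sources.
    seconds_per_source = t_inspect + action_fraction * t_action
    return 3600.0 / seconds_per_source

scenarios = [
    ("no follow-up actions", 0.00, 0),
    ("30 s action for 10% of sources", 0.10, 30),
    ("60 s action for 25% of sources", 0.25, 60),
]

for label, frac, t_act in scenarios:
    multi = throughput(3, frac, t_act)         # 3 s/source, multi-object
    single_fast = throughput(10, frac, t_act)  # 10 s/source, single-object
    single_slow = throughput(30, frac, t_act)  # 30 s/source, single-object
    print(f"{label}: multi-object {multi:.0f}/h, "
          f"single-object {single_slow:.0f}-{single_fast:.0f}/h")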
In all of the scenarios we considered, the estimated throughput with a multi-object workflow exceeds that of a single-object workflow. §.§ Evolution of visualisation solutions Astronomers have developed their craft over centuries by using a combination of singular, bespoke facilities for data gathering (e.g. dedicated observatories and supercomputers) supported by widely-available, general purpose resources for data analysis and visualisation (e.g. desktop and laptop computers in the digital era). We assert that a complementary role exists for dedicated advanced visualisation facilities that can provide a very different experience to that of the everyday. In the same way that astronomers do not expect to operate their own personal 64-metre radio telescope or 8-metre class optical/infrared telescope, there should not be an expectation, or need, for all astronomical institutions to operate a local advanced visualisation facility. What is more important is that when such facilities are available, there is a community of interested and potential users who are able to take advantage of them. As astronomical teams prepare themselves for the next phase of petascale and exascale data collection, new visualisation strategies that enable and enhance survey-scale discovery-based research processes will be required. Our VDAR evaluation demonstrates how comparative visualisation (implemented using encube and the Swinburne Discovery Wall) could be applied to SIMD visual analysis tasks that would not otherwise be feasible using a standard desktop configuration. Until a survey project is underway, the exact configuration of software and hardware that provides the most productive approach to advancing scientific knowledge may not be known. As the projects develop, familiarity with the strengths and weaknesses of the instrumentation and software-pipelines will also grow. The strategies for analysis and visualisation adopted during the first year of data collection may not be the same as those deemed essential in the years that follow. Some approaches to analysis and visualisation become essential throughout the lifetime of the individual research project where they were first adopted, perhaps spreading further into the discipline to become ubiquitous. Other alternatives may be relevant for a short period of time, or may only need to be accessed by a few members of a research team, but provide a much-needed distinctive perspective that serves to accelerate discovery. By presenting alternatives to current ways of working, astronomers can consider for themselves whether a combination of options will assist them at various stages of their research workflow. As an illustrative example of the evolution in the use of display environments, we look to the real-time, multi-wavelength Deeper Wider Faster (DWF) fast transient detection program <cit.>, where the Swinburne Discovery Wall – used as a TDW without encube – has also played an important role. As an international collaboration, DWF operations rely on a core team of co-located human inspectors with access to suitable visualisation software and hardware to support their decision-making processes during high-intensity, real-time observing campaigns. Through identification of potential fast or short-lived transient events, the DWF team determines whether there is a need to trigger immediate follow-up observations (e.g. target of opportunity spectroscopic observations with one of the Keck Observatory telescopes). 
Informed by a user performance study that investigated potential roles for TDWs in supporting inspection of very high pixel-count images by individuals or small teams <cit.>, a TDW became a necessary component of the display ecology used in the DWF project. The TDW replaced an initial inefficient visualisation workflow (used during pilot observations in 2015), where the research team used laptop screens and desktop monitors to inspect each of the 60 CCD frames (4096 × 2048 pixels) per field imaged with the Dark Energy Camera [DECam; <cit.>]. Over successive observing campaigns, as reported by <cit.>, the role and configuration of the TDW changed in response to user requirements and feedback. The visual inspection tasks performed by DWF team members were modified due to improvements in scientific understanding of the categories of fast transients that were being identified in real-time (and by extension those categories that could be analysed after the short-duration observing campaigns had concluded), along with enhancements to the automated pipelines <cit.>. In turn, improvements of the automated pipeline were directly informed by the knowledge the team acquired through using the TDW. At the time of writing, while no longer essential in the DWF context, the Swinburne Discovery Wall continues to play a role during real-time DWF campaigns. At critical stages of the development of DWF, however, the TDW was a solution that was “fit for purpose” and supported team-based visual discovery tasks that were not feasible to conduct with a standard desktop-bound approach. § CONCLUSIONS The expected growth in both the volume and velocity of data from future astronomical surveys necessitates a move away from serial workflows. The comparative visualisation approach we have investigated here via benchmarking and a VDAR evaluation is not intended to replace existing alternatives, but provides a demonstration of a complementary workflow that addresses some existing – and emerging – challenges in the size and scale of astronomical surveys. Within our case study context of extragalactic Hi surveys, we anticipate that both the short and longer term use of automated pipelines will retain a stage of visual inspection and classification. We suggest that this can be achieved more successfully, and more rapidly, using a method that is not about inspecting one object at a time. As we have shown here, the encube framework operating on a tiled display wall presents a compelling alternative mode for SIMD activities. We have considered tasks that are highly repetitive, yet may need to be performed on all sources detected within a survey. Examples here include quality control, candidate rejection, and morphological classification. In all cases, as identified through our VDAR studies, encube encouraged a sensemaking process <cit.> with a foraging phase and a sensemaking loop. The comparative nature of the display – comfortably visualising 180 spectral cubes at a time, using the Swinburne Discovery Wall configuration of ten 4K-UHD monitors – supports the rapid identification of features affecting multiple source cubelets while also presenting immediate access to both the spatial and spectral data for individual objects (through our use of volume rendering). A few hours interacting with data with encube on the Discovery Wall could replace weeks to months of work at the desktop – without diminishing the importance of the follow-up detailed analysis that the desktop supports. 
We estimate a throughput of 160-180 sources/hour could be inspected using the configuration that we assessed. Both encube and the Swinburne Discovery Wall are easily modifiable and scalable, in the sense that additional columns of monitors plus computers can be added to increase the number of sources displayed at a time. Implementation of our solution at another institution requires access to: the open-source software<cit.>; one or more Linux-based computers; (ideally) multiple monitors; and an appropriate network connection between the process and render nodes and the master node where the data set is stored. Customised visualisation and analysis approaches will evolve over time as surveys progress. They should be employed during those periods that are particularly labour-intensive, while assisting in the identification of additional processes that can be fully or partly automated. Finding the appropriate balance between human inspection and automated detection may help to maximise the overall discovery potential of a workflow <cit.>. § ACKNOWLEDGEMENTS We acknowledge the Wurundjeri People of the Kulin Nation, who are the Traditional Owners of the land on which the research activities were undertaken. Christopher Fluke is the SmartSat Cooperative Research Centre (CRC) Professorial Chair of space system real-time data fusion, integration and cognition. SmartSat CRC's activities are funded by the Australian Government's CRC Program. We acknowledge the generous support of the Eric Ormond Baker Charitable fund, which helped to establish the Discovery Wall and the remote observing facility at Swinburne University of Technology. We are extremely grateful to David Barnes and Amr Hassan for their technical advice and encouragement during early phases of this work, and to Kelley Hess for assisting with understanding the preliminary APERTIF Hi survey results. This paper made use of data from: WHISP, Westerbork Observations of Neutral Hydrogen in Irregular and Spiral Galaxies <cit.>; THINGS, The Hi Nearby Galaxy Survey <cit.>; and LVHIS, The Local Volume Hi Survey <cit.>. § IMPLEMENTATION NOTES §.§ Technical matters In this section, we highlight some additional features of the implementation of encube on the Swinburne Discovery Wall. One workstation is assigned the role of the Master Node, where the manager unit and interaction unit are deployed. All five workstations act as Process and Render nodes. Figure <ref> illustrates the connections and communication pathways between the Master node and each of the Process and Render nodes. Encube is launched from a Linux terminal on the Master node, which activates the program instance on each of the Process and Render nodes. Each program instance: (1) creates and opens a socket for communication with the Master node; (2) and makes application programming interface (API) calls in C code to the S2PLOT library for interactive graphical elements. Relevant content from the configuration file hosted on the Master node is passed to the Process and Render nodes. Once the socket connections have been established, the user interface is accessed through a Web browser accessing localhost on the Master node (see Figure <ref>). S2PLOT allows for the creation of independent regions of the graphics display window, referred to as panels. For simplicity, panels are presented in encube as a uniformly tiled matrix of rows and columns. 
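As an illustration of this tiled layout, the short sketch below maps cube indices onto panel positions for a wall of five columns, each driven by one Process and Render node with two 4K UHD monitors. The 6 × 3 grid of panels per monitor is an assumed example that matches the 18-cubes-per-monitor configuration mentioned earlier; the arrangement actually used by encube is uniform but configurable and need not match this example.

# Sketch: assign cube indices to panels on a five-column wall with two
# 4K UHD monitors per column. The 6 x 3 panel grid per monitor (18 panels)
# is an assumed example layout, not a fixed property of encube.
N_COLUMNS = 5        # one Process and Render node per column
MONITORS = 2         # monitors per column
ROWS, COLS = 6, 3    # assumed uniform panel grid per monitor

def assign(cube_index):
    # Return (wall_column, monitor, panel_row, panel_col) for a cube index.
    per_monitor = ROWS * COLS
    per_column = per_monitor * MONITORS
    column, rest = divmod(cube_index, per_column)
    monitor, panel = divmod(rest, per_monitor)
    row, col = divmod(panel, COLS)
    return column, monitor, row, col

# 180 cubes fill the wall exactly: 5 columns x 2 monitors x 18 panels.
assert N_COLUMNS * MONITORS * ROWS * COLS == 180
for i in (0, 17, 18, 35, 36, 179):
    print(i, assign(i))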
The 3D geometry within an S2PLOT panel can be controlled by selecting the panel and using the attached mouse to rotate the data cube or the keyboard to zoom in or out. As each display column of the Discovery Wall is independent, it is possible to use the keyboard and mouse associated with a column in order to work with a local subset of data (see Figure <ref>). Alternatively, the location, orientation and view direction of the virtual camera can be set for each panel using an API call. This method is used when interacting with the user interface on the Master node, so that the virtual camera is updated simultaneously for all of the panels. Each Process and Render node requests and loads relevant data files from the Master node, using a drive that is accessible via the network file system (NFS). Once each Process and Render node has loaded the required data, the spectral cube is visualised using 3D texture-based volume rendering. Here, an S2PLOT callback function is associated with each panel, and once per refresh cycle, the volume rendering is generated based on the current virtual camera position. 3D texture-based rendering provides a compromise between lower-fidelity two-dimensional texture image stacks (also implemented in S2PLOT) and computationally-demanding ray-shooting. For simplicity of operation, two different colour-mapping options are provided: intensity-based, whereby a heat-style colour map is assigned from the minimum to the maximum voxel value for each spectral cube, and velocity-based mapping <cit.>. Here, the velocity data is utilised along with the voxel values, in order to provide cues as to whether neutral Hi gas is blue-shifted or red-shifted along the spectral axis with respect to the centre of the cube (assumed to be equivalent to the centre-of-mass for most systems). While completing the benchmarking and VDAR evaluation activities (described in Sections <ref> and <ref>), we chose not to invest development time in cosmetic changes to the encube user interface. In particular, the world in miniature component of the interface (see Figure <ref>) was not ideal when the number of spectral cubes visualised exceeded 40. This temporarily limits the use of some encube features, such as the ability to select and swap cubes between any of the displays in real-time. However, the overall functionality and performance of the encube process and render components is not impeded. In the implementation of encube that we benchmarked, there were some additional processing steps performed that add to the time taken to load each spectral cube. These comprise several independent complete passes through the spectral cube to calculate statistical parameters, compare actual data values with those recorded in the spectral cube metadata, and generate a histogram of data values for each spectral cube. Each of these processes has algorithmic linear scaling depending only on the number of voxels in the spectral cube. Consequently, they introduce a multiplicative factor on the time to load all of the spectral cubes. Such pre-computation is a design choice that allows the CPU memory to be freed once data is loaded onto a GPU. Accessing these values has O(1) complexity later during interactive analysis. 
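To make the role of this pre-computation step concrete, the sketch below caches per-cube statistics and a histogram with NumPy so that later queries are O(1). It is a schematic stand-in for the C-based processing described above (and collapses the several independent passes into a single NumPy pass for brevity); the particular statistics computed here are an assumption for illustration.

# Sketch: pre-compute per-cube summary values once, so that subsequent
# interactive queries are O(1). A schematic stand-in for the processing
# steps described above, not encube's actual code.
import numpy as np

def precompute(cube, nbins=256):
    # cube: 3D numpy array of voxel values for one spectral cube.
    finite = cube[np.isfinite(cube)]          # ignore blanked/NaN voxels
    stats = {
        "min": float(finite.min()),
        "max": float(finite.max()),
        "mean": float(finite.mean()),
        "std": float(finite.std()),
        "nvox": int(cube.size),
    }
    hist, edges = np.histogram(finite, bins=nbins,
                               range=(stats["min"], stats["max"]))
    stats["histogram"] = (hist, edges)
    return stats

# Example with a synthetic cube; in practice the voxels would come from a
# FITS file loaded by each Process and Render node.
cube = np.random.normal(size=(64, 128, 128)).astype(np.float32)
cached = precompute(cube)
print(cached["min"], cached["max"], cached["nvox"])  # O(1) lookups afterwards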
§.§ Future enhancements While working with encube during the VDAR evaluation, we identified several additional features or enhancements that could extend the framework's suitability for comparative visual analysis of large-scale extragalactic Hi surveys: * Add an on-screen scale indicator. As all spectral cubes are scaled to a unit cube for convenience, the physical size of individual objects is lost. * Within the user interface, allow selection or sorting of the source list by any metadata attribute, such as size, total Hi mass, or distance. * Access and display detailed metadata of a selected object or set of objects. During the present work, a trivial modification was made to toggle visibility of the name of each object within its S2PLOT display panel. * Improve the creation of the on-screen configuration, allowing more flexibility in how data is assigned to the available display space. For example, a non-uniform arrangement of panels per column could allow individual spectral cubes to be visualised at increased levels of detail, or cubes with different sizes (e.g. spatial pixel coverage or rest-frame physical dimensions) to be presented at the same scale, as demonstrated in Figure <ref>. * Include support for additional data types to be loaded and displayed, including spectral cubes from different wavelength regimes or observing modes (e.g. optical integral field units), overlay of two-dimensional images, or visualisation of one-dimensional spectra. * Provide a mechanism by which annotations could be recorded regarding individual sources, preferably through the use of speech-to-text capture and conversion. * Support interactive masking of channels via the user interface for selected subsets of cubelets, so that the issues identified with the WHISP sample could have been resolved in real-time. Such modifications could then be embedded into the dataset, by exporting the modified spectral cubes for future automated, or human, analysis.
http://arxiv.org/abs/2307.03971v1
20230708130330
What is the meaning of proofs? A Fregean distinction in proof-theoretic semantics
[ "Sara Ayhan" ]
cs.LO
[ "cs.LO", "math.LO", "03F03 (Primary), 03F07 (Secondary)" ]
What is the meaning of proofs? A Fregean distinction in proof-theoretic semantics
Sara Ayhan[I would like to thank several people for supporting me in improving this paper essentially, among them Luca Tranchini for his thorough feedback and vital input on an earlier version of this paper and also two anonymous referees for their very constructive and helpful reports. I am especially grateful to Heinrich Wansing for the numerous and encouraging occasions to discuss this paper extensively and for his valuable comments.]
Institute of Philosophy I, Ruhr University Bochum, Bochum, Germany
[email protected]
This is a post-peer-review, pre-copyedit version of an article published in the Journal of Philosophical Logic. The final authenticated version will be available online at: DOI: 10.1007/s10992-020-09577-2
The origins of proof-theoretic semantics lie in the question of what constitutes the meaning of the logical connectives and its response: the rules of inference that govern the use of the connective. However, what if we go a step further and ask about the meaning of a proof as a whole? In this paper we address this question and lay out a framework to distinguish sense and denotation of proofs. Two questions are central here. First of all, if we have two (syntactically) different derivations, does this always lead to a difference, firstly, in sense, and secondly, in denotation? The other question is about the relation between different kinds of proof systems (here: natural deduction vs. sequent calculi) with respect to this distinction. Do the different forms of representing a proof necessarily correspond to a difference in how the inferential steps are given? In our framework it will be possible to identify denotation as well as sense of proofs not only within one proof system but also between different kinds of proof systems. Thus, we give an account to distinguish a mere syntactic divergence from a divergence in meaning and a divergence in meaning from a divergence of proof objects analogous to Frege's distinction for singular terms and sentences. § INTRODUCTION In proof-theoretic semantics (PTS) the meaning of the logical constants is taken to be given by the rules of inference that govern their use. As a proof is constituted by applications of rules of inference, it seems reasonable to ask what the meaning of proofs as a whole would consist of on this account. What we are particularly interested in is a Fregean distinction between sense and denotation in the context of proofs.[We assume at least a basic familiarity with this idea, laid out in Frege's famous paper “Über Sinn und Bedeutung”, cf. <cit.> for an English translation.] This account builds up on <cit.>, where such a distinction is proposed and used in a proof-theoretic explanation of paradoxes. The notion of denotation is nothing new in the context of proofs. It is common in the literature on proof theory and PTS (e.g. 
<cit.>, <cit.>, <cit.>) to distinguish between derivations, as linguistic objects, and proofs, as abstract (in the intuitionistic tradition: mental) entities. Proofs are then said to be represented or denoted by derivations, i.e. the abstract proof object is the denotation of a derivation. The notion of sense, on the other hand, has been more or less neglected. Tranchini <cit.>, therefore, made a proposal that for a derivation to have sense means to be made up of applications of correct inference rules. While this is an interesting approach to consider, Tranchini only determines whether a proof has sense or not but does not go further into what the sense of a proof exactly consists of, so there might be further questions worth pursuing. We will spell out an account of a distinction between sense and denotation of proofs, which can be considered a full-fledged analogy to Frege's distinction concerning singular terms and sentences.[There is some literature also in the field of proof theory concerned with this Fregean distinction, however, to our knowledge, apart from <cit.> this is not concerned with the sense of derivations but with the sense of sentences: cf. P. Martin-Löf (2001). The Sense/Reference Distinction in Constructive Semantics. Transcription of a lecture given at a conference on Frege organised by G. Sundholm at Leiden, 25 August 2001, transcription by B. Jespersen, 9 August 2002: https://www.academia.edu/25695205/The_Sense_Reference_Distinction_in_Constructive_Semantics, or <cit.>.] Another question concerns the relation of different kinds of proof systems (intuitionistic natural deduction (ND) and sequent calculus (SC) systems will be considered) with respect to such a distinction. If we have two syntactically different derivations with the same denotation in different proof systems, do they always also differ in sense or can sense be shared over different systems? § CONNECTING STRUCTURE AND MEANING The basic point of departure is the simple observation that there can be different ways leading from the same premises to the same conclusion, either in different proof systems or also within one system. The focus in this matter so far has been on normal vs. non-normal derivations in ND and correspondingly on derivations containing cut vs. cut-free derivations in SC. However, there can also simply be a change of the order of rule applications that can lead to syntactically different derivations from the same premises to the same conclusion. Does this lead to a different denotation or should we say that it is only the sense that differs in such cases, while the underlying proof stays the same? §.§ Normal form and the denotation of derivations One and the same proof may be linguistically represented by different derivations. We will follow the general opinion in taking proofs to be the denotation - the semantic value - of (valid) derivations. In ND a derivation in normal form is the most direct form of representation of its denotation, i.e. the represented proof object. For our purposes we will consider a derivation to be in normal form iff neither β- nor η-conversions (cf. rules below) can be applied to it. A derivation in normal form in ND corresponds to a derivation in cut-free form in SC. In intuitionistic logic derivations in non-normal form in ND (resp. with cut in SC) can be reduced to ones in normal form (resp. cut-free form). 
These are then thought to represent the same underlying proof, just one more indirectly than the other, because, as Prawitz <cit.> says, they represent the same idea this proof is based on. In order to make sense and denotation transparent, our approach will be to encode the derivations with λ-terms. As is well known, by the Curry-Howard-isomorphism there is a correspondence between the intuitionistic ND calculus and the simply typed λ-calculus and we can formulate the following ND-rules annotated with λ-terms together with the usual β- and η-conversions for the terms. The β-conversions correspond to the well-known reduction procedures, which can be formulated for every connective in ND <cit.>, while the η-conversions are usually taken to correspond to proof expansions <cit.>. We use p, q, r,... for arbitrary atomic formulas, A, B, C,... for arbitrary formulas, and Γ, Δ,... for sets of formulas. Γ, A stands for Γ∪{A}. For variables in terms x, y, z,... is used and r, s, t,... for arbitrary terms. Term-annotated ND-rules: [⊃I]λx.t:A ⊃B*t:BΓ,[x:A] [⊃E]App(s, t):B*s: A ⊃BΓ *t:AΔ [∧I]⟨s, t⟩: A ∧B*s:AΓ *t:BΔ [∧E_1]fst(t):A*t:A ∧BΓ [∧E_2]snd(t):B*t:A ∧BΓ [∨I_1]s:A ∨B*s:AΓ [∨I_2]s:A ∨B*s:BΓ [∨E] r {x.s  | y.t}:C *r: A ∨BΓ *s:CΔ, [x:A] *t: CΘ, [y:B] [E]abort(t):A*t:Γ β-conversions: App(λx.t, s) ⇝t[s/x] 2 fst(⟨s, t ⟩) ⇝s snd(⟨s, t ⟩) ⇝t 2 r {x.s  | y.t} ⇝s[r/x] r {x.s  | y.t} ⇝t[r/y] η-conversions: λ x.App(t, x) ⇝ t (if x not free in t) ⟨fst(t), snd(t) ⟩⇝t r {t.t  | s.s} ⇝r We read x : A as “x is a proof of A". t[t'/x] means that in term t every free occurrence of x is substituted with t'. The usual capture-avoiding requirements for variable substitution are to be observed and α-equivalence of terms is assumed. A term that cannot be converted by either β- or η-conversion is in normal form. Since there is a correspondence between intuitionistic SC and intuitionistic ND, for every derivation in ND there must be a derivation in SC named by the same λ-term. This correspondence is of course not one-to-one, but many-to-one, i.e. for each proof in ND there are at least potentially different derivations in SC.[On the complications of such a correspondence and also on giving a term-annotated version of SC cf. e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Term-annotated sequent calculi can be found i.a. in <cit.> or <cit.>, from which our presentation is only a notational variant.] The following are our respective SC-rules, where we use the propositional fragment of an intuitionistic SC with independent contexts <cit.>. The reduction procedures remain the same as above in ND; β-reduction corresponds to the procedures needed to establish cut-elimination, while η-conversion corresponds to what may be called “identicals-elimination" <cit.> or “identity atomization" <cit.>[Showing that it is possible to get rid of axiomatic sequents with complex formulas and derive them from atomic axiomatic sequents. 
This is also part of cut-elimination but in principle those are separate procedures <cit.>.]: Term-annotated G0ip: Logical axiom: [Rf]x : A ⊢x : A Logical rules: [∧R]Γ, Δ⊢⟨s, t⟩: A ∧BΓ⊢s: A Δ⊢t: B [∧L]Γ, z: A ∧B ⊢s[[fst(z)/x]snd(z)/y] : CΓ, x: A, y : B ⊢s : C [∨R_1]Γ⊢s :A ∨BΓ⊢s:A [∨R_2]Γ⊢s:A ∨BΓ⊢s:B [∨L]Γ, Δ, z:A ∨B ⊢ {x.s  | y.t} : CΓ, x:A ⊢s:C Δ, y:B ⊢t:C [⊃R]Γ⊢λx.t:A ⊃BΓ, x:A ⊢t:B [⊃L]Γ, Δ, x:A ⊃B ⊢s[App(x, t)/y]:CΓ⊢t: A Δ, y:B ⊢s:C [L]x: ⊢abort(x): C Structural rules: Weakening: [W]Γ, x:A ⊢t:CΓ⊢t:C Contraction: [C]Γ, x : A ⊢t[x/y] : CΓ, x : A, y : A ⊢t : C The rule of cut [cut]Γ, Δ⊢s[t/x] : CΓ⊢t : D Δ, x : D ⊢s : C is admissible in G0ip. In the left operational rules as well as in the weakening rule we have the case that variables occur beneath the line that are not explicitly mentioned above the line. In these cases the variables must be either fresh or - together with the same type assignment - already occurring in the context Γ, Δ, etc. Same variables can only (but need not) be chosen for the same type, i.e., if a new type occurs in a proof, then a fresh variable must be chosen. If we would allow to chose the same variable for different types, i.e. for example to let x:A and x:B occur in the same derivation this would amount to assuming that arbitrarily different formulas have the same proof, which is not desirable. §.§ Identity of proofs and equivalence of derivations Figuring prominently in the literature on identity of proofs is a conjecture by Prawitz <cit.> that two derivations represent the same proof iff they are equivalent.[Prawitz gives credit for this conjecture to Martin-Löf. Cf. also Martin-Löf <cit.> on this issue, in his terminology “definitional equality".] This shifts the question of course to asking when two derivations can be considered equivalent. Using the equational theory of the λ-calculus is one way to provide an answer here: terms on the right and the left hand side of the β- and η-conversions are considered denotationally equal <cit.>. Hence, two derivations can be considered equivalent iff they are β-η-equal (cf. <cit.>, <cit.>, <cit.>).[There is some discussion about whether η-conversions are indeed identity-preserving. Martin-Löf <cit.> does not think so, for example. Prawitz <cit.> is not clearly decided but writes in the context of identity of proofs it would seem “unlikely that any interesting property of proofs is sensitive to differences created by an expansion". Widebäck <cit.>, relating to results in the literature on the typed λ-calculus like <cit.> and <cit.>, argues for β-η-equality to give the right account of identity of proofs and Girard <cit.> does the same, although he mentions, too, that η-equations “have never been given adequate status" compared to the β-equations.] The denotation is then seen to be referred to by the term that annotates the formula or sequent to be proven. We will call this the `end-term' henceforth so that we can cover and compare both ND and SC at once. So if we have two derivations with essentially different end-terms (in the sense that they are not belonging to the same equivalence class induced by β-η-conversion), we would say that they denote essentially different proofs. On the other hand, for two ND-derivations, where one reduces to the other (or both reduce to the same), e.g. via normalization, we have corresponding λ-terms, one β-reducible to the other (or both β-reducible to the same term). In this case we would say that they refer to the same proof. 
Prawitz <cit.> stresses that this seems evident since two derivations reducing to identical normal derivations must be seen as equivalent. Note that we can also have the case that two derivations of the same formula, which would look identical in a non-term-annotated version, here for example of ND, are distinguished on the grounds of our term annotation, like the following two derivations: 2 ND1p ⊃ (p ⊃ (p ∧ p)) ND2p ⊃ (p ⊃ (p ∧ p)) [⊃I^2]λy.λx.⟨x, y ⟩: p ⊃(p ⊃(p ∧p)) [⊃I^1]λx.⟨x, y ⟩: p ⊃(p ∧p) [∧I]⟨x, y ⟩: p ∧p[x : p]^1 [y : p]^2 [⊃I^2]λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p)) [⊃I^1]λy.⟨x, y ⟩: p ⊃(p ∧p) [∧I]⟨x, y ⟩: p ∧p[x : p]^2 [y : p]^1 The reason for this is that it is possible to generalize these derivations in different directions, which is made explicit by the variables. Hence, the first one can be generalized to a derivation of B ⊃ (A ⊃ (A ∧ B)), while the second one generalizes to A ⊃ (B ⊃ (A ∧ B)).[For a more detailed examination of generalization cf. <cit.> or <cit.>.] So, encoding derivations with λ-terms seems like a suitable method to clarify the underlying structure of proofs. There is one kind of conversion left, though, that needs consideration, namely what we will call permutative conversions, or also γ-conversions.[It goes under various other names, as well, like permutation/permuting conversions or commuting/commutative conversions. Some also prefer “reductions" but we will go with the - to us seemingly - more neutral “conversions". The term γ-conversions appears in <cit.>. Cf. about these conversions in general e.g. <cit.>: 251-259, <cit.>: Ch. 10, <cit.>, <cit.>.] They become relevant here because we have disjunction as part of our logical vocabulary. Prawitz <cit.> was the first to introduce these conversions. In the conjunction-implication-fragment of intuitionistic propositional logic derivations in normal form satisfy the subformula property, i.e. in a normal derivation 𝒟 of A from Γ each formula is either a subformula of A or of some formula in Γ. However, with the disjunction elimination rule this property is messed up, since we get to derive a formula C from A ∨ B which is not necessarily related to A or B. That is why, in order to recover the subformula property, permutation conversions are introduced, which can be presented in their most general form in the following way: D[∨E]C *A ∨BΓ *CΔ, A *CΘ, B ⇝ [∨E]D *A ∨BΓ D*CΔ, A D*CΘ, B Whether or not these are supposed to be taken into the same league as β- and η-conversions in matters of identity preservation of proofs is an even bigger dispute than the one mentioned concerning η-conversions. Prawitz <cit.> says that while there can be no doubt about the `proper reductions' having no influence on the identity of the proof, “[t]here may be some doubts concerning the permutative ∨E-[...]reductions in this connection" but does not go into that matter any further. Since he needs these reductions to prove his normalization theorem, it seems that he would be inclined not to have too many doubts about identity preservation under the permutative conversions. Girard <cit.>, on the other hand, does not seem to be convinced, as he says - considering an example of permutation conversion - that we are forced to identify “a priori different deductions" in these cases. Even though he accepts these conversions for technical reasons, he does not seem to be willing to really identify the underlying proof objects. Restall[Restall, G. (2017). Proof Terms for Classical Derivations. 
Article in progress: https://consequently.org/papers/proof-terms.pdf], however, analyzing derivations by assigning to them what he calls “proof terms" rather than λ-terms, considers the derivations above as merely distinct in representation but not in the underlying proof, which on his account is the same for both. What is more, he does so not only for technical but rather philosophical reasons, since he claims the flow of information from premises to conclusion to be essentially the same. Lindley <cit.> and Tranchini <cit.> both make a point about the connection between reductions and expansions (although they speak of certain kinds of “generalized" expansions) on the one hand and (“generalized") permutative conversions on the other, claiming that performing a (generalized) expansion on the left hand side of the conversion above followed by a reduction (and possibly α-conversion) just yields the right hand side. To conclude, if we only consider the ⊃-∧-fragment of intuitionistic propositional logic, β-η-equality is enough, but if we consider a richer vocabulary, it seems to us at least that there are substantial reasons to include permutative conversions in our equational theory.[The consequence for this paper would be of course to add “γ-conversions" to the list of relevant conversions in our definitions about normal forms, identity of denotation, etc.] We do not aim to make a final judgment on this issue here. Rather, when we have laid out our distinction about sense and denotation of proofs below, we will consider the matter again and show why it makes no essential difference for our purposes whether we include permutative conversions or not. § THE SENSE OF DERIVATIONS Let us spell out at this point what exactly we will consider as the sense and also again the denotation of a derivation in our approach: Definition of denotation: The denotation of a derivation in a system with λ-term assignment is referred to by the end-term of the derivation. Identity of denotation holds modulo belonging to the same equivalence class induced by the set of α-, β- and η-conversions of λ-terms, i.e. derivations that are denoted by terms belonging to the same equivalence class induced by these conversions are identical, they refer to the same proof object.[We use the more accurate formulation of “belonging to the same equivalence class" here instead of the formulation we used before of two terms “having the same normal form". The reason for this is that while these two properties coincide for most standard cases, they do not necessarily concur when it comes to Lindley's “general permutative conversions" or also to SC in general because in these cases the confluence property is not guaranteed. We want to thank one of the anonymous referees for indicating this important point.] Definition of sense: The sense of a derivation in a system with λ-term assignment consists of the set[One could also consider the question whether multi-sets are an even better choice here, which would of course yield a much stronger differentiation of senses. The reason why we consider sets instead of multi-sets is that to us the distinctions brought about by multi-sets, by e.g. a variable occurrence more or less, do not seem to go hand in hand with substantial differences in how inferences are built up.] of λ-terms that occur within the derivation. Only a derivation made up of applications of correct inference rules, i.e. rules that have reduction procedures, can have sense. 
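To make these two definitions concrete, consider the following toy Python sketch, which represents λ-terms as nested tuples, computes denotations as β-η-normal forms, and collects senses as sets of terms. It is an illustration only, not part of the formal apparatus of this paper: it covers just the ⊃-∧-fragment, omits ∨ and hence γ-conversions, and uses naive, capture-unsafe substitution, which is harmless for the examples shown.

# Toy encoding of lambda-terms as nested tuples:
#   ("var", x), ("lam", x, body), ("app", f, a),
#   ("pair", s, t), ("fst", t), ("snd", t)

def subst(t, x, s):
    # t[s/x]; naive substitution, assumes no variable capture.
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
    return (tag,) + tuple(subst(u, x, s) for u in t[1:])

def free_in(x, t):
    tag = t[0]
    if tag == "var":
        return t[1] == x
    if tag == "lam":
        return t[1] != x and free_in(x, t[2])
    return any(free_in(x, u) for u in t[1:])

def normalize(t):
    # Beta-eta normal form (terminates for simply typed terms).
    tag = t[0]
    if tag == "var":
        return t
    if tag == "lam":
        t = ("lam", t[1], normalize(t[2]))
    else:
        t = (tag,) + tuple(normalize(u) for u in t[1:])
    if t[0] == "app" and t[1][0] == "lam":                     # beta
        return normalize(subst(t[1][2], t[1][1], t[2]))
    if t[0] == "fst" and t[1][0] == "pair":                    # beta
        return t[1][1]
    if t[0] == "snd" and t[1][0] == "pair":                    # beta
        return t[1][2]
    if (t[0] == "lam" and t[2][0] == "app"                     # eta
            and t[2][2] == ("var", t[1]) and not free_in(t[1], t[2][1])):
        return t[2][1]
    if (t[0] == "pair" and t[1][0] == "fst"                    # eta
            and t[2][0] == "snd" and t[1][1] == t[2][1]):
        return t[1][1]
    return t

def same_denotation(t1, t2):
    # Identity of denotation: membership in the same beta-eta equivalence class.
    return normalize(t1) == normalize(t2)

def sense(derivation_terms):
    # Sense: the set of terms occurring within a derivation.
    return set(derivation_terms)

# The end-term fst(<lam x.x, lam y.y>) of the non-normal derivation discussed
# in the next subsection reduces to lam x.x, so both derivations denote the
# same proof object:
ID_P = ("lam", "x", ("var", "x"))
ID_Q = ("lam", "y", ("var", "y"))
print(same_denotation(("fst", ("pair", ID_P, ID_Q)), ID_P))    # True

# Sense of the ND derivation of p -> (p -> (p /\ p)) considered later:
PAIR_XY = ("pair", ("var", "x"), ("var", "y"))
print(len(sense([("var", "x"), ("var", "y"), PAIR_XY,
                 ("lam", "y", PAIR_XY), ("lam", "x", ("lam", "y", PAIR_XY))])))  # 5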
§.§ Change of sense due to reducibility Concerning a distinction between sense and denotation in the context of proofs, the rare cases where this is mentioned at all deal with derivations one of which is reducible to the other or with λ-terms which are β-convertible to the same term in normal form (cf. <cit.>, <cit.>, Restall 2017, p. 6). Since Tranchini is the only one to spell out the part about sense in detail, we will briefly summarize his considerations. As mentioned above, in his account, for a derivation to have sense means that it is made up of applications of correct inference rules. The question to be asked then is of course what makes up correct inference rules? Tranchini's answer is that inference rules are correct if they have reduction procedures available, i.e. a procedure to eliminate any maximal formula resulting from an application of an introduction rule immediately followed by an elimination rule of the same connective. From a PTS point of view, applying reduction procedures can be seen as a way of interpreting the derivation because it aims to bring the derivation to a normal form, i.e. the form in which the derivation represents the proof it denotes most directly <cit.>.[Tranchini does not restrict his examination to derivations that normalize, though, but to the contrary, uses it to analyze non-normalizable derivations, like paradoxical ones.] So the reduction procedures are the instructions telling us how to identify the denotation of the derivation, which for Tranchini means that they give rise to the sense of the derivation. If we have two derivations denoting the same proof, for example, one in normal form and the other in a form that can be reduced to the former, we could say in Fregean terminology that they have the same denotation but differ in their sense because they denote the proof in different ways, one directly, the other indirectly. So, we can take as an example the following two derivations, one in normal and one in non-normal form: NDp ⊃ p =1.2em [r]⊃I [x : p] λx.x: p ⊃p NDnon-normal p ⊃ p =1.2em [r]∧E [r]∧I [r]⊃I [x : p] λx.x: p ⊃p [r]⊃I [y : q] λy.y: q ⊃q ⟨λx.x, λy.y ⟩: (p ⊃p) ∧(q ⊃q) fst(⟨λx.x, λy.y ⟩): p ⊃p The latter obviously uses an unnecessary detour via the maximal formula (p ⊃ p) ∧ (q ⊃ q), which is introduced by conjunction introduction and then immediately eliminated again, thus, producing different and more complex terms than the former derivation. The derivation can be easily reduced to the former, though, which can be also seen by β-reducing the term denoting the formula to be proven: fst(⟨λx.x, λy.y ⟩) ⇝λx.x We can also give an example analogous to the one above, where a non-normal term (highlighted in bold) in SC is created by using the cut rule:[Note however, that the connection between the application of cut and the resulting non-normal term is necessary but not sufficient, i.e. there can be applications of cut not creating a non-normal term. A non-normal term is produced if both occurrences of the cut formula in the premises are principal.] 
SC⊢ (p ∧ p) ⊃ (p ∨ p) =1.2em [r]⊃R [r]∨R [r]∧L [r]W [r]Rf z : p ⊢z : p z : p, x : p ⊢z : p y : p ∧p ⊢fst(y) : p y : p ∧p ⊢fst(y) : p ∨p ⊢λy.fst(y) : (p ∧p) ⊃(p ∨p) SCcut⊢ (p ∧ p) ⊃ (p ∨ p) =1.2em [r]⊃R [r]∨R [r]cut [r]C [r]∧R [r]∧L [r]W [r]Rf z : p ⊢z : p z : p, x : p ⊢z : p y : p ∧p ⊢fst(y) : p [r]∧L [r]W [r]Rf z : p ⊢z : p x : p, z : p ⊢z : p y : p ∧p ⊢snd(y) : p y : p ∧p, y : p ∧p ⊢⟨fst(y), snd(y)⟩: p ∧p y : p ∧p ⊢⟨fst(y), snd(y)⟩: p ∧p [r]∧L [r]W [r]Rf z : p ⊢z : p z : p, x : p ⊢z : p y : p ∧p ⊢fst(y) : p y : p ∧p ⊢fst⟨fst(y), snd(y)⟩: p y : p ∧p ⊢fst⟨fst(y), snd(y)⟩ : p ∨p ⊢λy.fst⟨fst(y), snd(y)⟩ : (p ∧p) ⊃(p ∨p) λy.fst⟨fst(y), snd(y)⟩ ⇝λy.fst(y) In this case again the two derivations are essentially the same because the latter can be reduced to the former by eliminating the application of the cut rule. Again, the proof object they represent is thus the same, only the way of making the inference, represented by the different terms occurring within the derivation, differs, i.e. the sense is different. §.§ Change of sense due to rule permutations So far we only considered the case in which there is an identity of denotation but a difference in sense of derivations due to one being represented by a λ-term in non-normal form reducible to one in normal form. However, we want to show that this is not the only case where we can make such a distinction. This is also the reason why our approach differs from Tranchini's (who works solely in an ND system) in how we grasp the notion of sense of a derivation. Following Tranchini, the derivation having sense at all depends on there being reduction procedures available for the rules that are applied in it. Since we are also interested in a comparison of sense-and-denotation relations between ND and SC systems, our approach requires that there are reduction procedures available for the created terms. Thereby we will be able to cover both systems at once. Encoding the proof systems with λ-terms also makes the connection between changing the order of the rule applications and the sense-and-denotation distinction transparent, which is the other case we want to cover. In ND with disjunction rules it is possible to have rule permutations producing derivations with end-terms identifiable by means of the permutative conversions. In SC, however, there are more cases of rule permutations possible. When the left disjunction rule is involved, this also leads to different - though γ-equal - terms; with the left conjunction or implication rule the end-term remains completely unchanged. Consider e.g. 
the following three derivations in SC of the same sequent ⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)): SC_1⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)) =1.2em [r]⊃R [r]∨L [r]C [r]∧R [r]∧L [r]W [r]∨R [r]Rf q ⊢q q ⊢p ∨q q, r ⊢p ∨q q ∧r ⊢p ∨q [r]∧L [r]W [r]∨R [r]Rf r ⊢r r ⊢p ∨r q, r ⊢p ∨r q ∧r ⊢p ∨r q ∧r, q ∧r ⊢(p ∨q) ∧(p ∨r) q ∧r ⊢(p ∨q) ∧(p ∨r) [r]C [r]∧R [r]∨R [r]Rf p ⊢p p ⊢p ∨q [r]∨R [r]Rf p ⊢p p ⊢p ∨r p, p ⊢(p ∨q) ∧(p ∨r) p ⊢(p ∨q) ∧(p ∨r) (q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r) ⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r)) SC_2⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)) =1.2em [r]⊃R [r]∨L [r]C [r]∧R [r]∨R [r]∧L [r]W [r]Rf q ⊢q q, r ⊢q q ∧r ⊢q q ∧r ⊢p ∨q [r]∨R [r]∧L [r]W [r]Rf r ⊢r q, r ⊢r q ∧r ⊢r q ∧r ⊢p ∨r q ∧r, q ∧r ⊢(p ∨q) ∧(p ∨r) q ∧r ⊢(p ∨q) ∧(p ∨r) [r]C [r]∧R [r]∨R [r]Rf p ⊢p p ⊢p ∨q [r]∨R [r]Rf p ⊢p p ⊢p ∨r p, p ⊢(p ∨q) ∧(p ∨r) p ⊢(p ∨q) ∧(p ∨r) (q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r) ⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r)) SC_3⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)) =1.2em [r]⊃R [r]C [r]∧R [r]∨L [r]∧L [r]W [r]∨R [r]Rf q ⊢q q ⊢p ∨q q, r ⊢p ∨q q ∧r ⊢p ∨q [r]∨R [r]Rf p ⊢p p ⊢p ∨q (q ∧r) ∨p ⊢p ∨q [r]∨L [r]∧L [r]W [r]∨R [r]Rf r ⊢r r ⊢p ∨r q, r ⊢p ∨r q ∧r ⊢p ∨r [r]∨R [r]Rf p ⊢p p ⊢p ∨r (q ∧r) ∨p ⊢p ∨r (q ∧r) ∨p, (q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r) (q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r) ⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r)) The difference between SC1 and SC2 (highlighted in bold) is that the order of applying the right disjunction rule and the left conjunction rule is permuted. The difference between SC1 and SC3 (highlighted with underlining) is that the order of applying the right conjunction rule and the left disjunction rule is permuted. The order of applying the right disjunction rule and the left conjunction rule stays fixed this time. Encoded with λ-terms, though, we see that in the first case, comparing SC1 and SC2, the permutation of rule applications produces exactly the same end-term. Both derivations have the same end-term, namely: λ u. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} SC_1⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)) =1.2em [r]⊃R [r]∨L [r]C [r]∧R [r]∧L [r]W [r]∨R [r]Rf y : q ⊢y : q y : q ⊢y : p ∨q y : q, z : r ⊢y : p ∨q v : q ∧r ⊢fst(v) : p ∨q [r]∧L [r]W [r]∨R [r]Rf z : r ⊢z : r z : r ⊢z : p ∨r y : q, z : r ⊢z : p ∨r v : q ∧r ⊢snd(v): p ∨r v : q ∧r, v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r) v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r) [r]C [r]∧R [r]∨R [r]Rf x : p ⊢x : p x : p ⊢x : p ∨q [r]∨R [r]Rf x : p ⊢x : p x : p ⊢x : p ∨r x : p, x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r) x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r) u : (q ∧r) ∨p ⊢ {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : (p ∨q) ∧(p ∨r) ⊢λu. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r)) SC_2⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)) =1.2em [r]⊃R [r]∨L [r]C [r]∧R [r]∨R [r]∧L [r]W [r]Rf y : q ⊢y : q y : q, z : r ⊢y : q v : q ∧r ⊢fst(v) : q v : q ∧r ⊢fst(v) : p ∨q [r]∨R [r]∧L [r]W [r]Rf z : r ⊢z : r y : q, z : r ⊢z : r v : q ∧r ⊢snd(v) : r v : q ∧r ⊢snd(v): p ∨r v : q ∧r, v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r) v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r) [r]C [r]∧R [r]∨R [r]Rf x : p ⊢x : p x : p ⊢x : p ∨q [r]∨R [r]Rf x : p ⊢x : p x : p ⊢x : p ∨r x : p, x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r) x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r) u : (q ∧r) ∨p ⊢ {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : (p ∨q) ∧(p ∨r) ⊢λu. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r)) Considering the second comparison between SC1 and SC3 the situation is different: here the permutation of rule applications leads to a different end-term. 
In the end-term for SC1 and SC2 the pairing operation is embedded within the case expression, whereas in the end-term for SC3 the case expression is embedded within the pairing: λ u.⟨ {v.fst(v)  | x.x},  {v.snd(v) | x.x}⟩ SC_3⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)) =1.2em [r]⊃R [r]C [r]∧R [r]∨L [r]∧L [r]W [r]∨R [r]Rf y : q ⊢y : q y : q ⊢y : p ∨q y : q, z : r ⊢y : p ∨q v : q ∧r ⊢fst(v) : p ∨q [r]∨R [r]Rf x : p ⊢x : p x : p ⊢x : p ∨q u : (q ∧r) ∨p ⊢ {v.fst(v)  | x.x} : p ∨q [r]∨L [r]∧L [r]W [r]∨R [r]Rf z : r ⊢z : r z : r ⊢z : p ∨r y : q, z : r ⊢z : p ∨r v : q ∧r ⊢snd(v) : p ∨r [r]∨R [r]Rf x : p ⊢x : p x : p ⊢x : p ∨r u : (q ∧r) ∨p ⊢ {v.snd(v)  | x.x}: p ∨r u : (q ∧r) ∨p, u : (q ∧r) ∨p ⊢⟨ {v.fst(v)  | x.x},  {v.snd(v)  | x.x}⟩: (p ∨q) ∧(p ∨r) u : (q ∧r) ∨p ⊢⟨ {v.fst(v)  | x.x},  {v.snd(v)  | x.x}⟩: (p ∨q) ∧(p ∨r) ⊢λu.⟨ {v.fst(v)  | x.x},  {v.snd(v)  | x.x}⟩: ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r)) When we take a look at how the term-annotated rules must be designed in order to have a correspondence to the respective rules in ND, we see why some permutations of rule applications lead to different end-terms, while others do not; and why SC is in general more flexible in this respect than ND. In SC the left conjunction rule as well as the left implication rule are substitution operations, i.e. they can change their place in the order without affecting the basic term structure because only in the inner term structure terms are substituted with other terms.[For ⊃L the only exception is when an application of this rule is permuted with an application of ∨L, which creates a different, though γ-convertible term.] In ND, on the other hand, there are no substitution operations used in the term assignment, i.e. for each rule application a new basic term structure is created. How is this related to the distinction between sense and denotation? In cases like SC1 vs. SC2 the way the inference is given differs, which can also be seen in different terms annotating the formulas occurring within the derivation: with otherwise identical terms in the two derivations y and z only occur in SC1, while fst(v) and snd(v) only occur in SC2. However, the resulting end-term stays the same, thus, we would describe the difference between these derivations as a difference in sense but not in denotation. In other cases, when disjunction elimination or the left disjunction rule is involved, permutation of rule applications can lead to a different end-term, as we see above in SC1 vs. SC3. Whether this corresponds to a difference in denotation depends on whether we accept γ-conversions to be identity-preserving. What all cases have in common, though, is that rule permutation always leads to a difference in sense of the given derivations because the sets of terms occurring within the derivations differ from each other. §.§ Philosophical motivation Let us have a look at how the Fregean conception of sense is received in the literature in order to show the philosophical motivation for adopting such a definition of sense for derivations. According to Dummett <cit.>, Fregean sense is to be considered as a procedure to determine its denotation.[This idea of sense as procedures also occurs in more recent publications like <cit.> or <cit.>.] Girard <cit.>, in a passage about sense and denotation and the relation between proofs and programs, mentions that the sense is determined by a “sequence of instructions" and when we see in this context terms as representing programs and “the purpose of a program [...] to calculate [...] 
its denotation" (ibid., p. 17), then it seems plausible to view the terms occurring within the derivation, decorating the intermediate steps in the construction of the complex end-term that decorates the conclusion, as the sense of that derivation. Tranchini holds the reduction procedures to be the sense because these `instructions' lead to the term in normal form. However, in our framework - because we do not only consider normal vs. non-normal cases - it seems more plausible to look at the exact terms occurring within the derivations and view them as representing the steps in the process of construction encoding how the derivation is built up and leading us to the denotation, the end-term. For us it is therefore only a necessary requirement for the derivation to have sense to contain only terms for which reduction procedures are available but it does not make up the sense. In the case of rule permutation we can then say that the proof is essentially the same but the way it is given to us, the way of inference, differs: i.e. the sense differs. This can be read off from the set of terms that occur within the derivation: they end up building the same end-term, but the way it is built differs, the procedures to determine the denotation differ. Thus, this allows us to compare differences in sense within one proof system as well as over different proof systems. Troelstra and Schwichtenberg <cit.> e.g. give an example of two derivations in SC producing the same end-term in different ways to show that just from the variables and the end-term we cannot read off how the derivation is built up:[For simplicity we omit the weakening steps that would strictly seen have to precede the applications of the ∧L-rule.] SC1⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q)) =1.2em [r]⊃R [r]⊃R [r]∧L [r]∧L [r]∧R [r]Rf x : p ⊢x : p [r]Rf y : q ⊢y : q x : p, y : q ⊢⟨x, y ⟩: p ∧q x : p, z : q ∧r ⊢⟨x, fst(z) ⟩: p ∧q u: s ∧p, z : q ∧r ⊢⟨snd(u), fst(z) ⟩: p ∧q u : s ∧p ⊢λz.⟨snd(u), fst(z) ⟩: (q ∧r) ⊃(p ∧q) ⊢λu.λz.⟨snd(u), fst(z) ⟩: (s ∧p) ⊃((q ∧r) ⊃(p ∧q)) SC2⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q)) =1.2em [r]⊃R [r]⊃R [r]∧L [r]∧L [r]∧R [r]Rf x : p ⊢x : p [r]Rf y : q ⊢y : q x : p, y : q ⊢⟨x, y ⟩: p ∧q u : s ∧p, y: q ⊢⟨snd(u), y ⟩: p ∧q u: s ∧p, z : q ∧r ⊢⟨snd(u), fst(z) ⟩: p ∧q u : s ∧p ⊢λz.⟨snd(u), fst(z) ⟩: (q ∧r) ⊃(p ∧q) ⊢λu.λz.⟨snd(u), fst(z) ⟩: (s ∧p) ⊃((q ∧r) ⊃(p ∧q)) The senses of these derivations would be the following: Sense of SC1: {x, y, z, u, ⟨ x, y ⟩, ⟨ x, fst(z) ⟩, ⟨ snd(u), fst(z) ⟩, λ z.⟨ snd(u), fst(z) ⟩, λ u.λ z.⟨ snd(u), fst(z) ⟩} Sense of SC2: {x, y, z, u, ⟨ x, y ⟩, ⟨ snd(u), y ⟩, ⟨ snd(u), fst(z) ⟩, λ z.⟨ snd(u), fst(z) ⟩, λ u.λ z.⟨ snd(u), fst(z) ⟩} The two sets only differ with regard to the underlined terms, otherwise they are identical. Thus, they only differ in the order in which the two left conjunction rules are applied. For the resulting end-term this is inessential, but we can see that when taking the sense, and not only the end-terms, i.e. the denotation, into account, it is indeed possible to read off the structure of the derivations. As noted above (examples on p. 6), the term annotation of the calculi makes this structure of derivations explicit so that we can differentiate between derivations which would otherwise look identical. As several authors point out, this is a desirable feature if one is not only interested in mere provability but wants to study the structure of the derivations in question (cf. 
<cit.>, <cit.>) and also, for simplicity, if one wants to compare proof systems of ND and SC with each other <cit.>. Since we are interested in both of these points, it seems the right choice for our purposes to consider the annotated versions of the calculi and that is also why these annotated versions are indeed needed for our notions of sense and denotation. Of course, one could argue that the underlying structure is still the same in the non-annotated versions and can be made explicit by other means, too, like showing the different generalizations of the derivations, but still, we do not see how in these calculi our notions could be easily applied. Another issue that needs to be considered is the one of identity of senses, i.e. synonymy. Therefore, we want to extend our definition of sense given above with an addition: If a sense-representing set can be obtained from another by uniformly replacing (respecting the usual capture-avoiding conventions) any occurrence of a variable, bound or free, by another variable of the same type, they express the same sense. What we ensure with this point is just that it does not (and should not) matter which variables one chooses for which proposition as long as one does it consistently. So, it does not make a difference whether we have 2 ND1p ⊃ (q ⊃ p) =1.2em [r]⊃I [r]⊃I [x : p] λz.x: q ⊃p λx. λz.x: p ⊃(q⊃p) Sense1: {x, λ z.x, λ x. λ z.x} or 2 ND2p ⊃ (q ⊃ p) =1.2em [r]⊃I [r]⊃I [y : p] λz.y: q ⊃p λy. λz.y: p ⊃(q⊃p) Sense2: {y, λ z.y, λ y. λ z.y} Sense1 and Sense2 represent the same sense. Or to give another example (pointed to by one of the anonymous referees) where we have free variables occurring within the derivation but not appearing in the end-term: If one would replace all occurrences of the free variable y by the variable w in derivation SC1⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q)) (cf. above), then this would make no difference to the sense according to our definition since the sense-representing sets would be obtained from replacing y by w. This also fits the Fregean criterion of two sentences' identical sense, as Sundholm <cit.> depicts it within a broader analysis: two propositions express the same sense if it is not possible to hold different epistemic attitudes towards them, i.e. “if one holds the one true, one also must hold the other one true, and vice versa". Whereas, if we have two sentences which only differ in two singular terms, referring to the same object but differing in sense, we can easily hold the one sentence to be true, while thinking the other is false, if we do not know that they are referring to the same object. With proofs it is the same: Looking at ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p) we may not know whether the derivation is valid or not, we do know, however, that if one is a valid derivation then so is the other. With derivations differing in sense this is not so straightforward. For Frege this point of considering cases where intensionality is directed towards sentences was crucial to develop his notion of sense, so the question arises how we can explain cases of intensionality directed towards proofs with our notions of sense and denotation. Let us suppose we have two denotationally-identical proofs which are represented by two different derivations 𝒟 and 𝒟'. In this case it could happen that a (rational) person believes that derivation 𝒟 is valid but does not believe that derivation 𝒟' is valid. How can we account for that? One explanation would be of course to point to the difference in linguistic representation. 
After all, it can just be the case that one way of writing down a proof is more accessible to the person than another (they may not be familiar with a certain proof system, for example). This would amount to letting the linguistic representation, the signs, collapse with the sense of a derivation. However, then we would have no means to distinguish this case from cases in which we want to argue that it is not justified for a rational person to have different propositional attitudes towards propositions which are about derivations differing insignificantly from each other, like in the cases of ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p) above. For Frege <cit.> the referent of an expression in an intensional context is not its customary referent, i.e. the object it refers to or the truth value in the case of sentences, but its customary sense. Here the situation is the same: What is referred to in such a setting, when speaking about the attitudes of a person towards propositions about derivations, is not the proof objects (which are identical in our situation) but their senses, which are in this context represented by the sets of terms encoding the steps of construction. It seems plausible then to say that when the construction steps differ in two derivations, a person can have different attitudes towards propositions about them, because the different construction steps may lead to this person grasping the one derivation, while not understanding the other. § ANALOGY TO FREGE'S CASES Let us finally compare how our conception of sense and denotation in the context of proofs fits the distinction Frege came up with for singular terms and sentences. We can have the following two cases with Frege's distinction: firstly (cf. <cit.>), there can be different signs corresponding to exactly one sense (and then of course also only one denotation). In the case of singular terms an example would be “Gottlob's brother” and “the brother of Gottlob". The sense, the way the denoted individual object is given to us, is the same because there is only a minor grammatical difference between the two expressions. More frequently, this occurs in comparing different languages, though, taking singular terms which express exactly the same sense only using different words, like “the capital of France" and “die Hauptstadt Frankreichs". In the case of sentences an example would be changing from an active to a passive construction without changing the emphasis of the sentence; an example from Frege is the following: “M gave document A to N", “Document A was given to N by M" <cit.>. 
In the case of proofs, finally, an example would be the following case: ND(p∨ p) ⊃ (p∧ p) =1.2em [r]⊃I^3 [r]∧I [r]∨E^1 [y : p ∨p]^3 [x : p]^1 [x : p]^1  {x.x | x.x} : p [r]∨E^2 [y : p ∨p]^3 [x : p]^2 [x : p]^2  {x.x | x.x} : p ⟨ {x.x | x.x},  {x.x | x.x}⟩: p ∧p λy.⟨ {x.x | x.x},  {x.x | x.x}⟩ : (p ∨p) ⊃(p ∧p) SC⊢ (p∨ p) ⊃ (p ∧ p) =1.2em [r]⊃R [r]C [r]∧R [r]∨L [r]Rf x : p ⊢x : p [r]Rf x : p ⊢x : p y : p ∨p ⊢ {x.x | x.x} : p [r]∨L [r]Rf x : p ⊢x : p [r]Rf x : p ⊢x : p y : p ∨p ⊢ {x.x | x.x}: p y : p ∨p , y : p ∨p ⊢⟨ {x.x | x.x},  {x.x | x.x} ⟩: p ∧p y : p ∨p ⊢⟨ {x.x | x.x},  {x.x | x.x}⟩: p ∧p ⊢λy.⟨ {x.x | x.x},  {x.x | x.x}⟩: (p ∨p) ⊃(p ∧p) Sense: {x, y,  {x.x | x.x}, ⟨ {x.x | x.x},  {x.x | x.x}⟩, λ y.⟨ {x.x | x.x},  {x.x | x.x}⟩} Or to give another example: NDp ⊃ (p ⊃ (p ∧ p)) =1.2em [r]⊃I^2 [r]⊃I^1 [r]∧I [x : p]^2 [y : p]^1 ⟨x, y ⟩: p ∧p λy.⟨x, y ⟩: p ⊃(p ∧p) λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p)) SC⊢ p ⊃ (p ⊃ (p ∧ p)) =1.2em [r]⊃R [r]⊃R [r]∧R [r]Rf x : p ⊢x : p [r]Rf y : p ⊢y : p x : p, y : p ⊢⟨x, y ⟩: p ∧p x : p ⊢λy.⟨x, y ⟩: p ⊃(p ∧p) ⊢λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p)) Sense: {x, y, ⟨ x, y ⟩, λ y.⟨ x, y ⟩, λ x.λ y.⟨ x, y ⟩} In these cases derivations can consist of different signs, namely by having one representation in SC and one in ND, which do not differ in sense nor in denotation, since they both contain exactly the same terms and produce the same end-term. This comparison between different proof systems seems to fit nicely with Frege's <cit.> comment on “the same sense ha[ving] different expressions in different languages". However, as we have seen above with the examples ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p), this case can also occur within the same proof system. One could wonder whether there should not be a differentiation between the senses of the derivations in the first example since it seems that different rules are applied: in SC⊢ (p∨ p) ⊃ (p ∧ p) we have an application of contraction, which we do not have in ND(p∨ p) ⊃ (p∧ p). This would also question whether our definition of sense distinguishes and identifies the right amount of cases. We do believe that this is the case, though, because in the first example, where there is an application of the contraction rule in SC, there is also a multiple assumption discharge in the ND-derivation, which is generally seen as the corresponding procedure, just as cases of vacuous discharge of assumptions in ND correspond to the application of weakening in SC. So just as in different languages of course not exactly the same expressions are used, here too, the rules differ from ND to SC but since the corresponding procedures are used, one can argue that the sense does not differ for that reason. Another case that can occur according to Frege (ibid.) is that we have one denotation, i.e. one object a sign refers to, but different senses. An example for this would be his famous “morning star" and “evening star" comparison, where both expressions refer to the same object, the planet Venus, but the denoted object is given differently. On the sentence level this would amount to exchanging singular terms in a sentence by ones which have the same denotation: “The morning star is the planet Venus" and “The evening star is the planet Venus". The denotation of the sentence - with Frege: its truth value - thus stays the same, only the sense of it differs, the information is conveyed differently to us. 
For our proof cases we can say that this case is given when we have syntactically different derivations, be it in one or in different proof systems, which have end-terms belonging to the same equivalence class induced by the set of α-, β- and η-conversions. Thus, examples would be corresponding proofs in ND and SC, which share the same end-term, but contain different terms occurring within the derivations. The reason for this to happen seems that in SC often more variables are necessary than in ND. If we compare derivations within ND, one definite case in which we have the same denotation but a different sense is between equivalent but syntactically distinct derivations, e.g. non-normal and normal derivations, one reducible to the other. Another case up for debate would be the one with rule permutations due to disjunction elimination. Within SC we can have two cases: one due to rule permutation, one due to applications of cut. For the first case, where the inference could be given in a different way, although ending on the same term, we gave examples above (cf. p. 12 and 14f.). However, it is worth mentioning that our distinction still captures the usual distinction, the second case, where it is said that two derivations, one containing cut and the other one in cut-free form (as a result of cut-elimination applied to the former), have the same denotation but differ in sense: SC⊢ (p∧ p) ⊃ (p∨ p) =1.2em [r]⊃R [r]∨R [r]∧L [r]W [r]Rf z : p ⊢z : p z : p, x : p ⊢z : p y : p∧p ⊢fst(y) : p y : p ∧p ⊢fst(y) : p ∨p ⊢λy.fst(y) : (p ∧p) ⊃(p ∨p) Sense: {z, x, y, fst(y), fst(y), λ y.fst(y)} SCcut⊢ (p∧ p) ⊃ (p∨ p) =1.2em [r]⊃R [r]cut [r]∧L [r]W [r]Rf z : p ⊢z : p z : p, x : p ⊢z : p y : p ∧p ⊢fst(y) : p [r]∨R [r]Rf z : p ⊢z : p z : p ⊢z: p ∨p y : p ∧p ⊢fst(y) : p ∨p ⊢λy.fst(y): (p ∧p) ⊃(p ∨p) Sense: {z, x, y, fst(y), z, fst(y), λ y.fst(y)} As mentioned above (fn 14), cut does not need to create a non-normal term, as it is the case here, but still any application of cut will necessarily change the sense of a derivation as opposed to its cut-free form. Finally, cases that need to be avoided in a formal language according to Frege <cit.> would be to have one sign, corresponding to different senses, or on the other hand, one sense corresponding to different denotations. As he mentions, these cases of course occur in natural languages but should not happen in formal ones, so it should also not be possible in our present context, for sure. Fortunately, this cannot happen in the context of our annotated proof systems, either, since the signs (taken to be the derivation as it is written down) always express at most one sense in our annotated system, and likewise the sense always yields a unique denotation since the end-term is part of the sense-denoting set.[Another question would be whether there can be signs without any sense at all. Frege <cit.> dismisses this case, as well, with a remark that we need at least the requirement that our expressions are “grammatically well-formed". Tranchini <cit.> gives a good analogy pointing to the notorious connective playing this role in the case of proofs.] § CONCLUSION The context in which Frege considered sense and denotation was the context of identity. Likewise, we argued in this paper, if we use term-annotated calculi, we can also say something about proof identity: identity of proofs over different calculi or within the same calculus consists in having end-terms that belong to the same equivalence class induced by the set of α-, β- and η-conversions. 
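To make this identity criterion concrete, the following minimal sketch (in Python, and entirely our own illustrative encoding rather than part of the calculi above) represents λ-terms as nested tuples, β-normalizes them, and compares the results up to renaming of bound variables via a de Bruijn translation; η- and permutation conversions as well as the product and sum constructors are left out for brevity. Denotational identity is then equality of normalized end-terms, while sense is compared as the set of terms occurring in a derivation, here simply written down by hand.

```python
# Minimal sketch (our own tuple encoding): λ-terms as ('var', x), ('lam', x, body),
# ('app', f, a); β-normalization by normal-order reduction; α-equivalence via a
# de Bruijn translation. Denotation = α/β-equivalence class of the end-term;
# sense = the set of terms occurring in a derivation.

def subst(t, x, s):
    # Substitution without renaming; we assume all bound variables are distinct,
    # which holds for the small examples below.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def normalize(t):
    # Normal-order β-reduction; terminates for simply typed terms.
    tag = t[0]
    if tag == 'app':
        f = normalize(t[1])
        if f[0] == 'lam':                       # β-redex: (λx.b) a → b[x := a]
            return normalize(subst(f[2], f[1], t[2]))
        return ('app', f, normalize(t[2]))
    if tag == 'lam':
        return ('lam', t[1], normalize(t[2]))
    return t

def debruijn(t, env=()):
    # α-invariant form: bound variables become indices, free ones keep their name.
    tag = t[0]
    if tag == 'var':
        return ('idx', env.index(t[1])) if t[1] in env else t
    if tag == 'lam':
        return ('lam', debruijn(t[2], (t[1],) + env))
    return ('app', debruijn(t[1], env), debruijn(t[2], env))

def same_denotation(t1, t2):
    return debruijn(normalize(t1)) == debruijn(normalize(t2))

# End-terms of ND1 and ND2 above, λx.λz.x and λy.λz.y: same denotation.
nd1 = ('lam', 'x', ('lam', 'z', ('var', 'x')))
nd2 = ('lam', 'y', ('lam', 'z', ('var', 'y')))
assert same_denotation(nd1, nd2)

# A derivation with a detour, end-term (λu.u)(λx.λz.x), still denotes the same proof ...
detour = ('app', ('lam', 'u', ('var', 'u')), nd1)
assert same_denotation(detour, nd1)

# ... but its sense, the set of terms occurring in the derivation, differs.
sense_direct = {'x', 'λz.x', 'λx.λz.x'}
sense_detour = {'x', 'λz.x', 'λx.λz.x', 'u', 'λu.u', '(λu.u)(λx.λz.x)'}
assert sense_direct != sense_detour
```

The sets in the last lines mirror the sense-representing sets used throughout; for the full language considered here one would additionally have to handle pairs, projections, injections and the corresponding conversions.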
In ND this can happen when we have the same proof in normal and non-normal form, in SC this can happen when we have the same proof using cut and in cut-free form but also when there are forms of rule permutations where an application of the ∧L-rule or the ⊃L-rule switches place with another rule. Including disjunction in our language creates for both calculi the additional question of whether rule permutations including disjunction elimination (resp. the left disjunction rule) lead to a different proof, or whether these proofs should be identified. We are more interested in sense, however, and here we can conclude that what in all these cases changes is the sense of the derivation in question. Finally, considering the question of identity of sense, i.e. synonymy, and trying to follow Frege's conception on this matter, too, we can say the following: if two derivations are supposed to be identical in sense, this means that the way the inference is given is essentially the same, so the set of terms building up the end-term must be the same. The end-term itself does not necessarily tell us anything about the structure of the proof. Sense, on the other hand, is more fine-grained in that the set of terms occurring within the derivation reflects how the derivation is built up. Especially in SC, where we can have different orders of rule applications leading up to the same end-term, the sense gives us means to distinguish on a more fine-grained level. BarendregtGhilezan Barendregt, H., & Ghilezan, S. (2000). Lambda terms for natural deduction, sequent calculus and cut elimination. Journal of Functional Programming, 10(1), 121–134. Groote De Groote, P. (1999). On the Strong Normalisation of Natural Deduction with Permutation-Conversions. In P. Narendran, & M. Rusinowitch (Eds), Rewriting Techniques and Applications: RTA 1999 (pp. 45–59). Berlin/Heidelberg: Springer. Dosen2003 Došen, K. (2003). Identity of Proofs Based on Normalization and Generality. Bulletin of Symbolic Logic, 9, 477–503. Dosen2008 Došen, K. (2008). Cut Elimination in Categories. Springer. Dummett Dummett, M. (1973). Frege: Philosophy of Language. New York: Harper & Row. DJM Duží, M., Jespersen, B., & Materna, P. (2010). Procedural Semantics for Hyperintensional Logic: Foundations and Applications of Transparent Intensional Logic. Springer. Francez Francez, N. (2017). On harmony and permuting conversions. Journal of Applied Logic, 21, 14–23. Frege1 Frege, G. (1948) [1892]. Sense and Reference. The Philosophical Review, 57(3), 209–230. Frege2 Frege, G. (1979). Posthumous Writings. Oxford: Basil Blackwell. Friedman Friedman, H. (1975). Equality between functionals. In R. Parikh (Ed.), Logic colloquium: Lecture notes in mathematics 453 (pp. 23–37). Berlin/Heidelberg: Springer. Girard Girard, J.-Y. (1989). Proofs and Types. Cambridge: Cambridge University Press. Hacking Hacking, I. (1979). What is Logic? The Journal of Philosophy, 76(6), 285–319. Herbelin Herbelin, H. (1994). A Lambda-calculus Structure Isomorphic to Gentzen-style Sequent Calculus Structure. Computer Science Logic, 61–75. Kreisel Kreisel, G. (1971). A survey of proof theory II. In J.E. Fenstad (Ed.), Proceedings of the Second Scandinavian Logic Symposium (pp. 109–170). Amsterdam: North-Holland. Lindley Lindley, S. (2007). Extensional Rewriting with Sums. In S. Ronchi Della Rocca (Ed.), Typed Lambda Calculi and Applications: TLCA 2007 (pp. 255–271). Berlin/Heidelberg: Springer. M-L Martin-Löf, P. (1975). 
About Models for Intuitionistic Type Theories and the Notion of Definitional Equality. In S. Kanger (Ed.), Proceedings of the Third Scandinavian Logic Symposium (pp. 81–109). Amsterdam: North-Holland. Muskens Muskens, R. (2005). Sense and the Computation of Reference. Linguistics and Philosophy, 28(4), 473–504. NegrivonPlato Negri, S., & von Plato, J. (2001). Structural Proof Theory. Cambridge/New York: Cambridge University Press. Pfenning Pfenning, F. (2000). Structural Cut Elimination: I. Intuitionistic and Classical Logic. Information and Computation, 157, 84–141. Pottinger Pottinger, G. (1977). Normalization as a homomorphic image of cut-elimination. Annals of Mathematical Logic, 12, 323–357. Prawitz1965 Prawitz, D. (1965). Natural Deduction. Stockholm: Almqvist & Wiksell. Prawitz1971 Prawitz, D. (1971). Ideas and results in proof theory. In J.E. Fenstad (Ed.), Proceedings of the Second Scandinavian Logic Symposium (pp. 235–307). Amsterdam: North-Holland. SU Sørensen, M., & Urzyczyn, P. (2006). Lectures on the Curry-Howard Isomorphism. Amsterdam: Elsevier Science. Statman Statman, R. (1983). λ-definable functionals and βη conversion. Archiv für Mathematische Logik, 23, 21–26. Sundholm Sundholm, G. (1994). Proof-Theoretical Semantics and Fregean Identity Criteria for Propositions. The Monist, 77(3), 294–314. Tranchini2016 Tranchini, L. (2016). Proof-theoretic semantics, paradoxes and the distinction between sense and denotation. Journal of Logic and Computation, 26(2), 495–512. Tranchini2018 Tranchini, L. (2018). Stabilizing Quantum Disjunction. Journal of Philosophical Logic, 47, 1029–1047. TS Troelstra, A., & Schwichtenberg, H. (2000). Basic Proof Theory. 2nd ed., Cambridge: Cambridge University Press. Urban Urban, C. (2014). Revisiting Zucker's Work on the Correspondence Between Cut-Elimination and Normalisation. In L. Pereira, E. Haeusler, & V. de Paiva (Eds), Advances in Natural Deduction: A Celebration of Dag Prawitz's Work (pp. 31–50). Dordrecht: Springer. Wideback Widebäck, F. (2001). Identity of Proofs. Stockholm: Almquist & Wiksell International. Zucker Zucker, J. (1974). The correspondence between cut-elimination and normalization. Annals of Mathematical Logic, 7, 1–112.
http://arxiv.org/abs/2307.04691v1
20230710164441
Metastable cosmic strings
[ "Wilfried Buchmuller", "Valerie Domcke", "Kai Schmitz" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "gr-qc" ]
0.4 CERN-TH-2023-118 MS-TP-23-37 0.4 July 2023 2.5cm Metastable cosmic strings 1cm Wilfried Buchmüller^a, Valerie Domcke^b, Kai Schmitz^c ^a Deutsches Elektronen-Synchrotron DESY, 22607 Hamburg, Germany ^b Theoretical Physics Department, CERN, 1211 Geneva 23, Switzerland ^c Institute for Theoretical Physics, University of Münster, 48149 Münster, Germany 2cm Many symmetry breaking patterns in grand unified theories (GUTs) give rise to cosmic strings that eventually decay when pairs of GUT monopoles spontaneously nucleate along the string cores. These strings are known as metastable cosmic strings and have intriguing implications for particle physics and cosmology. In this article, we discuss the current status of metastable cosmic strings, with a focus on possible GUT embeddings and connections to inflation, neutrinos, and gravitational waves (GWs). The GW signal emitted by a network of metastable cosmic strings in the early universe differs, in particular, from the signal emitted by topologically stable strings by a suppression at low frequencies. Therefore, if the underlying symmetry breaking scale is close to the GUT scale, the resulting GW spectrum can be accessible at current ground-based interferometers as well as at future space-based interferometers, such as LISA, and at the same time account for the signal in the most recent pulsar timing data sets. Metastable cosmic strings thus nourish the hope that future GW observations might shed light on fundamental physics close to the GUT scale. empty § INTRODUCTION The formation of topological defects is a generic feature of cosmological phase transitions <cit.>. Such defects are tied to spontaneous symmetry breaking in extensions of the Standard Model (SM), in particular in grand unified theories (GUTs). They include Nielsen–Olesen strings <cit.>, 't Hooft–Polyakov monopoles <cit.>, unstable “dumbbells” or “X-strings” connecting a monopole–antimonopole pair <cit.>, and other composite defects <cit.>. Monopoles would overclose the universe and must therefore be avoided or diluted by inflation. Domain walls will reach a scaling regime, but will still lead to an overclosure problem. On the contrary, cosmic strings evolve towards a scaling regime where their fraction of the total energy density remains constant. Together with characteristic signatures in the cosmic microwave background and in gravitational lensing, the stochastic gravitational-wave background (SGWB) from cosmic strings is a potentially very interesting messenger from the early universe (for reviews and references, see, e.g., Refs. <cit.>). For a large class of supersymmetric GUTs with symmetry breaking chains avoiding the monopole problem, cosmic-string formation is unavoidable <cit.>. Making use of supersymmetric hybrid inflation <cit.>, the string scale is close to the GUT scale, a prominent example being the breaking of B-L, the difference between baryon and lepton number <cit.>. String scales below the GUT scale are also possible and may be related to intermediate-mass right-handed neutrinos, which could render the SGWB a probe of thermal leptogenesis <cit.>. Pulsar timing array (PTA) observations <cit.> can probe the string tension of stable cosmic strings down to Gμ≲ 10^-10 <cit.>, where G denotes Newton's constant and μ is the energy per unit length of the string. These observations have now entered a new phase with evidence for a common-spectrum process at nanohertz frequencies first reported in Refs. 
<cit.>, followed by evidence for Hellings–Downs angular correlation, the smoking-gun signal of a SGWB, reported by PTA collaborations across the world in Refs. <cit.>. Beyond the astrophysical interpretation in terms of inspiraling supermassive black-hole binaries <cit.>, possible cosmological interpretations include stable and metastable cosmic strings <cit.> (see, e.g., Refs. <cit.> for an overview of possible cosmological signals). However, the originally favoured GUT-scale strings with a tension in the range Gμ≃ 10^-(8⋯6) are firmly excluded by these results, as they would lead to too large an SGWB signal in the PTA band. However, in theories where strings couple to monopoles, strings can decay by quantum tunneling into string segments connecting monopole–antimonopole pairs <cit.>. In the semiclassical approximation, the decay rate per string unit length is given by <cit.> Γ_d = μ/2 πexp( - πκ) with κ = m_M^2/μ , where m_M is the monopole mass. Given the exponential dependence of the decay rate on the parameter κ, and considering monopole masses larger than the string scale, metastable strings have generally been assumed to be effectively stable (see, e.g., Refs. <cit.>). However, a particularly interesting phenomenology is obtained for metastable cosmic strings with √(κ)∼ 8. Such values can indeed be obtained for SO(10) models with B-L strings <cit.>. In this case, the cosmic-string network survives for about an hour (redshift z ∼ 10^7) until monopole production becomes efficient, which implies that at high frequencies the resulting GW spectrum resembles that of stable cosmic strings, whereas at lower frequencies, corresponding to GWs sourced at later times, the spectrum is strongly suppressed <cit.>. As a result, metastable cosmic strings can not only provide a good fit to the PTA signal for GUT-scale string tensions <cit.>, but moreover, they can also easily evade any bounds at PTA scales while still yielding a strong signal at higher frequencies <cit.>, i.e., in the frequency bands relevant for LISA <cit.> and ground-based interferometers <cit.>. In this article, we will discuss the current status of metastable cosmic strings. Section <ref> deals with a minimal but representative example model: the breaking of SU(2)_R×U(1)_B-L down to U(1)_Y by an SU(2)_R Higgs triplet and two SU(2)_R Higgs doublets with quantum numbers suitable for an embedding in SO(10). The computation of the SGWB signal is presented in Section <ref>, with an emphasis on the theoretical prediction for the spectral tilt of the GW spectrum. Some aspects of stable and quasi-stable strings are reviewed in Section <ref>, and the role of inflation is described in Section <ref>. We conclude in Section <ref>. § METASTABLE STRINGS Metastable strings are a characteristic prediction of GUTs that lead, via several steps of spontaneous symmetry breaking, to the SM gauge group G_SM = SU(3)_C×SU(2)_L×U(1)_Y. Strings with tensions above the electroweak scale result from the spontaneous breaking of a U(1) group that commutes with G_SM. Similarly, monopoles arise once a non-Abelian gauge group is broken to a subgroup containing a U(1) factor. Then, if the U(1) symmetry involved in the production of monopoles partially overlaps or coincides with the U(1) symmetry responsible for string formation, the strings become metastable, i.e., pairs of monopoles and antimonopoles spontaneously nucleate along the strings by quantum tunneling. SM extensions giving rise to strings must feature a gauge group of at least rank 5. 
Starting from an exceptional Lie group at high energies, we can, e.g., consider the following symmetry breaking chain, G_SM⊂SU(5)×U(1)_X⊂SO(10) ⊂E(6) ⊂… , where SU(5) refers to the Georgi–Glashow SU(5) GUT group or to the flipped SU(5) model. Another possibility is to consider a sequence featuring an extended electroweak sector, G_SM ⊂SU(3)_C×SU(2)_L×SU(2)_R×U(1)_B-L⊂G_PS⊂SO(10) ⊂E(6) ⊂… , where G_PS = SU(4)×SU(2)_L×SU(2)_R denotes the Pati–Salam group. If a symmetry group G is broken to a subgroup H, the quotient ℳ = G/H corresponds to the manifold of degenerate vacuum states. The types of defects that may be formed in the symmetry breaking are governed by the topology of ℳ, which is encoded in the homotopy groups π_n(ℳ). Topologically stable strings can form if the first homotopy group is nontrivial, π_1(ℳ) ≠ I, i.e., there are loops in ℳ that cannot be contracted to a point. Similarly, topologically stable magnetic monopoles can arise if the second homotopy group is nontrivial, π_2(ℳ) ≠ I, so that there exist non-contractible two-dimensional surfaces in ℳ. We shall be particularly interested in two-step symmetry breakings G→H→K, where the relevant homotopy groups of the full quotient G/K are trivial, but the homotopy groups of the individual steps, G/H and H/K, are nontrivial. In this case, metastable defects can form. A simple example is the breaking of SO(10) to the Standard Model group via SU(5). The result crucially depends on the chosen Higgs representation <cit.>. The breaking chain SO(10) →SU(5)×U(1) →G_SM×ℤ_2, realized with a 45 in the first step and a 45⊕126 in the second, yields stable monopoles and, in the second step, also stable strings. On the contrary, for the closely related symmetry breaking with a 16-plet in the second step, SO(10) →SU(5)×U(1) →G_SM, broken by a 45 and a 45⊕16, the homotopy group of ℳ = SO(10)/G_SM is trivial, π_1(ℳ) = I, and there are no topologically stable strings. However, cosmologically interesting metastable strings can now form. Metastable strings can break apart into segments as a consequence of quantum tunneling events leading to the spontaneous nucleation of monopole–antimonopole pairs. Eventually, string decay leads to a population of short string segments where each segment has a monopole on one end and an antimonopole on the other. In the example in Eq. (<ref>), monopoles are formed both in the first and the second breaking step, where the latter also determines the string energy scale. In addition, there are other composite topological defects, which can be created in other symmetry breaking chains. One example is given by ℤ_2-strings, also known as “necklaces”, which correspond to one-dimensional string–monopole–string configurations, i.e., configurations where two strings are attached to each monopole <cit.>. More details and references on composite topological defects can be found in Ref. <cit.>. Realistic GUTs require large Higgs representations in order to break the GUT gauge group down to the SM, which complicates their analysis. On top of that, nonsupersymmetric models are sensitive to large radiative corrections and hence suffer from a severe naturalness problem. This observation triggered the investigation of still more involved models in the literature: supersymmetric GUTs with even larger Higgs sectors. In view of this situation, it is important not to lose sight of the fact that conventional spontaneous symmetry breaking is not the only way in which a fundamental GUT gauge group can be reduced to the SM gauge group. 
Higher-dimensional theories such as orbifold GUTs or string theory represent intriguing alternatives that deserve consideration (a review and references can, e.g., be found in Ref. <cit.>). In these constructions, the fundamental GUT gauge group is first partially broken in a geometric way, namely, by the compactification of extra dimensions, and only the remnant subgroup remaining after this first step is further reduced to the SM group via conventional spontaneous symmetry breaking. For these reasons, we will restrict ourselves to the simplest possible case leading to metastable strings in the following, the first embedding in Eq. (<ref>), which may mark the end of a long symmetry breaking chain that we, however, do not specify in detail, G_SM⊂SU(3)_C×SU(2)_L×U(2) , U(2) = SU(2)_R×U(1)_B-L /ℤ_2 . In order to break this group down to G_SM, we consider a Higgs triplet U ∼ (3,0) of SU(2)_R alongside a pair of Higgs doublets of SU(2)_R, S ∼ (2,q) and S_c ∼ (2̅,-q), that carry charges ± q under U(1)_B-L. The breaking of SU(2)_R leads to monopoles while the breaking of U(1)_B-L implies strings, yielding the necessary ingredients for metastable strings. Also, note that we divide out a ℤ_2 factor in Eq. (<ref>), which is necessary to avoid double counting of the center of SU(2)_R, which consists of the identity element and its negative, {I,-I}, and which is also contained in U(1)_B-L. For earlier discussions of defects in U(2) models with triplet and doublet Higgs fields but without supersymmetry, see Refs. <cit.>. §.§ Strings from supersymmetric B-L breaking The prospects to explain the recent PTA signal in terms of a SGWB from metastable cosmic strings motivate us to consider large string tensions. We are specifically interested in symmetry breaking scales far above the electroweak scale, at least of the order of v_s ∼ 10^13 GeV. It is reasonable to expect unbroken supersymmetry at such high energies, which is why we will focus on supersymmetric models of symmetry breaking in the rest of this paper, following the analysis presented in Ref. <cit.>. Our starting point is a supersymmetric Abelian Higgs model with two chiral superfields S and S_c and a gauge singlet ϕ that gives rise to spontaneous B-L breaking. The fields S and S_c carry charge q and -q under U(1)_B-L, respectively, and the Kähler potential and superpotential of the model (we use the same conventions as in Ref. <cit.>) are given by K = S^† e^2gqV S + S_c^† e^-2gqVS_c + ϕ^†ϕ , P = 1/4 W W + λϕ(v_s^2 - SS_c) . Here, V is a vector superfield, W is the supersymmetric field strength, and v_s is the scale of spontaneous symmetry breaking, which we can choose to be real and positive. From the auxiliary fields of the vector and chiral superfields, we can derive the scalar potential, 𝒱 = 1/2 D^2 + |F_S|^2 + |F_S_c|^2 + |F_ϕ|^2 , where the F and D terms follow from solving the associated equations of motion, D = - gq (|S|^2 - |S_c|^2) , F_S^* = λϕ S_c , F_S_c^* = λϕ S , F_ϕ^* = -λ(v_s^2 - SS_c) . The scalar potential and the kinetic terms for the scalar and vector fields constitute the bosonic part of the Lagrangian, ℒ_b = -1/4 F_μνF^μν - (D_μ S)^*(D^μ S) - (D_μ S_c)^*(D^μ S_c) - ∂_μϕ^* ∂^μϕ - 𝒱 , with covariant derivatives D_μ S = (∂_μ + igq A_μ)S and D_μ S_c = (∂_μ - igq A_μ)S_c, and where A_μ is the vector component in the vector multiplet V, F_μν is the corresponding field strength, and where chiral superfields and their scalar components are denoted by the same symbols. 
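As a quick numerical cross-check of the scalar potential just defined, the sketch below (purely illustrative; the values of g, q, λ are arbitrary and v_s is scaled to 1) verifies that configurations with |S| = |S_c|, SS_c = v_s^2 and ϕ = 0 have vanishing vacuum energy, while the symmetric point does not.

```python
# Illustrative check of V = D^2/2 + |F_S|^2 + |F_S_c|^2 + |F_phi|^2 defined above;
# the parameter values are arbitrary example choices (units with v_s = 1).
import numpy as np

g, q, lam, vs = 0.7, 1.0, 0.3, 1.0

def potential(S, Sc, phi):
    D     = -g * q * (abs(S)**2 - abs(Sc)**2)
    F_S   = lam * phi * Sc          # only the moduli |F| enter the potential
    F_Sc  = lam * phi * S
    F_phi = -lam * (vs**2 - S * Sc)
    return 0.5 * D**2 + abs(F_S)**2 + abs(F_Sc)**2 + abs(F_phi)**2

# Along S = v_s e^{i alpha}, S_c = S*, phi = 0 the potential vanishes for every alpha:
# a flat direction of degenerate supersymmetric vacua with spontaneously broken U(1)_B-L.
for alpha in np.linspace(0.0, 2 * np.pi, 7):
    S = vs * np.exp(1j * alpha)
    assert np.isclose(potential(S, np.conj(S), 0.0), 0.0)

# At the symmetric point S = S_c = 0 the F-term of phi does not vanish: V = lambda^2 v_s^4 > 0.
assert np.isclose(potential(0.0, 0.0, 0.0), lam**2 * vs**4)
```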
The vacuum manifold of the model is described by a D-flat direction, |S|^2 = |S_c|^2, which represents a flat moduli space of vacuum states with unbroken supersymmetry and spontaneously broken U(1)_B-L symmetry, S = v_s e^iα , S_c = S^* , ϕ = 0 . The particle excitations around the true vacuum are best described if we expand the fields S and S_c around their vacuum expectation values (VEVs), S = v_s e^iα + S' , S_c = v_s e^-iα + S'_c . In this field basis, we then find that the Goldstone multiplet (S'-S'_c)/√(2) is “eaten” by the massless vector multiplet V, which results in a massive vector multiplet with mass m_V = √(2)gqv_s. Similarly, the orthogonal linear combination (S'+S'_c)/√(2) and the singlet field ϕ fuse in a massive chiral multiplet with mass m_S = λ v_s. As in the nonsupersymmetric case, the vacuum manifold ℳ is the circle S^1, which has a nontrivial first homotopy group, π_1(ℳ) = ℤ. The model thus admits exited states in the form of topologically stable strings. On the supersymmetric moduli space, i.e., along the D-flat direction, these strings are described by the Nielsen–Olesen string solutions <cit.>. Static strings along the z axis and with winding number n correspond to field configurations of the form S = v_s f(ρ) e^niφ = S_c^* , A_0 = 0 , A_i = -n/gρ h(ρ) ∂_iφ , where we work in cylindrical coordinates (ρ,φ,z) and with boundary conditions f(0) = h(0) = 0 , f(∞) = h(∞) = 1 . Strings described by these field configurations exhibit a total magnetic flux along the string of 2n π/g with n ∈ℤ∖{0} and a string tension (i.e., energy per unit length) of μ = 2π v_s^2 B(β) = π m_V^2/(gq)^2 B(β) with β = m_S^2/m_V^2 = λ^2/2g^2 . Here, the parameter β measures the ratio of the Higgs and vector boson masses; and B is a slowly varying function of β, normalized such that B→1 in the Bogomol'nyi limit β→ 1 <cit.>. If β < 1, the strings will have larger Higgs-field cores than gauge-field cores, m_S^-1 > m_V^-1, which gives rise to type-I strings, in analogy to similar condensed-matter systems. Type-I strings are stable and attracted towards each other. The model defined in Eq. (<ref>) is intriguing, as it features a singlet field ϕ that can be identified as the inflaton in supersymmetric hybrid inflation <cit.>. It is, moreover, straightforward to extend it by adding supersymmetry breaking terms and by coupling it to a set of chiral right-handed-neutrino superfields. Extensions of the model along these lines can account for leptogenesis, dark matter, and a stage of cosmic inflation in accord with recent bounds on the primordial scalar spectral index in observations of the cosmic microwave background <cit.>. Scenarios of this type also give rise to stable cosmic strings with tension Gμ∼ 10^-7, where the specific value of the string tension is dictated by the other phenomenological aspects of the model, in particular the energy scale of inflation. For stable strings, such large string tensions have, however, been known to be in conflict with PTA measurements for many years <cit.>, which renders such scenarios unviable. §.§ Monopoles from supersymmetric SU(2) breaking Next, we turn to topologically stable 't Hooft–Polyakov monopoles <cit.> produced by the spontaneous breaking of SU(2) to U(1). To implement this breaking, we consider an SU(2) triplet U^a (a = 1,..,3) and work with the following Kähler potential and superpotential, K = U^† e^2gV U , P = 1/8 tr[W W] . 
Here, V = V^a T^a denotes the SU(2) vector superfield and the (T^a)_bc = -iϵ_abc are the SU(2) generators in the adjoint representation. The bosonic Lagrangian of the theory is given by ℒ_b = -1/4 F^a_μνF^aμν - (D_μ U^a)^*(D^μ U^a) - ig ϵ_abc D^a U^b * U^c + 1/2 D^aD^a + F^a *_U F^a_U , with gauge-covariant derivative (D_μ U)^a = ∂_μ U^a - g ϵ_abc A^b_μ U^c and non-Abelian field strength tensor F^a_μν = ∂_μ A^a_ν - ∂_ν A^a_μ -g ϵ_abc A^b_μ A^c_ν. In passing, we mention that the Lagrangian in Eq. (<ref>) corresponds to the bosonic Lagrangian of SU(2) Super-Yang–Mills theory with 𝒩=2 supersymmetry. Supersymmetric UV completions of the SM of this type can occur in certain orbifold compactifications. Starting with a supersymmetric Pati–Salam or SO(10) theory in five or six dimensions, orbifold constructions can lead to an 𝒩 = 2 sector with gauge group SU(2)_R in four dimensions. The total gauge group containing SU(2)_R as a subgroup is then spontaneously broken down to the SM in subsequent symmetry breaking steps. The classical theory defined by the potentials in Eq. (<ref>) features again a moduli space spanned by the flat direction U^a = u/√(2) δ_a3 modulo gauge transformations. Thanks to the large symmetry of the theory, this flat direction is preserved at the quantum level as well as when nonperturbative corrections are taken into account. The flat direction is therefore also present in the full theory, where it interpolates between two phases: a confinement phase with monopole condensation at small values of u and a perturbative Higgs phase at large values of u <cit.>. In the following, we will be concerned with the Higgs phase of the model, which corresponds to field values u much larger than the confinement scale Λ. As in the previous case of the Abelian Higgs model, the vacuum degeneracy along the flat moduli space can be lifted by coupling the superfield U to a new gauge singlet superfield ϕ' via a superpotential term of the form P = 1/8 tr[W W] + λ'/2ϕ' (v_u^2/2 - U^T U) , where v_u is a mass scale that we can choose to be real and positive and which sets the scale of spontaneous SU(2) breaking. The new superpotential in Eq. (<ref>) now only exhibits 𝒩=1 supersymmetry, as the new term breaks 𝒩=2 supersymmetry. Next, we derive the equations of motions for the auxiliary fields contained in V, U, and ϕ', D^a = ig ϵ_abc U^b* U^c , F_U^a* = λ' ϕ' U^a , F_ϕ'^* = -λ'/2(v_u^2/2 - U^T U) . Instead of a supersymmetric moduli space, we now find a supersymmetric vacuum at U^a = v_u/√(2) δ_a3 , ϕ' = 0 , where the value of U^TU is now fixed and U^a is determined up to an SU(2) rotation. Meanwhile, one rotation of the fields U^a still represents an unbroken symmetry, despite the fact that U^TU has nonvanishing expectation value, which means that a U(1) subgroup of SU(2) survives in the new ground state after symmetry breaking. The particle spectrum of the theory now consists of: a massless vector multiplet, V^3; a charged vector multiplet with mass m_V = gv_u that has “eaten” the Goldstone multiplets U^1,2; and a massive chiral multiplet with mass m_U = λ' v_u/√(2) composed of the multiplets U'^3=U^3-v_u and ϕ'. After symmetry breaking, the theory contains excited states in the form of topologically stable monopoles. To see this, note that the vacuum manifold ℳ is a 2-sphere S^2 spanned by the SU(2) rotations acting on the fields U^a in the ground state, just like in the nonsupersymmetric case. 
The vacuum manifold thus has nontrivial homotopy group π_2(ℳ) = ℤ, which indicates the existence of monopole solutions. The simplest monopole configuration is the “hedgehog” solution <cit.>, corresponding to radial field profiles of the form U^a = v_u/√(2) f(r) x^a/r , A^a_0 = 0 , A^a_i = h(r) ϵ_aij x^j/gr^2 . Now, r denotes the radial coordinate in spherical coordinates rather than cylindrical coordinates, r=(x^i x^i)^1/2, and the functions f and h are subject to the boundary conditions f(0) = h(0) = 0 , f(∞) = h(∞) = 1 . From Eq. (<ref>), we read off that the scalar field profile points into the radial direction, U^a ∝ϕ̂^̂â≡ x^a/r. The same is therefore true for the unbroken symmetry generator. Similarly, one obtains the following gauge-invariant magnetic field strength at large distances, B_i = -1/2 ϕ̂^a ϵ_ijk F^a_jk = x^i/gr^3 , which allows us to identity the magnetic charge of the monopole, 4π/g. In general, the monopole mass is 2n π/g with n ∈ℕ, i.e., the 't Hooft–Polyakov monopole corresponds to n=2. The mass of the monopole is subject to the Bogomol'nyi bound <cit.>, m_M ≥4π m_V/g^2 = 4π v_u/g , where the equal sign holds in the Prasad–Sommerfield limit λ'/g → 0 <cit.>. For nonzero values of the ratio of coupling constants, λ'/g, there exist no analytical expressions for the functions f and h in Eq. (<ref>). One therefore has to resort to numerical solutions of the field equations, which show that m_M is a monotonically increasing function of the Higgs mass. In the limit λ'/g →∞, one finds in particular an upper bound m^max_M ≃ 4π m_V/g^2 × 1.79 <cit.>. §.§ Monopoles and metastable strings Let us now combine the constructions in Secs. <ref> and <ref> and discuss metastable B-L strings decaying into short string segments with monopoles and antimonopoles on their ends. Defects of this type form if we embed the electroweak part of the SM gauge group, G_ EW = SU(2)_L×U(1)_Y, in the group G_221 = SU(2)_L×SU(2)_R ×U(1)_B-L/ℤ_2 and spontaneously break G_221 down to G_ EW in two steps. Hypercharge Y then follows from the linear combination of the neutral SU(2)_R and U(1)_B-L generators, Y = T^3_R + (B-L)/2. The two symmetry-breaking steps in this model break SU(2)_R×U(1)_B-L/ℤ_2 to U(1)_Y and end on the vacuum manifold ℳ = U(2)/U(1) = S^3. This manifold contains the union of the vacuum manifolds of stable strings and monopoles, S^1 ∪ S^2, and has trivial homotopy groups π_1(ℳ) and π_2(ℳ). The model thus neither features topologically stable monopoles nor strings. Instead, we will see that it can give rise to metastable strings or unstable dumbbells. In order to break U(2) = SU(2)_R×U(1)_B-L/ℤ_2 down to U(1)_Y, we shall work with similar Higgs representations as in Secs. <ref> and <ref>. Specifically, we introduce a B-L-neutral SU(2)_R triplet U as well as two oppositely B-L-charged SU(2)_R doublets S, S_c, U ∼(3,0) , S ∼(2,q) , S_c ∼(2̅,-q) , under SU(2)_R×U(1)_B-L. Defects in nonsupersymmetric U(2) models with triplet and doublet Higgs representations were previously discussed, e.g., in Refs. <cit.>. In the following, we are, however, interested in the supersymmetric version of the model, which we construct by choosing the Kähler potential K and superpotential P as a combination of Eqs. (<ref>), (<ref>), and (<ref>), supplemented by an additional mass term in P, [Our model is different from standard left-right-symmetric models, as we work with neutral triplets. In left-right-symmetric models, the triplets typically carry U(1) charge and thus occur in pairs <cit.>.] 
K = U^† e^2gV U + S^† e^2(gṼ + g'qV')S + S_c^† e^-2(gṼ+g'qV')S_c + ϕ^†ϕ + ϕ'^†ϕ' , P = 1/8 tr[W W] + 1/4 W'W' + 2 h S^T_c Ũ S + λ'/2ϕ' (v_u^2/2 - U^T U) + λϕ(v_s^2 - S^T_cS) - h v_u S^T_c S . Here, U = (U^1,..,U^3)^T is the triplet field written as a vector in the triplet representation; Ũ = U^a τ^a/2 is the triplet field written as a matrix in the doublet representation; V = V^a T^a is the SU(2)_R vector field in the triplet representation; Ṽ = V^a τ^a/2 ≡ T_R is the SU(2)_R vector field in the doublet representation; V' is the U(1)_B-L vector field; and W and W' are the supersymmetric SU(2)_R and U(1)_B-L field strengths. The covariant derivative in the bosonic sector, induced by the Kähler potential, is given by D_μ S = ∂_μ S + i(gṼ + g'qV')S. The terms involving the Yukawa coupling h are introduced for the following reason: First, the trilinear coupling, coupling the fundamental triplet U^a to the composite triplet S^T_cτ^a S ensures that a U(1) subgroup survives after symmetry breaking. Without this term, the initial U(2) group would in general be broken completely, which in our case would mean that no hypercharge gauge group U(1)_Y would remain in the electroweak sector. Second, the mass term for the pair of doublet fields, i.e., the last term in P in Eq. (<ref>), ensures that U(2) symmetry breaking results in a supersymmetric vacuum with ⟨ P⟩ = 0. This serves the purpose to separate the energy scales of SU(2)_R and U(1)_B-L breaking from the energy scale of supersymmetry breaking. Without this mass term, we would generically expect a contribution to the gravitino mass from the U(2) sector of the order of ⟨ P⟩/M^2_P∼ hv_uv_s^2/M^2_P. However, if ⟨ P⟩ = 0 after U(2) symmetry breaking, we retain the possibility that a separate supersymmetry-breaking sector results in a hierarchically smaller gravitino mass. As in Secs. <ref> and <ref>, the model exhibits again D-flat directions that are lifted by the coupling to the singlets ϕ and ϕ'. The supersymmetric true vacuum then corresponds to U^a = v_u/√(2) δ_a3 , S = S_c = v_s [ 1; 0 ] , ϕ' = √(2)hv_s^2/λ' v_u , ϕ = 0 . The fundamental triplet U^a and the composite triplet S^T_cτ^aS are parallel in this vacuum configuration, as desired, thanks to the Yukawa coupling h ≠ 0 in Eq. (<ref>). As mentioned above, this is necessary to keep an unbroken U(1) symmetry in the vacuum. Without the Yukawa coupling h, the relative orientation of U^a and S^T_cτ^aS would not be fixed. Next, let us discuss the mass spectrum of the model. In order to identify the mass eigenstates, we must shift the chiral multiplets around their vacuum expectation values, U^3 = v_u/√(2) + U^3' , S = [ v_s + S^0'; S^- ] , S_c = [ v_s + S^0'_c; S^+ ] , ϕ' = √(2)hv_s^2/λ'v_u + ϕ̂ . Then, by inspecting the terms linear in the vector fields V, Ṽ, and V' in the Kähler potential in Eq. (<ref>), we can identify the Goldstone multiplets, Π^∓ = 1/√(v_u^2 + v_s^2)(v_u U^∓ + v_s S^∓) , Π^0 = 1/√(2)(S^0' - S^0'_c) , where U^± = (U^1 ∓ i U^2)/√(2), which are respectively “eaten” by the vector multiplets V^± = 1/√(2)(V^1 ∓ iV^2) , V_X = (cosΘ V^3 + sinΘ V') , with tanΘ = 2g'q/g. The vector multiplet orthogonal to these two fields, V_Y = -sinΘ V^3 + cosΘ V' , remains massless, while the vector multiplets V^± and V_X acquire masses m^2_V = g^2 (v_u^2 + v_s^2) , m^2_X = g^2/2cos^2Θ v_s^2 . In addition to the Goldstone multiplets Π^± and Π^0 and vector multiplets V^±, V_X, and V_Y, we are left with six chiral multiplets, Σ^±, Σ^0, U^3', ϕ, and ϕ̂. 
The mass matrix of these fields follows from the quadratic part of the superpotential, P_m = - 2√(2)h(v_u^2+v_s^2/v_u) Σ^-Σ^+ - λ' v_u/√(2)ϕ̂ U^3' - √(2)v_s(λϕ - h U^3') Σ^0 - h v_s^2/v_u U^3'U^3' , where the linear combinations Σ^± = 1/√(v_u^2 + v_s^2)(-v_s U^± + v_u S^±) , Σ^0 = 1/√(2)(S^0' + S^0'_c) , are orthogonal to the Goldstone multiples Π^± and Π^0, respectively. We emphasize again the role of the Yukawa coupling: in absence of the h-dependent terms in Eq. (<ref>), the superpotential P_m simply corresponds to the mass terms discussed in Secs. <ref> and <ref>, i.e., the mass terms for SU(2)_R and U(1)_B-L breaking in isolation. For suitably chosen parameter values, the model discussed in this section allows us to break SU(2)_R ×U(1)_B-L down to U(1)_Y in two subsequent steps, each of which corresponding to a cosmological phase transition in the early universe. In the first step, a nonvanishing triplet expectation value ⟨ U^a⟩ breaks SU(2)_R to U(1)_R; and then in a second step, nonvanishing doublet expectation values ⟨ S⟩ and ⟨ S_c⟩ break U(1)_R×U(1)_B-L down to U(1)_Y. Note that analogous symmetries are present in the electroweak sector, where SU(2)_L contains the subgroup U(1)_L and where U(1)_L ×U(1)_Y contain in turn the electromagnetic subgroup U(1)_Q; even though electroweak symmetry breaking does not involve any Higgs triplets. Up to now, we treated the U(1)_B-L gauge coupling times the charge of the doublet fields, qg', as a free parameter. This is no longer possible as soon as one begins to consider embeddings of our model in either of the symmetry breaking chains in Eqs. (<ref>) and (<ref>). In Pati–Salam or SO(10) GUT extensions of the SM, the Higgs doublets S and S_c are embedded into Pati–Salam (4,1,2) ∼χ_L and (4̅,1,2̅) ∼χ_R^c representations, or into 16, 16 representations of SO(10), respectively [see Eq. (<ref>)]. Here, the Higgs doublets S, S_c are identified as the “lepton doublets” in χ_L, χ^c_R. For the Pati–Salam embedding, the covariant derivative in the bosonic sector reads D_μχ_L = ∂_μχ_L + i(g T^a_RV^a + g'1/2(B-L) V')χ_L. The normalization condition g^' 2/4 tr[(B-L)^2] = g^2tr[(T^3_R)^2] then implies g'√(2/3) = g, which corresponds to the mixing angle tanΘ = -√(3/2). The covariant derivative with the two U(1) factors U(1)_R and U(1)_B-L then reads D_μχ_L = ∂_μχ_L + ig (T^3_R V^3+ √(3/2) 1/2(B-L) V') χ_L. Note that the field S carries charge ± 1/2 with respect to the generators T^3_R and 1/2(B - L), respectively. Embedding the doublets S, S_c in 16-, 16-plets Φ, Φ^c of SO(10), as in Eq. (<ref>), implies that heavy Majorana neutrino masses must be generated by the nonrenormalizable operator ℒ_n = 1/M_* h_ij S^T L^c_i S^T L^c_j ⊂1/M_* h_ij Φ^c ψ_i Φ^cψ_j . Here, the fields L^c_i = (n^c_i,e^c_i)^T, i=1,..,3, denote the SU(2)_R doublets of right-handed neutral and charged leptons that are contained in the SO(10) 16 representations ψ_i of matter, and h_ij are Yukawa couplings. Alternatively, one can follow Eq. (<ref>) and break SO(10) with 126-, 126-plets Φ̃, Φ̃^c containing the SU(5) singlets S̃, S̃_̃c̃. Heavy neutrino masses are now generated by the renormalizable couplings ℒ_n = h_ijS̃ L^c_i L^c_j ⊂ h_ijΦ̃ψ_i ψ_j , as assumed, e.g., in Ref. <cit.>. The VEVs of S̃, S̃_̃c̃ leave a ℤ_2 discrete symmetry unbroken, which leads to topologically stable strings. 
The cosmological realization of the two symmetry-breaking stages in our model (in the form of cosmological phase transitions) leads to the formation of defects: monopoles in the first step and strings in the second step. For a monopole–string–antimonopole configuration, the magnetic fluxes of the string and the (anti)monopole have to match[Note that, in the U(2) model, this is only possible if g'q and g are integer multiples of each other. This is guaranteed by the Pati–Salam embedding.] (see, e.g., Ref. <cit.>). The string solution with lowest energy has winding number n=1. As the symmetry breaking field S has charge 1/2, it carries magnetic flux 4π/g. This can be matched by a n=2 monopole with mass m_M ∼ 4π v_u/g. Together with the string tension μ≃ 2π v_s^2, we then obtain for the parameter κ, which controls the metastability of cosmic strings, κ = m_M^2/μ∼8π/g^2v_u^2/v_s^2 . In supersymmetric theories, one expects g^2 ∼ 1/2 at the unification scale. This implies √(κ)∼ 7 v_u/v_s. As we shall see in the following section, metastable strings can be relevant for GWs in the PTA band for √(κ)≳ 8, which corresponds to v_u ≳ v_s. Note that the model predicts confined as well as unconfined magnetic flux for the monopole, which is estimated as 4π/g sin^2Θ (for a discussion, see Ref. <cit.>). An important open question concerns the range of validity of the relation in Eq. (<ref>). The size of the magnetic cores of the monopole and the string are given by m_V^-1 and m_X^-1, respectively. The string decay rate will also be affected by the false vacuum cores, whose size is given by the Higgs masses m_U and m_S for monopole and string, respectively. Moreover, Eq. (<ref>) uses estimates for the mass and tension of an isolated monopole as well as an isolated string, respectively. So far, no calculations have carried out for spatially extended composite defects. As v_u approaches zero, the semiclassical approximation used in the derivation of Eq. (<ref>) breaks down, and metastable strings turn into dumbbells that decay immediately. In the case where SU(2)_R×U(1)_B-L is broken to U(1)_Y by VEVs of the doublets S and S_c only, dumbbells or X-strings form, which are completely analogous to the Z-strings of the SM <cit.>. For |tanΘ| = √(3/2), X-strings are known to be unstable <cit.>. Metastable strings are a generic feature of GUTs. The supersymmetric breaking of U(2) = SU(2)_R×U(1)_B-L /ℤ_2 is the simplest example (albeit a representative one) of a much richer structure that occurs in realistic GUTs. It is an intriguing prospect that the metastability of cosmic strings might be tested with gravitational waves, which would provide direct information about the energy scales of GUT symmetry breaking stages. § STOCHASTIC GRAVITATIONAL-WAVE BACKGROUND The SGWB sourced by a metastable cosmic-string network was recently computed in Ref. <cit.> (see also Refs. <cit.>). As for stable cosmic strings, a population of string loops with number density ∘n(ℓ, t') radiates GWs with the power density per frequency <cit.> P_gw(t', f') = G μ^2 ∑_k = 1^k_maxℓ/f' ∘n(ℓ, t') P_k , where f' = 2 k/ℓ indicates the GW frequency, emitted by a loop of length ℓ oscillating in its kth harmonic excitation; t' is the time of GW emission and P_k = Γ/(k^4/3ζ[4/3]) with Γ≃ 50 is the power emitted by a single loop (assuming the emission is dominated by the contribution from cusps). 
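As a quick sanity check of the loop power spectrum quoted above, the harmonics P_k = Γ/(ζ[4/3] k^4/3) indeed sum to the total power Γ ≃ 50. The short sketch below is purely illustrative; the cutoff k_max is our own numerical choice, and the truncation error falls off like k_max^-1/3.

```python
# Check that the cusp-dominated harmonics P_k = Gamma / (zeta(4/3) k^(4/3)) sum to Gamma.
import numpy as np
from scipy.special import zeta

Gamma, k_max = 50.0, 10**6          # k_max is an arbitrary numerical cutoff
k = np.arange(1, k_max + 1)
P_k = Gamma / (zeta(4/3) * k**(4/3))
print(P_k.sum() / Gamma)            # ≈ 0.99, approaching 1 as k_max → ∞
```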
Integrating over t', we obtain the spectral energy density in GWs today normalized by the critical energy density, Ω_gw(t_0, f) = 16 π (Gμ)^2/(3 H_0^2 f) ∑_k k P_k ∫_0^z_i dz'/[H(z') (1 + z')^6] ∘n(2 k/f', t(z')) . Here H(z) is the Hubble parameter, we have switched the time variable to redshift z, and the argument of the loop number density ensures that we are accounting for all GWs emitted at frequency f' = (1 + z') f such that, after redshifting, they are observed at frequency f today. The remaining challenge is to determine the loop number density ∘n(ℓ, t'). In the limit of stable cosmic strings, we adopt the velocity-dependent one-scale (VOS) model <cit.> within the Nambu–Goto framework, in which one-dimensional string loops are formed at a fixed fraction α of the horizon and then shrink due to GW emission, ℓ(t) = α t' - Γ G μ (t - t'). In this case, the loop number density can be determined analytically by solving the corresponding kinetic equation, up to integration constants which can be extracted from simulations <cit.>. For example, in a radiation-dominated background, this yields ∘n^rad_∞(ℓ, t) = B/[t^3/2 (ℓ + Γ G μ t)^5/2] Θ(α t - ℓ) , where B = 0.18 and α = 0.1 are obtained from a fit to numerical simulations. The subscript ∞ (for κ→∞) refers to stable cosmic strings. For metastable cosmic strings, the kinetic equations are modified to take into account the decay of string loops to segments through the formation of a monopole–antimonopole pair, as well as the formation of segments from longer segments and super-horizon strings <cit.>. If the monopoles carry no unconfined flux, the segments themselves can have cosmological lifetimes and contribute to the SGWB <cit.>. However, as demonstrated in Ref. <cit.>, the GW spectrum generated by cosmic string loops alone provides a good approximation to the full spectrum in most of the parameter space, even when there is a contribution from segments. Moreover, the example in Sec. <ref> has unconfined flux. We therefore focus on the GW spectrum from string loops, in which case the key change compared to stable cosmic strings is an additional decay term in the kinetic equation for the loop number density accounting for the monopole–antimonopole formation on the loops. Matching the number density to Eq. (<ref>) at early times, t ≪ t_s = 1/Γ_d^1/2, then yields for the loop number density of the metastable cosmic string network at t > 1/Γ_d^1/2, ∘n^rad(ℓ, t) = B/[t^3/2 (ℓ + Γ G μ t)^5/2] e^-Γ_d [ℓ (t - t_s) + (1/2) Γ G μ (t - t_s)^2] Θ(α t_s - ℓ - Γ G μ (t - t_s)) . Here, the exponential factor accounts for the decay of the loops at t > t_s through the generation of monopoles, and the Heaviside function ensures that loop formation only occurs at t < t_s. For expressions for the loop number densities involving evolution during the matter-dominated era as well as expressions for the number densities of super-horizon strings and segments, see Ref. <cit.>. Fig. <ref> shows the GW spectrum obtained by inserting Eq. (<ref>) (and corresponding expressions for the matter-dominated era) into Eq. (<ref>). The dotted black curves show the limit of stable cosmic strings, κ→∞, whereas the colored curves show the prediction for the spectrum for two different values of κ, i.e., of the ratio of the symmetry breaking scales, and of the string tension μ. 
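Before turning to the features of this spectrum, the two radiation-era loop densities given above are simple enough to evaluate directly. The sketch below is our own illustration, working in Planck units with G = 1 (so that μ = Gμ) and using example values for Gμ and κ; it shows how the exponential factor progressively depletes the loop population at t > t_s.

```python
# Illustrative evaluation of the radiation-era loop densities above (our own sketch).
# Planck units with G = 1, so that mu = G*mu; parameter values are example choices.
import numpy as np

B_loop, alpha, Gamma = 0.18, 0.1, 50.0      # VOS fit parameters and loop GW power
Gmu, kappa = 1e-7, 8.0**2                   # example string tension and kappa = m_M^2/mu

Gamma_d = Gmu / (2 * np.pi) * np.exp(-np.pi * kappa)   # monopole nucleation rate per length
t_s = Gamma_d**-0.5                                    # time at which loop decay sets in

def n_rad_stable(l, t):
    return np.where(l < alpha * t, B_loop / (t**1.5 * (l + Gamma * Gmu * t)**2.5), 0.0)

def n_rad_meta(l, t):                       # valid for t > t_s
    supp  = np.exp(-Gamma_d * (l * (t - t_s) + 0.5 * Gamma * Gmu * (t - t_s)**2))
    alive = l < alpha * t_s - Gamma * Gmu * (t - t_s)
    return np.where(alive, B_loop / (t**1.5 * (l + Gamma * Gmu * t)**2.5) * supp, 0.0)

l = 0.05 * t_s                              # a loop of half the maximal formation size
for t in (10 * t_s, 100 * t_s, 1000 * t_s):
    ratio = float(n_rad_meta(l, t) / n_rad_stable(l, t))
    print(f"t = {t / t_s:5.0f} t_s :  n_meta / n_stable = {ratio:.1e}")
# → 6.4e-01, 6.9e-03, 1.7e-23: the loop population, and with it the GW emission at
#   late times (low frequencies), dies off exponentially after t_s.
```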
Large frequencies correspond to GWs produced at early times, and hence the spectrum produced by stable and metastable strings is identical, featuring a plateau at Ω_gw^plateau≃128 π/9 B Ω_r (G μ/Γ)^1/2 , where Ω_r h^2 = 4.15 · 10^-5 is the density parameter of radiation today. At lower frequencies, the earlier decay of the metastable cosmic string loops suppresses the GW signal, leading to a drop in the spectrum proportional to f^2. This drop sets in at a frequency <cit.> f_low∼ 3· 10^-9 Hz(50/Γ)^3/4(10^-8/Gμ)^1/2exp(-π(κ/4-16)) . GWs from metastable strings can be observed if their decay happens sufficiently late such that their redshifted frequencies are not much below f_low. On the observational side, a lower cutoff on the measurable frequencies at PTAs is given by the observation times, i.e., f^PTA∼ (10 yr)^-1≃ 3 nHz. The cutoff f_low depends exponentially on κ, and from Eq. (<ref>) one reads off that the condition for a potential discovery of GWs at PTAs, f_low≲ f^PTA, requires √(κ)≳ 8. Note that the predicted f^2 spectrum is remarkably different from the f^3 scaling expected for causal GW sources (including, e.g., GWs from domain walls) and is due to the fact that the cosmic-string network acts as a GW source over time scales much larger than a Hubble time. PTA collaborations across the world have recently reported the observation of a stochastic common-spectrum process<cit.> showing evidence for Hellings–Downs spatial correlations <cit.>, the hallmark signature of a SGWB. While the most plausible source remains supermassive black-hole binaries, the observed tension with common astrophysical population models <cit.> motivates a thorough investigation of a possible cosmological contribution. Upcoming data will improve our understanding of the spectral tilt, the isotropy and the presence of resolvable individual sources in this SGWB, which will all help to distinguish an astrophysical from a cosmological origin. With these promises and caveats in mind, we now focus in more detail on the GW signal of metastable cosmic strings in the PTA frequency band. As shown in Refs. <cit.>, this could explain the observed GW signal for 10^-11≲ G μ≲ 10^-7 and √(κ)≳ 8. For models of hybrid and tribrid inflation compatible with these values, see, e.g., Refs. <cit.>. From Eq. (<ref>), one reads off that √(κ)≃ 8 can be achieved if the two symmetry breaking scales v_u and v_s are close to each other. Metastable cosmic strings thus lead to observable effects at PTAs for v_u ≳ v_s. If the astrophysical origin of the currently observed signal should be confirmed, which corresponds to interpreting the current PTA data as an upper bound on a cosmological signal, this would shift the interest to the region v_u ≲ v_s (or to smaller values of Gμ), which remains compatible with such a constraint while still allowing for a large SGWB signal in the LISA and LIGO bands. [The described connection between PTA observation times and GUT-scale parameters may appear surprising, but an analogous case is known from neutrino physics. Neutrino mass differences √(Δ m^2)∼ 0.05 eV could only be discovered in atmospheric neutrino oscillations because the oscillation length is L ∼ 10^4 km (earth diameter) <cit.>. Smaller mass differences, e.g. √(Δ m^2)∼ 0.005 eV, would have remained unobserved. 
] Within the approximations mentioned above, the GW spectrum from metastable cosmic strings only depends on two parameters, the string monopole mass m_M and the string tension μ, or, dropping the logarithmic dependence on the Yukawa couplings, the two symmetry breaking scales v_u and v_s. Interpreting the PTA data as a SGWB, the current data <cit.> indicate an amplitude of 10^-10≲Ω_gw^PTA h^2 ≲ 10^-9 at the PTA peak sensitivity f_PTA = 3 nHz. The spectral index n_t = dlnΩ_gw/dln f is less constrained and varies more significantly across the different data sets, 0 ≲ n_t ≲ 3, with the PPTA data set preferring slightly smaller values, the EPTA 10.5 year data set preferring larger values and the NANOGrav data lying in the middle. Upcoming data and analysis will significantly improve the measurement of the spectral tilt, allowing a distinction between different SGWB sources. Remarkably, within the framework of metastable cosmic strings, the two observables Ω_gw^PTA and n_t allow to determine the two model parameters, v_u and v_s. This is shown in Fig. <ref>, where we fix the amplitude of the GW spectrum at f = 3 nHz to two distinct reference values, Ω_gw^PTA h^2 = 5 · 10^-10 (left) and Ω_gw^PTA h^2 =10^- 9 (right), and show how a future improved measurement of n_t (x-axis) would determine the GUT-scale symmetry breaking parameters (on the two vertical axes). [Since the cosmic string signal is not a perfect power law over the frequency range of PTAs, the precise value of the tilt n_t (both in the model prediction and signal reconstruction) depends on the underlying assumptions. Here, for concreteness, we determine n_t by linearly interpolating between the signal predictions at 2 and 4 nHz. Note that n_t is related to the often quoted tilt α of the dimensionless characteristic strain and to the spectral index (-γ) of the timing-residual power spectral density as n_t = 2α + 2 = 5 - γ.] The limit of stable cosmic strings requires Gμ≃ 4 · 10^-11 (7 · 10^-11) yielding n_t ≃ 0.7 (0.6) to reproduce this SGWB amplitude for Ω_gw^PTA h^2 = 5 · 10^-10 (10^-9). As the cosmic-string lifetime and hence κ is reduced, the string tension μ needs to be increased to maintain the same SGWB amplitude at 3 nHz. For quasi-stable strings, an increase in Gμ comes with a decrease in n_t, until with a further decrease of κ the f^2 part of the spectrum enters the PTA band and the spectral index starts increasing again. For large string tensions (large v_s), the desired SGWB amplitude can only be achieved by significantly reducing the string lifetime (reducing κ), recovering the asymptotic f^2 scaling. Of course in this case, unless the reheating temperature is very low or a non-standard cosmological history is invoked <cit.> (see, e.g., Ref. <cit.>), the SGWB will exceed the bound Ω_gw≤ 5.8 · 10^-9 (1.7 · 10^-8 for a linear prior) set by the LIGO–Virgo–KAGRA (LVK) collaboration in the 100 Hz range <cit.>. If the current preference for a spectral index larger than n_t ≃ 1 persists, this moreover disfavours the limit of stable strings resulting in a sweet spot with v_s = few × 10^14 GeV and √(κ)≃ 8.3 (see also Ref. <cit.>). We conclude this section by drawing attention to some theoretical uncertainties and open questions in the calculation of the GW spectrum. Our calculations here are based on the Nambu–Goto action, taking cosmic strings to be infinitely thin, and moreover we focus on the GW emission by cusps on the cosmic string loops. 
Alternatively, cosmic strings can be modeled using lattice simulations of classical field theory (Abelian Higgs model) and one may consider GW emission from kinks as well as GW bursts. For reviews and more detailed discussions, see Refs. <cit.>. In summary, a robust understanding of the substructure on string loops remains challenging but is crucial to accurately estimate the GW spectrum. § STABLE AND QUASI-STABLE STRINGS The GW spectrum of metastable strings in the PTA frequency band and above agrees with the one of stable strings for monopole-mass–string-tension ratios √(κ)≳ 9 <cit.>. These strings are usually referred to as quasi-stable. They can explain the PTA results for Gμ = 10^-(11⋯10), corresponding to a spectral tilt of 0 ≲ n_t ≲ 0.8 <cit.>, where the upper bound is relatively sensitive to the details of the modelling of the cosmic-string network <cit.>. Many studies have explored possible connections between an intermediate string scale v_s ∼ 10^13 GeV and other predictions of supersymmetric and nonsupersymmetric GUT models. In nonsupersymmetric SO(10) models, requiring gauge coupling unification together with an intermediate string scale significantly restricts the allowed symmetry breaking chains. Moreover, the unification scale has to be large enough such that the proton decay rate is smaller than current experimental bounds <cit.>. The situation is similar in supersymmetric SO(10) models <cit.>. Since SO(10) GUTs with a large string scale around 10^13 GeV generically contain heavy Majorana neutrinos, weakly interacting neutrinos with sub-eV Majorana masses, as well as leptogenesis are naturally incorporated. The discussion of SO(10) models can be extended to E_6 models where new types of monopoles and strings appear <cit.>. One may also consider extensions of SO(10) models with a Peccei–Quinn symmetry whose breaking yields an axion <cit.>. The model predicts two types of monopoles, related to the GUT scale and an intermediate scale, and in addition topologically stable strings produced at an intermediate scale below 10^13 GeV. The related SGWB is too weak to be observed by PTAs or LVK, but can be detected at SKA, LISA and ET/CE. On the other hand, for baryogenesis after primordial-back-hole evaporation as discussed in Ref. <cit.>, the predicted scale of the B-L strings is too large to be consistent with PTA data. These are some examples of the discriminating power of SGWB signals in the PTA band. So far, we have assumed that a fundamental GUT is broken to the SM by a sequence of symmetry breakings that are all realized by the Higgs mechanism. One can then expect a plethora of topological defects produced in cosmological phase transitions as sources of a SGWB. However, it is far from obvious that all symmetry breakings are realized by the Higgs mechanism. Nonsupersymmetric GUTs suffer from severe fine-tuning problems, and supersymmetric GUTs with realistic fermion mass matrices require large Higgs representations, which make them almost intractable. Attractive alternatives are GUTs in higher dimensions and in string theories. Compactification to four dimensions will then reduce the GUT group to a subgroup whose further breaking to the SM group could then proceed via the Higgs mechanism (for a review and references, see, e.g., Ref. <cit.>). This still leaves room for some topological defects. Clearly, the discovery of monopoles or evidence for strings in a SGWB would be extremely valuable as a guide to a grand unified theory beyond the SM. 
§ COSMOLOGICAL DEFECTS AND INFLATION Metastable cosmic strings require two steps of spontaneous symmetry breaking. In the first step, an SU(2) group, which may be embedded in some GUT group, is broken to U(1), leading to monopoles as topological defects. In the second step, a U(1) group is spontaneously broken leading to strings as topological defects. This U(1) group must not be orthogonal to the U(1) contained in SU(2) in order to allow the string to split into segments having monopoles and antimonopoles at the ends. Between the SU(2) and U(1) phase transitions an inflationary period must have taken place in order to dilute the produced monopoles but to keep the cosmic strings. In fact, one of the motivations for cosmic inflation has been the “monopole problem” of GUTs in standard cosmology (see, e.g., Ref. <cit.>). One may worry that a mass ratio of √(κ)∼ 8 between the topological defects sourced by these phase transitions does not leave enough space for an inflationary period. However, while the chronological order of the symmetry breaking stages in hybrid inflation is set by the Higgs masses, these are linked to the corresponding symmetry breaking scales (or Higgs vacuum expectation values) through Yukawa couplings. This leaves enough freedom to implement hybrid inflation <cit.>. In the supersymmetric SU(2)_R×U(1)_B-L model discussed in Sec. <ref>, inflation is naturally realized by means of F-term hybrid inflation <cit.>. The two singlets, needed in the superpotential to ensure SU(2)_R and U(1)_B-L breaking, play the role of inflatons. In combination with the supersymmetric SM and right-handed neutrinos, a consistent picture of inflation, leptogenesis and dark matter is obtained for a large scale of B-L breaking, v_s ∼ 10^15 GeV <cit.>. Alternatively, one can consider sneutrino tribid inflation in a gauged U(1)_B-L extension of the supersymmetric SM. Metastable strings are again obtained by embedding the model in SO(10) <cit.>. Depending on the pattern of supersymmetry breaking, one obtains gravitino dark matter <cit.>. Note that in all SO(10) models the precise connection between GUT masses and couplings and the monopole-mass–string-tension ratio √(κ) is an open question that remains to be investigated. Monopoles and strings have also been considered in nonsupersymmetric SO(10) models with an intermediate string scale v_s ∼ 10^13 GeV. The inflaton is introduced as a GUT-singlet scalar field whose potential is generated by radiative corrections. Monopoles and strings may be present today at an observable level, and stochastic GWs may respect PTA and LVK bounds and only become visible at LISA, SKA, BBO and ET/CE <cit.>. The formation of monopole–antimonopole–string configurations may lead to a suppression of the GW spectrum at PTA frequencies <cit.>. In GUT models with monopoles and metastable strings the incorporation of inflation is of crucial importance. One is then faced with the challenging problem to treat gauge coupling unification, fermion masses, proton decay, baryogenesis, (potentially) supersymmetry breaking and dark matter, together with inflation and the formation of a cosmic string network in a quantitatively consistent way. In the examples mentioned above, some progress has been made, but there is much room for improvement. Evidence for (metastable) strings from a SGWB would be a key element to guide us toward grand unified theories. 
§ CONCLUSIONS AND OUTLOOK The evidence for a gravitational-wave background at nanohertz frequencies recently reported by PTAs around the globe <cit.> opens a new window to study the evolution of our universe. The observed signal at nanohertz frequencies, ten orders of magnitude below the LIGO–Virgo–KAGRA band, may have an astrophysical origin — inspiralling supermassive black-hole binaries — but it might also be a remnant of events in the early universe. One possible cosmological interpretation of the observed signal are metastable cosmic strings, which have a strong theoretical motivation in the framework of GUTs. The corresponding GW spectrum is characterized by two parameters: the string tension μ = 2π v^2_s, where v_s is the associated symmetry breaking scale, and the ratio between monopole mass squared and string tension, κ = m^2_M/μ, which determines the lifetime of the string network. The signal in the most recent PTA data sets <cit.> is well described by a power law with a characteristic amplitude of the order of 10^-10≲Ω_gw h^2 ≲ 10^-9 and a positive spectral tilt, 0 ≲ n_t ≲ 3, around the current PTA peak sensitivity of roughly 3 nHz. In terms of the parameters of metastable cosmic strings, this implies a string tension 10^-11≲ Gμ≲ 10^-7 and a decay parameter √(κ)≳ 8, where the upper (lower) bound on Gμ (√(κ)) arises from the constraints set by ground-based interferometers on the amplitude of the SGWB. As illustrated in Fig. <ref>, a value of the spectral tilt n_t ≳ 1, as preferred by the most recent PTA data, favours values v_s ≥ few × 10^14 GeV, close to the GUT scale. Such large values of the string tension will be conclusively tested once the LIGO and Virgo ground-based interferometers reach design sensitivity in the coming years <cit.>. In order to distinguish metastable cosmic strings from other interpretations of the SGWB signal at PTA frequencies, a more precise determination of the spectral tilt will be important. Moreover, like most other cosmological signals, the SGWB from metastable cosmic strings is largely isotropic, as opposed to the significant anisotropies, and the possible presence of resolvable sources, which are expected for a GW signal from supermassive black-hole binaries <cit.>. Future pulsar observations and combinations of existing PTA data sets will shed light on these questions in the near future. In addition, GW observations in other frequency bands are an extremely powerful probe of the cosmic-string hypothesis, as the predicted signal spans many orders of magnitude in frequency: smaller frequencies would be valuable to test the characteristic f^2 behaviour of the spectrum, current and future ground-based detectors will be able to distinguish GUT-scale metastable strings from intermediate-scale stable strings, and the space-based interferometer LISA will probe string tensions down to values well below the current reach of PTAs. If upcoming observations point to an astrophysical origin of the current PTA signal, the results presented here can be interpreted as upper bounds on Gμ and κ, demonstrating the potential of ground- and space-based interferometers to probe the remaining parameter space of GUT-scale cosmic strings. On the theoretical side, the calculation of the GW spectrum has to be improved in several ways. Most importantly, a precise and robust understanding of the substructure of string loops is crucial for the estimation of the SGWB. 
In addition, the large value of v_s suggested by the recent PTA data calls for further explicit studies of metastable strings in GUT models. As explained in Section <ref>, the value √(κ)≃ 8.3 hinted at by the data requires the energy scales v_u and v_s of the symmetry breakings leading to monopoles and strings, respectively, to be close to each other. This is a strong constraint on GUT model building that remains to be investigated, with consequences for neutrinos, leptogenesis and inflation.
Acknowledgments
The work of K. Sc. is supported by the Deutsche Forschungsgemeinschaft (DFG) through the Research Training Group, GRK 2149: Strong and Weak Interactions — from Hadrons to Dark Matter.
http://arxiv.org/abs/2307.05657v1
20230711155600
Mixed-Precision Quantization with Cross-Layer Dependencies
[ "Zihao Deng", "Xin Wang", "Sayeh Sharify", "Michael Orshansky" ]
cs.NE
[ "cs.NE" ]
Quantization is commonly used to compress and accelerate deep neural networks. Quantization that assigns the same bit-width to all layers leads to large accuracy degradation at low precision and is wasteful at high precision settings. Mixed-precision quantization (MPQ) assigns varied bit-widths to layers to optimize the accuracy-efficiency trade-off. Existing sensitivity-based methods simplify the MPQ problem by assuming that quantization errors at different layers (or at different blocks of layers) act independently. We show that this assumption does not reflect the true behavior of quantized deep neural networks. Importantly, we show that not fully addressing cross-layer dependencies leads to sub-optimal decisions. We propose the first sensitivity-based MPQ algorithm that captures the cross-layer dependency of quantization error for all layers. Our algorithm (CLADO) enables a fast approximation of pairwise cross-layer error terms by solving linear equations that require only forward evaluations of the network on a small amount of data. Decisions on layerwise bit-width assignments are then determined by optimizing a new MPQ formulation dependent on these cross-layer quantization errors via the Integer Quadratic Program (IQP), which can be solved within seconds. We conduct experiments on multiple CNNs and transformer-based models on the ImageNet and the SQuAD dataset and show that CLADO delivers state-of-the-art mixed-precision quantization performance. § INTRODUCTION Reducing the storage and computational requirements of the state-of-the-art deep neural networks (DNNs) is of great practical importance. An effective way to compress DNNs is through model quantization <cit.>. A quantization strategy that starts with a pre-trained model is known as post-training quantization (PTQ). Typically, PTQ requires only a small amount of data to evaluate the statistics of activations and weights. PTQ uses the statistics to find the optimal quantization ranges of activations and weights for each layer in the model. This process, known as calibration, is data-efficient and fast in deployment. Calibration requires the bit-width of each layer to be specified. In its simplest form, the same bit-width is used for all layers, i.e., uniform precision quantization (UPQ). In practice, calibration with 8-bit UPQ leads to negligible accuracy degradation for most DNNs. However, quantization to precisions lower than 8-bit often results in substantial accuracy degradation: e.g., ∼30-40% accuracy drop on ResNet models at 4-bit <cit.>. A specific limitation of UPQ is that it ignores the differences in layers' tolerance to quantization and treats all layers equally.
However, it is observed that some layers are more robust than others <cit.>: quantizing some layers to low bit-width leads to only negligible performance drop, whereas even moderate quantization of others leads to significant degradation. To take advantage of this phenomenon, mixed precision quantization (MPQ) seeks an optimal bit-width assignment across layers that achieves lower model prediction error at the same target compression rate as UPQ. Naïvely formulated, the search space of MPQ is exponential in the number of layers (L) to quantize, rendering the problem NP-hard. In order to make the MPQ problem tractable, prior work presented two classes of solutions <cit.>: search-based methods and sensitivity-based methods. Search-based methods <cit.> consider the quantized network as a whole and evaluate the model with a large number of bit-width assignments. The evaluations are used to inform the search process towards an optimal solution. These search-based methods are expensive, hard to parallelize, and usually take hundreds or even thousands of GPU hours because of their iterative search nature. Sensitivity-based methods <cit.>, on the other hand, evaluate a closed-form metric that measures the tolerance of layers to quantization. Such a metric, viz. sensitivity, is a property of each quantized layer. Various formulations have been proposed for the sensitivity metric in practice: e.g. the Kullback-Leibler divergence between the quantized and the full-precision layer outputs <cit.>, the largest eigenvalue of the Hessian <cit.>, the trace of the Hessian <cit.>, the Gauss-Newton matrix that approximates the Hessian <cit.>, or the quantization scale factors <cit.>. Despite the diversity in the formulation of sensitivity, all sensitivity-based MPQ methods minimize total sum of sensitivities across layers, constrained by a target compression ratio. Such an optimization problem is much more efficient to solve than search-based methods. Our work is also sensitivity-based. A critical limitation of existing sensitivity-based MPQ algorithms is the assumption that the impact of layer-wise quantization is independent across layers, and the sensitivity metric, is thus, linearly additive. While it is an objective that is easy to formulate and efficient to optimize, it fails to capture the actual interactions between quantized layers, to which we refer to as the cross-layer dependency of quantization error and, under such circumstances, a MPQ solution that is aware of such dependencies should outperform one based on the assumption of independence. A central claim of our work is that the assumption of layers' independence and zero cross-layer dependency of error is inaccurate because interaction of quantization errors between layers exists empirically. Figure <ref> illustrates the impact to model performance resulted from 2-bit quantization of ResNet-50 on ImageNet. We only quantize convolutional layers and each layer is assigned an unique index from 0 to 52. The diagonal terms (i,i)s show the increase in loss caused by quantizing a single layer i. The off-diagonal terms show the extra increase in loss due to quantizing a pair of layers (i,j), compared to the sum of losses due to quantizing each layer alone. They capture the interaction of errors between layers. We observe that a non-negligible number of interactions have magnitudes comparable to diagonal terms. Moreover, the effect of interactions can be different when quantizing different sets of layers. 
Some of the interactions are destructive: they lead to extra increase in the sum of losses due to quantizing each layer alone, as indicated by the off-diagonal terms with positive values. Some others are helpful: they reduce the sum of losses, as indicated by the off-diagonal terms with negative values. However, all of these interactions are treated as negligible and ignored in prior sensitivity-based MPQ. Importantly, ignoring these cross-layer interactions leads to suboptimal MPQ solutions (see Section.<ref> for real-world cases of suboptimality with ResNet models on ImageNet classification). To overcome this limitation of naive sensitivity-based MPQ, we propose CLADO (Cross-LAyer-Dependency-aware Optimization), an algorithm that measures the cross-layer dependency for all layers efficiently and optimizes a new cross-layer dependency aware objective via Integer Quadratic Programming (IQP). We start by applying the second-order Taylor expansion on network quantization loss. We decompose the loss into two components with each being a sum of terms. Terms in the first component are specific to individual layers and they act independently across layers. They are well-addressed in prior work and we refer to them as the layer-specific sensitivities. Terms in the second component, which prior work ignores, represent interactions between layers and we refer to them as the cross-layer sensitivities. Naively evaluating them requires calculation of inter-layer Hessian matrices, i.e., off-diagonal blocks of the network Hessian. However, direct computation of these matrices requires quadratic time and space complexity in the number of network parameters, which makes the computation practically infeasible. We propose an efficient backpropagation-free method that avoids Hessian calculation and computes sensitivities by solving a set of linear equations, that only require O(L^2) evaluations of network, on a small amount of data, to which we refer to as the sensitivity set. These sensitivities allow us to reformulate the original MPQ problem as an IQP that contains O(L) binary variables and has linear constraints. The newly formulated IQP can be solved within seconds. The contributions of our work are: * CLADO, a sensitivity-based algorithm that captures the cross-layer interactions of quantization error and transforms the MPQ problem into an Integer Quadratic Program that can be solved within seconds. * CLADO is enabled by an efficient method to compute the cross-layer dependencies in O(L^2) time. It is backpropagation-free and only requires forward evaluations of DNNs on a small amount of training data. * We present ablation studies that verify the importance of cross-layer dependencies in MPQ. Experiments on CNNs and transformer models show that CLADO achieves state-of-the-art mixed-precision quantization performance. E.g., on ImageNet, without finetuning, CLADO demonstrates an improvement, in top-1 classification accuracy, of up to 27% over uniform precision quantization, and up to 15% over existing MPQ methods; after finetuning, CLADO achieves ≤ 1% degradation while other alternatives do not under several tight mode size constraints. § PRIOR WORK Existing mixed-precision quantization techniques can be divided into two classes: search-based and sensitivity-based. Search methods evaluate the mixed-precision quantized networks' performance and use it to guide the optimization process. 
HAQ <cit.> and AutoQ <cit.> use Reinforcement Learning (RL), where all possible bit-width assignments of layers are treated as the action space and the MPQ performance is interpreted as the reward. Other search methods, such as MPQDNAS <cit.> and SPOS <cit.>, apply Neural Architecture Search (NAS) to make MPQ a differentiable search process. The advantage of these methods is that they make decisions based on explicit evaluation of the quantized model's loss. However, they are computationally demanding, usually taking hundreds or even thousands of GPU-hours to complete <cit.>. In contrast, sensitivity-based methods, use layer sensitivity as a proxy metric (or “critic”) to assess the impact of quantization on model performance. The sensitivities of layers in certain quantization precision are usually fast to measure, and once measured, they are reused in estimating the optimization objective as a function of different combinations of layer-wise precision assignments. This type of methods, including ZeroQ <cit.>, variants of HAWQ <cit.> and MPQCO <cit.>, formulates the MPQ as a constrained combinatorial optimization problem. The objective to be minimized is the sum of layers' sensitivities that quantify the overall quantization impact, while the constraints ensure certain target compression requirements, such as memory and/or computational budgets. (We note that there is also prior work, BRECQ <cit.>, that defines sensitivities on blocks of adjacent layers. Except for the granularity of sensitivity measurement, this work shares the same methodologies as the above-described approach in formulating the MPQ as an optimization problem with its objective being the summation of sensitivities and its constraints being the compression requirement.) To measure the layers' sensitivities, a small subset of training data is needed (e.g., ∼1024 training samples for ImageNet). We refer to it as the sensitivity set. Different work defines sensitivities differently. Most of them use approximations to the Taylor expansion of network loss. Therefore, sensitivities defined in these work have theoretical implications to the increase in quantization loss. This line of work includes: <cit.> uses the largest eigen value of H_w^(l); <cit.> uses the average trace of H_w^(l); <cit.> uses the Gauss-Newton Matrix. A few other work, e.g.,<cit.>, does not derive sensitivity from the Taylor expansion and have sensitivities defined with different physical meanings than the increase in quantization loss. The detailed formulas used by these work are presented in Table <ref>. Since sensitivity-based methods optimize a proxy, instead of the real increase in network loss, they do not require iterative model evaluation and are efficient to solve. Recent work <cit.> suggests that state-of-the-art sensitivity-based algorithms achieve performance comparable to that of search-based algorithms. Our proposed method CLADO is sensitivity-based. § CROSS-LAYER DEPENDENCY AND MPQ OPTIMALITY Empirically, cross-layer dependencies play non-negligible role in the search of optimal MPQ solutions and ignoring them may lead to suboptimal solutions. As for evidence, we present, on ImageNet, two examples of MPQ suboptimality caused by ignoring the cross-layer dependencies. To make the suboptimality easy to understand, we focus on only a few layers and assume that the problem is to choose two layers to quantize so that the quantization loss is minimized. 
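The effect can be reproduced with a few lines of code. The sketch below uses the ResNet-34 sensitivity values reported in the next subsection (interaction terms that are not reported there are set to zero for illustration) and compares the layer pair selected with and without the off-diagonal terms.
```python
import itertools
import numpy as np

# Sensitivity values reported for ResNet-34 (2-bit) in the next subsection; the
# interaction terms not reported there are filled with zeros for illustration only.
layers = [12, 19, 22, 23]
S = np.array([[0.115, 0.009,  0.000,  0.000],
              [0.009, 0.140,  0.000,  0.000],
              [0.000, 0.000,  0.246, -0.070],
              [0.000, 0.000, -0.070,  0.148]])

def naive_loss(i, j):   # ignores the cross-layer interaction term
    return S[i, i] + S[j, j]

def true_loss(i, j):    # includes the pairwise interaction term
    return S[i, i] + S[j, j] + 2 * S[i, j]

pairs = list(itertools.combinations(range(len(layers)), 2))
naive_pick = min(pairs, key=lambda p: naive_loss(*p))
best_pick = min(pairs, key=lambda p: true_loss(*p))
print("naive pick:", [layers[k] for k in naive_pick], "-> true loss", round(true_loss(*naive_pick), 3))
print("best pick :", [layers[k] for k in best_pick], "-> true loss", round(true_loss(*best_pick), 3))
```
Ignoring the interactions selects layers (12,19), whereas the interaction-aware choice (22,23) yields the smaller true loss, as detailed below.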
§.§ ResNet-34 Figure <ref> shows the sensitivity matrix of ResNet-34 for 2-bit quantization on four layers: (index 12), (index 19), (index 22), and (index 23). The values are computed on 40k training samples. Ignoring the cross-layer interactions (off-diagonal terms) leads to the selection of layers (12,19) because it gives the smallest predicted loss 0.115+0.140=0.255. However, the real induced loss of quantizing layers (12,19) is 0.115+0.140+2×0.009=0.273 and the optimal solution is quantizing layers (22,23) with the smallest induced loss 0.246+0.148+2×(-0.070)=0.254. §.§ ResNet-50 Figure <ref> shows the sensitivity matrix of ResNet-50 for 4-bit quantization on three layers: they are (index 19), (index 24), and (index 25). Again, the values are computed on 40k training samples. Ignoring the cross-layer interactions leads to the selection of layers (19,24) because it gives the smallest predicted loss 0.016+0.022=0.038. However, the real induced loss is 0.038+2×0.004=0.046. The optimal solution is quantizing layers (19,25) with the smallest induced loss 0.016+0.026+2×(-0.001)=0.040. § CROSS-LAYER-DEPENDENCY-AWARE OPTIMIZATION (CLADO) §.§ Preliminaries We first introduce the notation and formulation of the MPQ problem following convention set in <cit.>. Notation: We assume an L-layer (indices start from 0) neural network f: Θ×𝕏→𝕐 and a training dataset of N samples (𝐱^(n), 𝐲^(n)) ∈𝕏×𝕐 with n={1, …, N}. The model maps each sample 𝐱^(n) to a prediction 𝐲̂^(n) = f( θ, 𝐱^(n)), using some parameters θ∈Θ. Then the predictions are compared with the ground truth 𝐲^(n) and evaluated with a task-specific loss function ℓ: 𝕐×𝕐→ℝ. For example, the cross-entropy loss is used for image classification. This leads to the objective function to minimize ℒ: Θ→ℝ: ℒ(θ) =1/N∑_n=1^N ℓ(f(θ, 𝐱^(n)), 𝐲^(n)) =1/N∑_n=1^N ℓ^(n)(θ) . We denote the weight tensor of the l^th layer as W^(l)∈ℝ^c_o × c_i × k × k and its flattened version as w^(l)∈ℝ^c_o c_i k^2, where k is the kernel size for a convolutional layer and k=1 for fully-connected layers, c_i and c_o are the number of input and output channels, respectively. The quantization function is denoted by Q: ℝ^D ×ℤ^+→Π_b, which takes a full-precision vector and the quantization bit-width as input and produces the quantized vector. In this paper, we only consider uniform symmetric quantization as it is the most widely used scheme in practice due to its ease of implementation in hardware. As a result, Π_b=s ×{-2^b-1, …, 0, …, 2^b-1-1} for signed input and s ×{0, …, 2^b-1} for unsigned one, where b is the quantization bit-width and s is the quantization scale factor. The quantized b-bit weight is given by Q(w, b)= clip(⌊ w / s⌉, -2^b-1,2^b-1-1) × s for full precision w. Discrete constrained optimization formulation: Let w≜{w^(l)}_l=0^L-1 be the set of flattened weight tensors of the network. To find the optimal bit-width assignment with the goal of minimizing total model size, MPQ can be written as the following discrete constrained problem: [ min _{b^(l)}1/N∑_n=1^N ℓ(f(w+Δ w, 𝐱^(𝐧)), 𝐲^(𝐧)); s.t. Δ w^(l)=Q(w^(l), b^(l))-w^(l); ∑_l|w^(l)| · b^(l)≤ C_target; b^(l)∈𝔹; l ∈{0, …, L-1}, j ∈{1, …, M} ] 𝔹={b_1,b_2,...,b_|𝔹|} denotes the set of bit-widths to be assigned in MPQ. In this work, C_target is the target model size of the network, and |·| denotes the length of vectors. Proxy of optimization objective: Let g_w≜∇ℒ(w) and H_w≜∇^2 ℒ(w) be the gradient and the Hessian, respectively. 
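Before turning to this proxy, the quantizer Q and the quantization error Δ w^(l) introduced above can be sketched in a few lines of PyTorch; for brevity, the scale factor below is a per-tensor max-abs scale rather than the MSE-minimizing scale used in our experiments.
```python
import torch

# Minimal sketch of the uniform symmetric quantizer Q(w, b) and the per-layer
# quantization error Delta w. The simple max-abs scale is an assumption made for
# brevity; the experiments tune the scale by MSE search instead.
def quantize(w: torch.Tensor, b: int) -> torch.Tensor:
    qmin, qmax = -2 ** (b - 1), 2 ** (b - 1) - 1
    s = w.abs().max() / qmax                      # per-tensor scale factor
    return torch.clamp(torch.round(w / s), qmin, qmax) * s

def quant_error(w: torch.Tensor, b: int) -> torch.Tensor:
    """Delta w^(l) = Q(w^(l), b) - w^(l) for a flattened weight tensor."""
    return quantize(w, b) - w

w = torch.randn(4096)                             # stand-in for a flattened w^(l)
for b in (2, 4, 8):
    print(f"b = {b}: ||Delta w||^2 = {quant_error(w, b).pow(2).sum().item():.4f}")
```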
For computational tractability, CLADO approximates the objective via the second-order Taylor expansion: ℒ(w+Δ w) =1/N∑_n=1^N ℓ^(n)(w+Δ w) ≈ℒ(w)+g_w^T Δ w+1/2Δ w^T H_w Δ w . For a well-trained model, it is reasonable to assume that training has converged to a local minimum, and thus, the gradient g_w≈ 0, resulting in: ℒ(w+Δ w) ≈ℒ(w)+1/2Δ w^T H_w Δ w . In essence, MPQ seeks to minimize ℒ(w+ Δ w) as a function of Δ w, treating ℒ(w) as a constant; thus, minimizing ℒ(w+Δ w) is equivalent to minimizing 1/2Δ w^T H_w Δ w. Here we use Δ w^T H_w Δ w as a proxy of the objective in (<ref>). §.§ Methods The core of CLADO is a way to capture the interactions of error between layers that are ignored in prior work. We note that BRECQ <cit.> proposed a partial solution that addresses cross-layer dependencies for layers within a local block. However, BRECQ cannot be scaled to address cross-layer dependencies for all layers, because that will reduce it to an exhaustive search over |𝔹|^L sensitivities, which is not computationally feasible. Importantly, we show that ignoring inter-block dependencies is sub-optimal in case of MPQ. CLADO presents the first sensitivity-based solution that addresses cross-layer dependencies for all layers. Layer-specific and cross-layer sensitivities: Let H_ijs be the partition of the Hessian at layer-level granularity. Specifically, let H_ii≜∂^2ℒ(w)/∂w^(i) ^2 be the Hessian of layer i and H_ij≜∂^2ℒ(w)/∂ w^(i)∂ w^(j) be the cross-layer Hessian between layers i and j. Unless otherwise stated, we use i,j to denote the indices of layers and they take values from {0,1,..,L-1}. Then, Δ w^T H_w Δ w = ∑_iΔ w^(i)TH_iiΔ w^(i) + ∑_i≠ jΔ w^(i)TH_ijΔ w^(j) (<ref>) decomposes the objective into two terms. The first term captures the contribution to loss increase from individual layers due to the intra-layer effects resulted from quantization. We refer to them as layer-specific sensitivities. The second part contains the contributions from effects of pairwise layer interaction as a consequence of two different layers being jointly quantized. We refer to them as the cross-layer sensitivties. We use Ω_i,j(·, ·) as a unified notation for both types of sensitivities. Ω_i,j(Δ w^(i),Δ w^(j)) ≜ Δ w^(i)TH_ijΔ w^(j) We rewrite (<ref>) in terms of Ω as follows; minimizing (<ref>) is equivalent to minimizing: Ω=∑_i Ω_i,i(Δ w^(i),Δ w^(i)) +∑_i≠ jΩ_i,j(Δ w^(i),Δ w^(j)) Note that this re-definition obviates direct reference to the Hessian. We will show below that Ω_i,j can be evaluated without computing the Hessian. Unlike prior work <cit.> that ignores the cross-layer interaction terms, here we optimize (<ref>) in its full form: [ min _{b^(l)}Ω; s.t. Δ w^(l)=Q(w^(l), b^(l))-w^(l); ∑_l|w^(l)| · b^(l)≤ C_target; b^(l)∈𝔹; l ∈{0, …, L-1} ] IQP formulation: Next we show that (<ref>) can be formulated as an Integer Quadratic Program (IQP) that solves for the bit-width assignment decisions. Since each layer l has |𝔹| candidate bit-width choices:{b_1,b_2,..,b_|𝔹|}, Δ w^(l) can take |𝔹| values: Δ w^(l)∈{Δ w^(l)_1,Δ w^(l)_2,..,Δ w^(l)_|𝔹|}. Here, we use Δ w^(l)_m ≜ Q(w^(l),b_m)-w^(l) with m∈{1,2,..,|𝔹|} to denote the quantization error on w^(l), when quantizing it to b_m bits. For each layer l, we introduce a one-hot variable α^(l)∈{0,1}^|𝔹|s.t.∑_m=1^|𝔹|α_m^(l)=1 to represent this layer's bit-width decision. The single entry of 1 in α^(l), e.g. α^(l)_m, indicates the selection of a corresponding Δ w^(l)_m and the chosen bit-width b_m. For compact notation, let α∈ℝ^|𝔹|L be the concatenation of α^(l)s. 
Specifically, α=(α^(0),α^(1),..α^(L-1)) Δ w^(i) = α^(i)_1 Δ w^(i)_1 + α^(i)_2 Δ w^(i)_2 + ... + α^(i)_|𝔹|Δ w^(i)_|𝔹| Δ w^(i)_m = Q(w^(i),b_m)-w^(i), m∈{1,2,...,|𝔹|} α^(i)_1,α^(i)_2,...,α^(i)_|𝔹|∈{0,1} α^(i)_1 + α^(i)_2 + ... + α^(i)_|𝔹| = 1 In addition, we gather all Ω_i,js into a matrix Ĝ∈ℝ^|𝔹|L × |𝔹|L, to which we refer to as the sensitivity matrix. Specifically, for m,n∈{1,2,..,|𝔹|}: Ĝ_|𝔹|i+m,|𝔹|j+n≜Ω_i,j(Δ w^(i)_m,Δ w^(j)_n) Expand the Δ w^(i), Δ w^(j) terms in (<ref>) by (<ref>) and bring in the definition of Ĝ in (<ref>): Δ w^T H_w Δ w = ∑_i, jΔ w^(i)TH_ijΔ w^(j) = ∑_i, j (∑_mα^(i)_mΔ w^(i)T_m) H_ij (∑_nα^(j)_nΔ w^(j)_n) = ∑_i, j∑_m∑_nα^(i)_mα^(j)_nΔ w^(i)T_m H_ijΔ w^(j)_n = ∑_i, j∑_m∑_nα^(i)_mα^(j)_nΩ_i,j(Δ w^(i)_m,Δ w^(j)_n) = ∑_i, j∑_m∑_nα_|𝔹|i+mĜ_|𝔹|i+m,|𝔹|j+nα_|𝔹|j+n = ∑_p=1^|𝔹|L∑_q=1^|𝔹|Lα_p Ĝ_p,qα_q = α^T Ĝα , We have in (<ref>) a rewritten objective Ω = α^T Ĝα, in the form of a quadratic function in decision variables α. Since Ĝ is a constant matrix composed of sensitivities, once these sensitivities are measured, the original MPQ problem can be reformulated as the following Integer Quadratic Program (IQP): [ min _{α^(l)}α^T Ĝα; s.t. α=(α^(1),α^(2),..α^(L)); α^(l)_1,α^(l)_2,...,α^(l)_|𝔹|∈{0,1}; α^(l)_1 + α^(l)_2 + ... + α^(l)_|𝔹| = 1; ∑_l∑_mα^(l)_m|w^(l)| · b_m ≤ C_target; l ∈{0, …, L-1} , m∈{1,…,|𝔹|} ] Backpropagation-free sensitivity measurement: We now present an efficient way to compute Ĝ through only forward evaluations of DNNs. For any pair of layers i,j and their corresponding bit-width choices m,n, computing Ω_i(Δ w^(i)_m), Ω_j(Δ w^(j)_n), and Ω_i,j(Δ w^(i)_m,Δ w^(j)_n) in a naïve manner requires computing the Hessians. Prior work, <cit.>, computes Ω_i(Δ w^(i)_m) and Ω_i(Δ w^(j)_n) through a bottom-up approach (again, it ignores Ω_i,j(Δ w^(i)_m,Δ w^(j)_n)) by first approximating the Hessians and then the products. However, we point out that there is no need to compute Ω explicitly under the assumption that the Taylor expansion (<ref>) is a reasonable approximation of network loss, because Ω_i,js can be computed from the change in loss due to perturbation of individual layers' parameters as follows: Ω_i,i(Δ w^(i)_m,Δ w^(i)_m) ≈ 2 (ℒ(w+Δ w^(i)_m)-ℒ(w)) Ω_j,j(Δ w^(j)_n,Δ w^(j)_n) ≈ 2 (ℒ(w+Δ w^(j)_n)-ℒ(w)) Ω_i,i(Δ w^(i)_m,Δ w^(i)_m) + Ω_j,j(Δ w^(j)_n,Δ w^(j)_n) +2Ω_i,j(Δ w^(i)_m,Δ w^(j)_n) ≈ 2 (ℒ(w+Δ w^(i)_m+Δ w^(j)_n)-ℒ(w)) . The right sides of the three equations in (<ref>) are computed by taking the difference in the loss of the quantized model from that of the full-precision model. Subtracting the first two equations from the third equation, we get: Ω_i,j(Δ w^(i)_m,Δ w^(j)_n) ≈ℒ(w+Δ w^(i)_m+Δ w^(j)_n)+ℒ(w) - ℒ(w+Δ w^(i)_m) - ℒ(w+Δ w^(j)_n) . Importantly, (<ref>) and (<ref>) compute all sensitivities by only forward evaluations of DNNs on the small sensitivity set. Positive semi-definite approximation: Instead of directly using the matrix Ĝ, which is computed on the small sensitivity set, we first approximate it with a positive semi-definite (PSD) matrix. This is because, theoretically, G is PSD if it is computed on the whole training set and the training has fully converged. However, since Ĝ is estimated by the sensitivity set that only contains a small number of training samples, the measurement error can make Ĝ indefinite. Practically, we find that the PSD approximation is critical to producing meaningful MPQ solutions, as it guarantees convexity of the IQP objective, making it much easier and faster to reach a good solution. 
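A minimal sketch of this forward-only measurement is given below; the helper names are illustrative, and the loss is assumed to be evaluated on the small sensitivity set.
```python
import torch

# Hedged sketch of the backpropagation-free sensitivity measurement above.
# loss_fn(model) is assumed to evaluate the task loss on the sensitivity set; each
# perturbation is a (weight tensor, Delta w) pair for one layer. Names are illustrative.

@torch.no_grad()
def loss_with(model, loss_fn, perturbations):
    """Evaluate L(w + sum of the given per-layer perturbations), then restore weights."""
    backups = [(w, w.detach().clone()) for w, _ in perturbations]
    for w, dw in perturbations:
        w.add_(dw)
    loss = loss_fn(model)
    for w, original in backups:
        w.copy_(original)
    return loss

@torch.no_grad()
def self_sensitivity(model, loss_fn, base_loss, wi, dwi):
    """Omega_ii ~ 2 * (L(w + dwi) - L(w))."""
    return 2.0 * (loss_with(model, loss_fn, [(wi, dwi)]) - base_loss)

@torch.no_grad()
def cross_sensitivity(model, loss_fn, base_loss, wi, dwi, wj, dwj):
    """Omega_ij ~ L(w + dwi + dwj) + L(w) - L(w + dwi) - L(w + dwj)."""
    l_ij = loss_with(model, loss_fn, [(wi, dwi), (wj, dwj)])
    return (l_ij + base_loss
            - loss_with(model, loss_fn, [(wi, dwi)])
            - loss_with(model, loss_fn, [(wj, dwj)]))
```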
We provide a sketch of proof to show that G is PSD. It uses proof by contradiction and starts by assuming that G is not PSD. If G is not PSD, then there exists a v∈ℝ^|𝔹|L such that v^TGv<0. Consider the following perturbation u on weight: u = (u_0,…,u_L-1) with u_i = ∑_m=1^|𝔹|v_i|𝔹|+mΔ w^(i)_m By using our definition of G in (<ref>), one can show that u^T H_w u=v^T G v. We start by expanding u^T H_w u in layer granularity (i,j denote indices of layers and take values from {0,…,L-1}; m,n denote indices of bit-width options and take values from {1,…,|𝔹|}). In the derivation, (a) uses the definition of sensitivity (Equation 4.6); (b) uses the definition of sensitivity matrix (Equation 4.10); (c) is by introducing new variables p,q with p=|𝔹|i+m and q=|𝔹|j+n. u^T H_w u = ∑_i,j u_i^T H_ij u_j = ∑_i,j(∑_mΔ w^(i)T_m) H_ij(∑_nΔ w^(j)_n) = ∑_i,j∑_m,n v_i|𝔹|+m(Δ w^(i)T_m H_ijΔ w^(j)_n) v_j|𝔹|+n (a)= ∑_i,j∑_m,n v_i|𝔹|+mΩ_i,j(Δ w_m^(i),Δ w_n^(j)) v_j|𝔹|+n (b)= ∑_i,j∑_m,n v_i|𝔹|+m G_|𝔹|i+m,|𝔹|j+n v_j|𝔹|+n (c)= ∑_p=1^|𝔹|L∑_q=1^|𝔹|Lv_p G_p,q v_q = v^T G v Because of the equivalence between v^T G v and u^T H_w u, u^T H_w u is negative according to the assumption that G is not PSD. In consequence, by (<ref>), u forms a descending direction of loss and contradicts the fact that the network is trained to local optimal. Therefore, the assumption does not hold and G must be PSD. To apply PSD approximation on Ĝ, one only needs to compute its eigen decomposition and replace the negative eigenvalues with zeros. We present in Algorithm <ref> the complete CLADO algorithm. CLADO sensitivity computation requires 1/2|𝔹|L(|𝔹|L+1) measurements of the network on a small set of samples. Assuming measurements of DNN outputs take constant time complexity, CLADO has quadratic time complexity (O(|𝔹|^2L^2)). § EXPERIMENTAL RESULTS: CONVOLUTIONAL NEURAL NETWORKS §.§ Experimental Settings Dataset and models: We conduct experiments on the ImageNet dataset <cit.>. To compare CLADO to existing work, we test two state-of-the-art algorithms: MPQCO <cit.> and HAWQ-V3 <cit.>. We evaluate with four computer vision models: ResNet-34/50 <cit.>, RegNet-3.2GF <cit.>, and MobileNetV3 <cit.>. For all models excluding MobileNetV3, we quantize activations to 8 bits and consider three candidate bit-widths for weights: 𝔹={2,4,8}. MobileNetV3, because of its compact architecture and high parameter efficiency, suffers higher degradation than other models at a constant compression ratio. For this reason, we consider more conservative quantization bit-width candidates with 𝔹={4,6,8} Implementation details: All algorithms are implemented in PyTorch <cit.>. The publicly available package CVXPY <cit.> is used to solve the IQP problem in CLADO and we select GUROBI <cit.> as its backend. For fair comparisons, all algorithms are run using the same procedures and settings. We download PyTorch pre-trained full-precision models. MQBench <cit.> is applied for model quantization. Following prior work, quantization scale factors are determined by minimization of the MSE between the values and their quantized values. This is achieved by setting the “weight observer” in MQBench to be . Implementation Details: Our experiments are mainly conducted on a server equipped with a single Intel i7-9700 CPU, a single Nivida RTX2080 GPU, and 32GB DDR4 RAM. Use of multiple sensitivity sets: To study the dependence of algorithms on the sensitivity set and to reduce the variation in results, we use multiple sensitivity sets. 
Specifically, for each sample size ranging from 256 to 4096, we randomly construct 24 sets with the given size. Then, we test the performance of algorithms on each of the 24 sets. §.§ Code Implementations We provide our code implementation of CLADO. We also include implementations of MPQCO and HAWQV3. Codes are zipped into a single file named . Structure of codes: Once unzipped, all codes including three Python files and one bash script are placed under the folder. computes the sensitivities for MPQCO and CLADO. computes the Hessian traces, which are used later by to compute the sensitivities of HAWQ. takes the pre-computed sensitivities (traces for HAWQ) and solves the corresponding IQP(for CLADO)/ILP(for HAWQ and MPQCO) problems to get the MPQ decisions. It then evaluates the decisions and report quantized models' performance. Dependencies: To run the codes, a CUDA environment with , , , and (with backend) packages is required. We recommend to use , , , , and - to avoid any compatibility issues. MPQ sample runs: is an example script to run the experiments. It launches a quick run of three algorithms on the ResNet-34 model using a randomly sampled 64-sample sensitivity set. A datapath to the ImageNet dataset is required, it needs to be specified by user through the variable in the script. Once finished, one can check for results. By default, we use for batch size. The and define the indices of the starting and ending (inclusive) batches to be included in the sensitivity set (e.g., with the default batch size, one can specify , to include 1024 samples in the sensitivity set). specifies the model name of pretrained models by , one can change it to other valid names (e.g., “resnet50”) listed in <https://pytorch.org/vision/stable/models.html>. For other program arguments and hyperparameters, (e.g., CUDA device, number of threads, sampling of sensitivity samples, model size calculations and constraints), please see the codes for more details. Note that HAWQ uses Hutchinson's method (via ) to compute the Hessian traces, which is demanding of GPU memory. Therefore, when experimenting with deep models (e.g., ResNet-50), one may use a smaller batch size than 64 to avoid the OOM error. Quantization settings: The quantization of models are handled by the package. The details of 's quantizer settings can be found in (line 154-181). One can modify the settings there to test other settings of interest by specifying different hyperparameters including quantization granularity (per channel or per tensor), algorithms for quantization scale factors (MSE or Min-Max), formats of scale factors (power of two or continuous), etc. Cached sensitivities: The sensitivities of algorithms are computed by s and are stored so that they can be reused for MPQ optimization with different constraints or additional analysis (e.g., for visualization of cross-layer interaction effects in Figures <ref> and <ref>). Specifically, sensitivities of CLADO are stored under folders named “Ltilde_xxx”; sensitivities of MPQCO are stored under folders named “DELTAL_xxx”; sensitivities and Hessian traces of HAWQ are stored under folders named “HAWQ_DELTAL_xxx”. The sensitivities are stored on a per-batch basis, with each representing the estimated sensitivities on a batch of training data (with batch size specified by ). Note that these sensitivities (for all three algorithms) are additive across batches. 
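For concreteness, the optimization step can be sketched as follows, assuming the sensitivity matrix Ĝ, the per-layer parameter counts and the candidate bit-widths are already available; variable names are illustrative and a GUROBI (or any other MIQP-capable) backend is assumed.
```python
import cvxpy as cp
import numpy as np

# Hedged sketch of the MPQ step: PSD-project the measured sensitivity matrix and
# solve the IQP with CVXPY, as in the settings above. G_hat, layer_sizes (number of
# weights per layer) and bitwidths are assumed to come from the measurement stage;
# the size budget is given in bits. Names are illustrative.

def psd_project(G_hat):
    """Symmetrise G_hat and replace its negative eigenvalues with zeros."""
    G_sym = 0.5 * (G_hat + G_hat.T)
    vals, vecs = np.linalg.eigh(G_sym)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

def solve_mpq(G_hat, layer_sizes, bitwidths, size_budget_bits):
    L, B = len(layer_sizes), len(bitwidths)
    G = psd_project(G_hat)                                    # shape (B*L, B*L)
    alpha = cp.Variable(B * L, boolean=True)                  # one-hot bit-width selectors
    one_hot = [cp.sum(alpha[l * B:(l + 1) * B]) == 1 for l in range(L)]
    bit_cost = np.array([layer_sizes[l] * bitwidths[m] for l in range(L) for m in range(B)])
    budget = cp.sum(cp.multiply(alpha, bit_cost)) <= size_budget_bits
    # psd_wrap skips the numerical PSD check; G is PSD by construction after projection.
    objective = cp.Minimize(cp.quad_form(alpha, cp.psd_wrap(G)))
    cp.Problem(objective, one_hot + [budget]).solve(solver=cp.GUROBI)
    return alpha.value.reshape(L, B).argmax(axis=1)           # chosen bit-width index per layer
```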
§.§ Comparison with Other MPQ Algorithms Runtime: CLADO requires 1/2|𝔹|L(|𝔹|L+1) measurements of the network output on a small set of samples. On a single Nvidia RTX2080 GPU, our implementation takes 1 hour to generate the sensitivities for the ResNet-34 model and 2.5 hours to generate the sensitivities for the ResNet-50 model. HAWQ takes roughly the same number of GPU-hours as CLADO to perform the Hessian trace computation. MPQCO is the fastest one taking 5-10 minutes to compute the sensitivities. Further, the computation of sensitivity for CLADO can be parallelized by the use of multiple GPUs to further reduce the time (e.g., the use of 10 GPUs reduces the time by 10×). Search-based methods are several orders of magnitudes slower. They usually takes hundreds to thousands of GPU-hours to finish a search. Further, the search process cannot be easily parralelized because of the dependencies between the early and later search steps. Average performance: Results of sensitivity-based algorithms depend on the size of the sensitivity set and show some variation. Figure <ref> presents the averaged results of algorithms. In terms of average performance, CLADO delivers the best compressed models under most constraints. Its advantage over prior work is especially prominent when models are aggressively compressed to low precisions. Under the tightest size constraints tested, it achieves 9% / 6% higher top-1 accuracy than MPQCO/HAWQ on ResNet-34, 15% / 7.5% higher top-1 accuracy than MPQCO/HAWQ on ResNet-50, and 12.5% / 8.5% higher top-1 accuracy than MPQCO/HAWQ on RegNet-3.2GF. Under moderate compression requirements, e.g., 12-14MB ResNet models and 8- 11MB RegNet models, performance drop of all algorithms is much smaller and hence smaller differences between algorithms. However, CLADO is still able to achieve, over the alternatives, 1-4% higher top-1 accuracy on ResNet models, and around 1% higher top-1 accuracy on RegNet-3.2GF. Under even higher model size constraints, all algorithms tend to be the same with the biggest difference in top-1 accuracy below 0.25%. Size of the sensitivity set: Figure <ref> shows the median, the upper quartile (75%), and the lower quartile (25%) of algorithms' performance on the randomly sampled sensitivity sets. CLADO produces reasonably stable results. Though it may have slightly higher variation than the alternatives, its lower quartile performance is almost always better than their upper quartile performance. In addition, we found that CLADO keeps improving with larger sensitivity sets while MPQCO and HAWQ do not. Figure <ref> shows, for RegNet, the change in algorithms' average performance when using different sensitivity set sizes. When increasing the number of samples from 1024 to 4096, the performance of HAWQ and MPQCO either does not change at all or changes negligibly (≈ 0.1%). However, CLADO's performance improves by 0.58%, on average. Analysis of bit-width assignments: Figures <ref> and <ref> visualize the MPQ decisions made by the three algorithms on ResNet-34 and ResNet-50, respectively. The other details and visualizations, including layer sensitivities measured by different methods and the proportion of parameters in each layer of the models, are included in Appendix. All algorithms tend to assign more bits to shallow layers. This aligns with the observation that shallow layers have higher sensitivities and fewer parameters. 
In terms of sensitivity measurements, all methods agree that certain layers are very sensitive, e.g., the first down-sampling layer in the network. However, they can also differ significantly in predicting sensitivities of other layers. E.g., on both models, MPQCO predicts higher sensitivities for deep layers than other methods. For ResNet-50, HAWQ predicts much higher sensitivity for the first convolutional layer than other methods. The sensitivities measured by CLADO and the these measured by the KL-divergence appear closest in the shape of the distributions. For all three methods, increasing the model size leads to monotonic increment of bit-width for each layer. Generally, layers with bigger sensitivities and smaller sizes tend to get larger bit-widths with higher priority than others. E.g., for CLADO and ResNet-34, from 8.87MB to 10.13MB (Figures <ref> and <ref>), there is a 6-bit bit-width increment to layer3.1.conv2 because this layer has high sensitivity and small size. In contrast, there is no bit-width increment for layer4.0.conv1 and layer4.0.conv2 because these layers are less sensitive and have more parameters. With the same change in model size, the three methods are very different in the change of their bit-width decisions. E.g., for ResNet-34 (Figure <ref>), from 8.87MB to 10.13MB, CLADO increases the bit-widths for 12 layers but HAWQ only increases the bit-widths for 4 layers; from 10.13MB to 12.16MB, the 6 layers whose bit-widths are increased by CLADO are completely different from the 8 layers whose bit-widths are increased by MPQCO. §.§ MPQ Performance after Quantization-Aware Finetuning So far, we studied MPQ in the post-training quantization (PTQ) settings, in which no further training is performed and only a small fraction of data is given (usually ≤ 1% of the training data). However, if the whole training data and the access to the training pipeline are available, quantization-aware training/fine-tuning can be further applied. Such QAT tends to significantly reduce performance degradation. To examine if CLADO's bit-width assignments remain superior to others after fine-tuning, we conducted experiments of QAT fine-tunings applied on the mixed-precision quantized models produced by the three algorithms. We employ MQBench, the same quantization tool utilized in our previous experiments, to carry out the QAT experiments. We use batch size 32, SGD with learning rate 10^-4 and the momentum of 0.9. We found that CLADO's bit-width assignments have faster QAT convergence at smaller losses than others, Figure <ref>. Most importantly, they maintain higher test performance than others after QAT, Figure <ref>. In the figure, PTQ results are not included because their accuracy is very low (<45%) and they make the visualization of QAT difference difficult. Because QAT significantly reduces degradation, the difference between algorithms' after-QAT performance is much smaller: for 8.87MB ResNet-34, CLADO, HAWQ, MPQCO achieve 44.40%, 9.96%, 0.51% accuracy before QAT but after QAT, they achieve 72.34%, 72.00%, and 72.08% accuracy, respectively. Although the after-QAT difference is small, CLADO maintains meaningful advantage: for ResNet-34 with 8.87MB size constraint and ResNet-50 with 8.94MB, 9.22MB, and 9.50MB constraints, CLADO achieves ≤ 1% accuracy degradation but HAWQv3 and MPQCO do not. §.§ Ablation Studies We conduct ablation studies to justify the effectiveness and necessity of our method's components. 
Cross-Layer dependencies: To investigate the effectiveness of cross-layer dependencies, we test the performance of CLADO assuming zero cross-layer sensitivities. Specifically, we set Ĝ_i,j=0 for i≠ j and keep the rest of CLADO unchanged. We refer to this set of experiments as “Cross Removed”. Its results are captured by the orange curves in Figures <ref> and <ref>. Comparing it with CLADO (blue curves), cross-layer dependencies improve MPQ performance and reduce algorithm's variation, e.g., they lead to 7.5% and 6% increase in top-1 accuracy for 10.1MB ResNet-34 and 10MB ResNet-50, respectively. We also conducted experiments in which cross-layer dependencies are only considered within blocks of layers, similar to BRECQ. In this set of experiments, we intentionally leave out the inter-block dependencies and keep the rest of CLADO unchanged. We refer to this group of experiments as “Block Interactions”. The results are captured by the black curves in Figure <ref>. We found that leaving out inter-block dependencies worsens MPQ compared to the CLADO (Full Interactions). PSD approximation of sensitivity matrices: To investigate the effectiveness of PSD approximation, we disable the use of PSD approximation in CLADO, keeping the rest of CLADO unchanged. We found that, without the PSD approximation, it takes CVXPY indefinitely long to solve the IQP problem and reach only suboptimal solutions. With PSD approximation, the solver is able to compute the optimal solutions within seconds. This observation aligns with our expectation that using a convex objective Ω makes the IQP easy and efficient to solve. Its results are captured by the purple curves in Figures <ref> and <ref>, we found that, though PSD does not always produce best results, it improves consistency in results and, in some cases, it prevents severe degradation in the quality of solutions, e.g., PSD improves the median top-1 accuracy, which is below 50%, to over 70% for ResNet-50 under size constraints 13.4MB and 17.9MB. § EXPERIMENTAL RESULTS: TRANSFORMER-BASED MODELS Transformer-based models rely on attention mechanisms, which is powerful in processing long sequences of data tasks while addressing the limitations of Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Recently, several transformer-based models <cit.> have shown capability in advancing state-of-the-art performance in domains of NLP and CV. We choose two representative ones, ViT <cit.> for CV and BERT <cit.> for NLP, to verify the effectiveness of CLADO. BERT (Bidirectional Encoder Representations from Transformers uses) <cit.> uses only the transformer encoder and learns deep bidirectional representations by jointly conditioning on both left and right context in all layers. This allows the model to pre-train on a large corpus of text and fine-tune for specific tasks, significantly improving the state-of-the-art for many NLP tasks. ViT (Vision Transformer) <cit.> applies the transformer architecture to image recognition tasks. It treats an image as a sequence of patches and applies self-attention and position embeddings on them, demonstrating that transformers can be used effectively for computer vision tasks. The pre-trained BERT base model is from HuggingFace library <cit.>. We test it on the SQuAD dataset <cit.>. The pre-trained ViT is from PyTorch and we test it on the ImageNet dataset. Figures <ref> and <ref> show the size/performance trade-offs for BERT and ViT, respectively. 
We found that while performance of other methods are task-dependent, CLADO always gives the best trade-offs. For example, on SQuAD, HAWQ achieves ≤ 2% F1 score degradation when compressing the BERT model to 40MB while the degradation of MPQCO-quantized model is more than 10%; however, on ImageNet, MPQCO-quantized ViT is much better than HAWQ-quantized ViT: the performance gap, in terms of the top-1 accuracy, is more than 15% at model size 34MB. Though the improvements of CLADO over the best alternatives are small under large model sizes (40-80MB BERT,31-40MB ViT), they can be meaningful: on SQuAD, there is around 0.5% F1 score improvement for BERT with size 42.5-52.5MB; on ImageNet, there is 0.56% and 0.73% Top-1 accuracy improvement for ViT with size 31.44MB and 34.10MB, respectively. Under more restrictive settings, advantage of CLADO becomes bigger: for BERT with size constraint 37MB, CLADO achieves a F1 score of 82 while scores of HAWQ and MPQCO drop below 70; for ViT with size constraints 29.8MB and 28.2MB, CLADO achieves 5.64% higher and 21.82% higher accuracy than the best alternative. By comparing the orange curves (CLADO with cross-layer terms set to 0) to others, we make the following observations. First, cross-layer interactions are helpful, but the degree of their helpfulness depends on the dataset and the model. In the experiments, improvements by considering cross-layer interactions are indicated by the performance lift from the orange curves to the blue curves (CLADO. Cross-layer interactions demonstrate greater improvements to the ViT model than to the BERT model: on ViT, they generally lead to ≥ 5% improvement in Top-1 accuracy; on BERT, they translates into only ∼ 0.5% F1 score improvement. Second, CLADO without cross-layer terms may achieve state-of-the-art performance in some tasks, but not all. For BERT on SQuAD, it is comparable to HAWQ: it is slightly worse than HAWQ (by ≤ 0.5% Top-1 accuracy) under model sizes ≥ 50MB but is better than HAWQ (by ≥ 10%) under model size <40MB. However, for ViT on ImageNet, it is noticeably worse than MPQCO. Moreover, the fact that it is worse than MPQCO on ViT reduces the advantage of CLADO over MPQCO because the improvement due to cross-layer information is mitigated by a weaker baseline. As side notes to the above observations, there are two open questions that remain confusing and we leave for future research. They are, (1) why the degree of cross-layer interactions' usefulness is task-dependent and (2) why the superiority of CLADO without cross-layer interactions over other methods are task-dependent. § CONCLUSION In this chapter, we present CLADO, the sensitivity-based mixed-precision quantization algorithm that exploits cross-layer quantization error dependencies. We use the second-order Taylor expansion on quantization loss to derive a proxy of optimization objective. While prior work ignores the cross-layer interaction terms, we propose to optimize the proxy in its full form and thereby reformulate the mixed-precision quantization as an integer quadratic programming problem that can be solved within seconds. To compute the cross-layer dependencies, we propose an efficient approach based on the Taylor expansion, which requires only forward evaluations of DNNs on the small sensitivity set. Extensive experiments are conducted to demonstrate the effectiveness of the proposed method over the uniform precision quantization and other mixed-precision quantization approaches. 
§ APPENDIX: LAYER SENSITIVITIES OF RESNET MODELS MEASURED BY DIFFERENT METHODS
§ APPENDIX: PROPORTION OF PARAMETERS FOR EACH LAYER IN RESNET MODELS
http://arxiv.org/abs/2307.04458v1
20230710101221
Analyzing the Evolution of Inter-package Dependencies in Operating Systems: A Case Study of Ubuntu
[ "Victor Prokhorenko", "Chadni Islam", "Muhammad Ali Babar" ]
cs.SE
[ "cs.SE" ]
V. Prokhorenko et al. CREST - The Centre for Research on Engineering Software Technologies, the University of Adelaide, Australia victor.prokhorenko, [email protected] Cyber Security Cooperative Research Centre (CSCRC), Australia Queensland University of Technology, Brisbane, Australia [email protected] Analyzing the Evolution of Inter-package Dependencies in Operating Systems: A Case Study of Ubuntu Victor Prokhorenko1,2 Chadni Islam3 Muhammad Ali Babar1,2 August 12, 2023 ================================================================================================== An Operating System (OS) combines multiple interdependent software packages, which usually have their own independently developed architectures. When a multitude of independent packages are placed together in an OS, an implicit inter-package architecture is formed. For an evolutionary effort, designers/developers of OS can greatly benefit from fully understanding the system-wide dependency focused on individual files, specifically executable files, and dynamically loadable libraries. We propose a framework, DepEx, aimed at discovering the detailed package relations at the level of individual binary files and their associated evolutionary changes. We demonstrate the utility of DepEx by systematically investigating the evolution of a large-scale Open Source OS, Ubuntu. DepEx enabled us to systematically acquire and analyze the dependencies in different versions of Ubuntu released between 2005 (5.04) to 2023 (23.04). Our analysis revealed various evolutionary trends in package management and their implications based on the analysis of the 84 consecutive versions available for download (these include beta versions). This study has enabled us to assert that DepEx can provide researchers and practitioners with a better understanding of the implicit software dependencies in order to improve the stability, performance, and functionality of their software as well as to reduce the risk of issues arising during maintenance, updating, or migration. This work is accepted for publication in The 17th European Conference on Software Architecture (ECSA 2023), Istanbul, Turkey. § INTRODUCTION Combining multiple independent software packages together is commonly used to form complex inter-connected ecosystems. A typical example of such large software ecosystems is various Linux distributions. Such ecosystems tend to consist of hundreds or thousands of packages, libraries, binaries, and configuration files with an order of magnitude more dependencies among them <cit.>, <cit.>. Developers and researchers have expressed interest in software complexity measurement in an attempt to reason about characteristics of large code bases <cit.>. Software complexity is viewed as a result of different design decisions and implementation specifics and is a crucial component of long-term effects like the maintainability of software <cit.>. Although software complexity is a crucial consideration for package managers, Linux distributors, and maintainers, we currently have limited knowledge about the evolution of this complexity over the software lifespan. While the complexity of individual packages is tamed by their corresponding developers, combining thousands of packages materializes a new emergent layer of complexity. It is also uncertain whether different metrics for measuring software complexity exhibit similar or varying patterns of evolution. A significant amount of research has extensively explored source-level software complexity <cit.>. 
As a result, various complexity metrics have been defined, such as cyclomatic, branching, or data flow complexity <cit.>. These metrics are primarily used for software design, debugging, and optimization purposes <cit.>. These metrics are, however, not applicable when analyzing closed-source software distributed only in binary form without access to the source code. In such cases, binary dependency analysis is required to understand the interactions and dependencies between compiled binary executables. Additionally, even when source code is available, there may be situations where the compiled binary may behave differently from what is expected due to specific environment configurations. Thus, binary dependency analysis can provide a more accurate and complete understanding of run-time software behavior, which can be crucial for identifying potential issues or vulnerabilities. This work considers an OS as a whole rather than focusing on analyzing individual software binaries. Considering an OS enables the identification of cross-application relations, which make up an emergent inter-package relation architecture instead of just the intra-package software complexity. We propose a framework that enables the extraction of binary-to-library dependencies and constructs a full OS dependency graph to obtain insights on overall OS complexity which we determine through inter-package dependency coupling. By coupling we mean any type of dependency of one code fragment on another (library inclusion, function call, etc). Our study focused on Ubuntu as a case study to examine the evolution of large software ecosystems over almost two decades. Through empirical research and evidence-based findings, we aimed to assess the current state of package, library, and binary dependencies and identify areas for improvement in management tools, policies, and ecosystem analysis platforms. We believe that a deep understanding of emergent inter-package architecture resulting from combining a multitude of independently developed software subsystems would benefit software developers and OS maintainers. The proposed techniques and tools are expected to minimize manual labor associated with multi-package maintenance. Following are the key contributions of our work * We have introduced a framework for dependency coupling analysis for multi-package software to extract the inter-package relations architecture that is applicable to a broader range of OS due to the binary-level analysis. * We have defined four techniques to quantitatively measure software coupling in terms of executable and dynamically loadable library dependencies at different granularities. * We have investigated the evolution of Ubuntu OS in terms of the proposed library presence dependency type, which revealed the changes in OS-wide inter-package relations over time. § BACKGROUND AND MOTIVATION §.§ Software Complexity Throughout the lifetime of any software system, various code modifications must be implemented in order to adapt to ever-changing user requirements and environmental conditions. An intuitive expectation is that large and complex software systems may be more difficult to update and maintain. Thus, in efforts to gain a stricter definition of complexity, multiple code complexity measurement techniques, such as straightforward line count or cyclomatic complexity, have been proposed so far <cit.>. 
However, analyzing multiple diverse software systems as a whole is not trivial due to (i) lack of access to the source code of all third-party components, (ii) lack of formal interoperability specification and (iii) highly dynamic state of execution environment at run time. Several techniques are typically employed to handle the growing complexity of large software systems (such as a full OS). For instance, the system package manager may track package dependency information at the OS level. This tracking enables detecting incompatibilities between separate software subsystems and repairing them if possible. Unfortunately, manual labor is commonly used in determining and maintaining information on such version-level incompatibilities <cit.>. Due to the large number of files in a typical OS, manual efforts typically target only high-level dependency definitions, such as package level only <cit.>. As each package may consist of multiple files containing executable code (i.e., executable binaries and libraries), such package dependency understanding may not represent the dependencies precisely. Further challenges arise due to modern complex software systems commonly developed in various programming languages. For instance, purely-binary compiled languages are intertwined with interpreted script languages leading to execution flow frequently being transferred between them. The dependency chains within such complex systems may propagate through a significant portion of files in the file system through the indirect reliance of different code fragments on each other. A typical example includes PHP web pages relying on the PHP interpreter, web server, and third-party PHP libraries. Such immediately obvious (direct) dependencies, in their turn, recursively rely on other system-provided and third-party libraries. Therefore we argue that automated and precise dependency tracking would benefit software system maintainers and administrators and may provide useful insight to software developers. §.§ Code dependency types One piece of code can depend on another in numerous ways. For instance, within the source code analysis context, a function may call other functions. Similarly, class methods may work by invoking other class methods. These types of dependencies present in the same code base are well understood and routinely used in modern IDEs (Integrated Development Environments) to aid software developers. In contrast, cross-language code dependencies spanning across multiple independently developed software systems are less formal and challenging to identify. For instance, a PHP-oriented IDE would not detect incompatible changes in the library which is required by the PHP interpreter itself. Focusing solely on software running within the same OS while not taking network-based dependencies into consideration, we propose the following four conceptual types of dependencies suitable in the executable code analysis context. These four types include (i) the presence of third-party libraries, (ii) the extent of library coverage, (iii) library function call occurrences, and (iv) the run-time usage of functions (Figure <ref>). The third-party library presence dependency relates to file-level granularity. This type of dependency indicates a requirement for a dynamically loadable library to be present in the system for an executable binary to be able to load and start. 
In Windows-based systems, libraries and executables are denoted by .dll and .exe file extensions, while on Linux-based systems these are .so files and typically extension-less ELF (Executable and Linkable Format) binaries, respectively. While high-level, this file granularity is crucial as a missing library file typically causes the executable file loader to indicate an error and prevents any further file execution. Coverage dependency focuses on the library fragments (e.g., functions or class methods) that a developer explicitly uses or relies on. This type of dependency refers to specific function existence requirements. Thus, the library coverage aspect reflects the degree of reliance on a given library by the executable. Depending on the OS, programming language, and execution environment, individual function-level requirements can be implemented in various ways. For instance, in the context of the Windows PE executable, the list of required functions is tied to a specific library. In contrast, the lists of required libraries and functions are independent in the Linux ELF executable <cit.>. These implementation-specific differences complicate coverage analysis in the general case. The function occurrence dependency type attempts to provide further insight into the code dependency by observing that a single external function can be referred to multiple times in the original code. For instance, some heavily used functions can be mentioned all over the code, while some rarely used functions may only appear once. Extracting this type of dependency is extremely complicated and involves computationally-heavy disassembling of compiled code or parsing of interpreted languages. Initial unoptimized attempts revealed a significant time overhead for extracting such occurrence-level dependencies. While certain optimizations could be applied for production-ready usage, it can be concluded that this type of analysis is currently unsuitable for real-time applications. Lastly, dependency usage refers to the actual run-time external code flow control transfers (i.e., the actual function calls). This level of detail may, for example, reveal that one function call is contained within a high-count loop while other function calls may be part of a condition rarely satisfied at run time. Run-time observation would reveal a deeper understanding of the level of reliance on third-party libraries in both cases. Despite seeming the most accurate and closest to reality, relying on this type of dependency suffers from a major drawback. Different executions or instances of the same executable may exhibit different behavior due to different run-time conditions. In other words, observing a single execution is not guaranteed to reveal all external code usage cases. Note that a purposefully crafted executable may incorporate external dependencies that would not be reflected using the proposed dependency measurement techniques. For instance, if an executable downloads code over the network and executes it in place, no third-party library references, function names, or function calls related to the downloaded code may be present in the original executable. Moreover, the downloaded code can be different on each program invocation, making any dependency analysis futile in such a context. Based on the identified dependency types, we propose an extensible plugin-based framework suitable to extract code dependencies for various types of executable code.
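To make the first two dependency types concrete, the sketch below shows how library-presence and library-coverage information could be read out of a Linux ELF file with the standard binutils objdump tool (via its -p and -T outputs). This is an illustration of the concepts only, not the actual DepEx plugin code; the function names and the deliberately minimal parsing are our own assumptions.

import re
import subprocess

def presence_dependencies(path):
    """File-level (presence) dependencies: the DT_NEEDED libraries
    recorded in the ELF dynamic section, as reported by `objdump -p`."""
    out = subprocess.run(["objdump", "-p", path],
                         capture_output=True, text=True, check=True).stdout
    return re.findall(r"^\s*NEEDED\s+(\S+)", out, flags=re.MULTILINE)

def coverage_dependencies(path):
    """Function-level (coverage) dependencies: undefined dynamic symbols that
    must be resolved by some shared library, as reported by `objdump -T`.
    Unlike Windows PE imports, ELF does not tie these symbols to a particular
    library, which is what complicates coverage analysis."""
    out = subprocess.run(["objdump", "-T", path],
                         capture_output=True, text=True, check=True).stdout
    return [line.split()[-1] for line in out.splitlines() if "*UND*" in line]

if __name__ == "__main__":
    target = "/bin/ls"  # any dynamically linked ELF binary
    print("presence :", presence_dependencies(target))
    print("coverage :", len(coverage_dependencies(target)), "imported symbols")

Run against a typical dynamically linked binary, the first helper returns a handful of DT_NEEDED library names (the presence dependency), while the second returns a much longer list of imported symbols, which is the raw material for coverage analysis.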
§ OUR APPROACH AND IMPLEMENTATION Analyzing the full file system enables a more complete and consistent understanding of the dependencies. Software developers only express a requirement for dynamically loadable library presence, but do not have actual guarantees of the library's existence in a given system. We implement a Python-based proof-of-concept solution to analyze system-wide dependencies. On a conceptual level, our proposed approach for Dependency Extraction (DepEx) consists of a file system scanner, a plugin dispatcher, multiple user-definable file-type-specific plugins, and the resulting database. The following steps provide an overview of the DepEx operation: * The existing dependency extraction plugins (also Python-based) are queried to prepare the list of all supported file types * The specified file system is iterated over and each file of a supported type is passed to a corresponding plugin for dependency extraction * The dependencies extracted by the plugin are stored in an SQLite database Having knowledge of individual file type structures, each plugin is responsible for external dependency detection and extraction. Note that while the current implementation assumes a one-to-one relation between file types and plugins, it is possible for multiple plugins to process the same files to extract different types of dependencies. While we have implemented proof-of-concept plugins for PHP, Bash, and, to a lesser degree, Python scripts, in this research we primarily focus on ELF executables and .so libraries with the library presence dependency. Once the unattended phase of the dependency extraction is complete, several interactive analysis and usage scenarios become accessible. These include visualization, statistical reporting, and forward and reverse update impact estimation. For instance, various system health characteristics, such as "number of missing libraries" or "number of executables with unfulfilled dependencies" can be queried and plotted if necessary. Similarly, update impact calculation enables obtaining the list of executables and libraries that would be potentially affected in case a given library is updated. In order to aid comprehension of the large amounts of data collected, we developed a visualization subsystem. Using the DOT language for graph representation enables rendering the resulting graphs using existing tools as well (such as GraphViz or Gephi). While the individual executable file graphs were readable, the full-system dependency graph was too cluttered for human comprehension. At this stage, interactive filtering was implemented to allow the hiding of popular libraries responsible for most of the visual noise (as shown in Figure <ref>). We are also planning to implement automated filtering based on various features, such as node type, sub-string matching, and popularity. Other auxiliary scripts for dependency graph exploration include querying all binaries and libraries that depend on a given library () and individual binary/library dependency graph generation ( and ). Individual library dependencies can also be visualized in a more detailed view. § STUDYING THE ARCHITECTURAL ASPECTS OF UBUNTU We focus on the following Research Questions (RQs) to investigate the file-level package relation architecture in Ubuntu systems using DepEx. We considered the presence dependency in this case study. We collected and analyzed the dependencies of 84 consecutive live Ubuntu Linux images that span over 18 years of development and evolution.
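As a concrete reference point for the case study that follows, a stripped-down version of the scan/dispatch/store loop described above might look as follows. The plugin registry, table layout, and function names are illustrative assumptions rather than the actual DepEx implementation, and only the ELF presence plugin is sketched.

import os
import re
import sqlite3
import subprocess

def is_elf(path):
    """Cheap file-type probe based on the ELF magic bytes."""
    try:
        with open(path, "rb") as handle:
            return handle.read(4) == b"\x7fELF"
    except OSError:
        return False

def needed_libraries(path):
    """Presence-level dependencies: DT_NEEDED entries reported by objdump."""
    out = subprocess.run(["objdump", "-p", path],
                         capture_output=True, text=True).stdout
    return re.findall(r"^\s*NEEDED\s+(\S+)", out, flags=re.MULTILINE)

# Illustrative plugin registry: a file-type probe paired with an extractor.
PLUGINS = [(is_elf, needed_libraries)]

def scan(root, database):
    con = sqlite3.connect(database)
    con.execute("CREATE TABLE IF NOT EXISTS dependency (source TEXT, target TEXT)")  # assumed schema
    for directory, _subdirs, files in os.walk(root):
        for name in files:
            path = os.path.join(directory, name)
            for probe, extract in PLUGINS:
                if probe(path):
                    con.executemany("INSERT INTO dependency VALUES (?, ?)",
                                    [(path, lib) for lib in extract(path)])
                    break  # one plugin per file in this simplified sketch
    con.commit()
    con.close()

# Example: scan an unpacked live-image file system, e.g.
# scan("/mnt/ubuntu-23.04", "depex.sqlite")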
The research questions we primarily focus on revolve around the emergent inter-package OS-wide architecture implicitly forming as a result of combining multiple independent software packages as well as the related architectural changes observed throughout longer time periods. In addition, we investigate the complexity perception from the perspectives of individual software package developers and whole system maintainers. * RQ1. How do binary-to-library dependencies manifest in the Ubuntu OS in terms of a system-wide dependency graph? * RQ2. What is the difference between individual library complexity directly exposed to developers vs. overall internal system complexity that emerges as a result of combining multiple subsystems together (direct vs. recursive dependencies)? * RQ3. How does the whole Ubuntu OS binary-to-library dependency graph evolve over a longer period? Having high popularity, a rich history, and an open-source nature, Ubuntu serves as a comprehensive data source. Despite other Linux distributions, such as Alpine, gaining popularity, we were unable to find another dataset comparable in size and quality. Specifically, older Alpine versions were unavailable for download and Debian produced fewer live images. Throughout the development of our DepEx framework, we relied on well-established existing open-source software, such as squashfs-tools[<https://github.com/plougher/squashfs-tools>], binutils[<https://www.gnu.org/software/binutils/>] and ldd[<https://man7.org/linux/man-pages/man1/ldd.1.html>]. SquashFS-related tools were used to expose compressed live Ubuntu images for analysis. Note that different versions of had to be used depending on the age of the Ubuntu image. The binutils package, particularly the GNU tool, was used to extract ELF-specific data such as imported library names. Lastly, was used to extract library search locations. Special precautions had to be taken to look up the library paths inside the mounted image rather than resolving paths within the host system that conducted the analysis. For this purpose, we relied on standard Linux functionality. Solely mounting the Ubuntu ISO files directly does not provide access to the live file system, as another layer of compression is typically present for disk space optimization purposes. Thus, we implemented a two-step unpacking process to gain visibility of the inner live file system. Interestingly, extracting the images generated over 18 years revealed how live image preparation changed over time. We noticed different compression techniques used throughout the time period analyzed that ranged from compressed loop files (cloop) to SquashFS versions 2.1-4.0. We also observed that modern SquashFS kernel modules could not transparently mount images compressed by older versions. Thus, we developed a supporting script to provide access to all of the downloaded images in a uniform manner. Using our DepEx framework, we recursively built the full library dependency graph for each identified executable using , and tools. Extracting library dependencies requires analyzing and variables, the system library cache, as well as the binary executable file path. Finally, we used an SQLite database to store the collected dependency data for all the scanned Ubuntu images. This data can be queried for further analysis and visualization. § FINDINGS AND RESULTS The dependency data extracted from a typical OS is a rich source of information on the high-level system architecture.
In contrast to the planned architecture, this layer refers to the unwritten architectural aspects that emerge as a result of combining a multitude of independently-developed software packages. Coupled with temporal updates, this data can serve as a basis for a deeper analysis of system evolution trends. For instance, long-term trends such as libraries gaining or losing popularity or executable complexity inflation may be detected. Predicting potential OS library or executable removal may help developers adjust their development plans. In addition, determining and removing unused libraries could be useful in optimizing disk space usage and reducing the attack surface. Throughout the data collection conducted, we focused on three key aspects. Firstly, we investigated the OS-level dependency graph as a whole (RQ1). Secondly, we examined various aspects of complexity in binary dependencies determined through coupling analysis (RQ2). Lastly, we analyzed evolutionary trends in the OS dependency graph (RQ3). §.§ OS-wide Dependency Graph Analyzing the resulting SQLite database, which covers 84 Ubuntu images, revealed the following number of binaries, libraries and dependencies per image. We found that from Ubuntu 5.04 to 23.04 the number of binary executables ranged from 1519 to 2753 and the number of libraries ranged from 1683 to 3673. In terms of dependencies detected, the numbers ranged from 18165 to 37641 in the images scanned. A total of 408364 binary and library files were processed to extract the dependencies, which returned almost 2 million dependencies. The total SQLite database size generated is over 83MB of raw dependency data. We noticed that highly popular libraries such as () make the graphs unreadable. Thus, we implemented filtering out libraries from the top of the popularity-sorted list of all the involved libraries. We observe that hiding the top 10-15 libraries increases the readability of the whole system graph. Notably, loosely coupled subsystems, such as the networking subsystem, become apparent. The libraries presented alongside the diagram also provide insight into the relative popularity of individual libraries within a system. We have observed that the number of libraries imported but not present in the system varied from 20 (v5.04) to 8 (v23.04) with the highest number being 92 (v21.10b). As a consequence, the number of other libraries directly impacted by the missing dependencies varied from 4 (v17.10 and v17.10.1) to 27 (v13.04 and v9.04). Similarly, we see that the number of unused libraries (i.e., not imported by any other library or executable) ranged from 1301 (v5.04) to 1666 (v23.04). These numbers constitute a significant proportion of the total number of libraries included (around 77% and 62% respectively). Potential explanations for such a high number of unused libraries could be a) plugin-based applications that do not import libraries directly, b) "forgotten" legacy libraries, and c) libraries shipped "just in case" for use by applications commonly installed at a later stage. §.§ Dependencies Coupling Aspects Software dependencies represent the reliance of a given piece of code on external code. In practice, software developers only deal with a subset of the code required for an application to run. A graphics-oriented library may expose a simpler set of functions to developers, while relying on a multitude of other complex hardware-specific libraries to implement the advertised functionality.
Thus, a complex and large code base is made to look simple from the developer's perspective. This perception difference opens the possibility of measuring code coupling in direct and recursive ways. The direct coupling of an application reflects how many specific libraries a developer deals with explicitly. In contrast, recursive coupling takes all the underlying dependencies into consideration as well. In addition, there is an inherent asymmetry in dependency tracking. Forward tracking from a given binary to all the required libraries is trivial, as this information is contained within the binary. Reverse tracking from a given library to determine all the binaries and libraries that require the specified library is complicated, as this information is not stored explicitly. Reverse tracking essentially reflects the popularity of a given library and requires scanning the whole file system to be calculated. Thus, we developed functionality to measure the (i) direct coupling, (ii) total (recursive) coupling, and (iii) library popularity. Figures <ref> and <ref> illustrate the changes in the average and maximum number of dependencies respectively. As can be seen from Figure <ref>, whereas the average total number of dependencies largely stays the same, developer-facing complexity tends to decrease over time. This indicates that developers tend to re-arrange code within libraries to minimize the coupling they face directly. The large spike in Figure <ref> is caused by the introduction of GNOME Shell in Ubuntu 17.10. We can therefore conclude that while maintaining roughly the same external coupling, GNOME Shell has a complicated internal structure. Particularly, we found that the binary has the largest number of dependencies. This is explained by the fact that the configuration tool needs to interact with most of the GNOME Shell subsystems. A complementary aspect of dependency coupling is popularity. We define library popularity through the number of other libraries or executables that depend on it. In other words, damaging or removing more popular libraries would impact a larger number of executables in a system. In terms of popularity, the top 10 most used libraries (i.e., imported from other libraries and executables) in Ubuntu are: . The numbers alongside the libraries refer to the number of uses (i.e., library importing) averaged across all Ubuntu versions the library was present in. We notice that 7 out of the top 10 directly-coupled libraries relate to various GNOME subsystems while the other 3 relate to the Evolution mail client. Interestingly, the most complex executable with 100 direct dependencies was only present in two Ubuntu versions. This likely indicates that such high coupling was not tolerated, leading to the application's removal. Lastly, analyzing total coupling by taking recursive dependencies into account, we found the top 10 complex libraries and binaries: (154), (156), (273), (155),  (154),  (155),  (158), (169),  (158),  (164). §.§ Dependency Graphs Evolutionary Trends Running a large-scale analysis on a set of Linux distributions developed and released over 18 years revealed a number of shifts occurring in the domain. In constant efforts to attract users, Ubuntu is known for conducting experiments, such as introducing new large software packages as a replacement for existing ones. For instance, the significant dip in the number of dependencies in Figure <ref> is explained by the replacement of GNOME 2 with Unity.
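Before turning to the longer-term trends discussed next, the following sketch shows one way the three measures used above (direct coupling, total recursive coupling, and library popularity) could be computed from a table of (source, target) dependency edges such as the one produced by the extraction step. The schema, function names, and the assumption that targets are already resolved to file paths are our own illustrative simplifications.

import sqlite3
from collections import defaultdict

def load_edges(database):
    """Read (source, target) dependency edges from the assumed schema. For the
    recursive measure we assume targets were resolved to file paths (e.g. via
    ldd) so that their own outgoing edges can be followed."""
    con = sqlite3.connect(database)
    edges = defaultdict(set)
    for source, target in con.execute("SELECT source, target FROM dependency"):
        edges[source].add(target)
    con.close()
    return edges

def direct_coupling(edges, node):
    """Direct coupling: how many libraries a file imports explicitly."""
    return len(edges.get(node, ()))

def total_coupling(edges, node):
    """Total (recursive) coupling: size of the transitive dependency closure."""
    seen, stack = set(), [node]
    while stack:
        for dep in edges.get(stack.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return len(seen)

def popularity(edges):
    """Popularity: how many other files import each library (reverse edges)."""
    counts = defaultdict(int)
    for deps in edges.values():
        for dep in deps:
            counts[dep] += 1
    return counts

# Example usage (paths are illustrative):
# edges = load_edges("depex.sqlite")
# print(direct_coupling(edges, "/usr/bin/some-binary"),
#       total_coupling(edges, "/usr/bin/some-binary"))
# print(sorted(popularity(edges).items(), key=lambda kv: -kv[1])[:10])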
On a longer scale, it is also visible that, despite limited local successes of such experiments, the overall trend indicates a slow growth of the number of files and dependencies. Interestingly, we also observed that a significant number of files that are not explicitly required are present in the system (Figure <ref>). In other words, up to 37% of libraries physically located in the file systems were not mentioned in the import tables of any of the binaries or libraries. This likely indicates that such libraries are primarily used as plugins and could be loaded at run-time through dynamic directory scanning if necessary. Note that these conditional dependencies may be impossible to detect in advance due to the unpredictable nature of external factors. For instance, a user-controlled application configuration can determine whether a given plugin library should be loaded at run time. The overall trend also hints that such a dynamic plugin-based approach gains popularity as the proportion of libraries not imported keeps steadily growing. Another observation discovered throughout our analysis relates to the longevity of the libraries and binaries in Ubuntu. Namely, while complex binaries are periodically removed in search of better alternatives, highly popular libraries tend to stay around. Once a popular library is introduced in a particular Ubuntu version, it is unlikely to be removed as such removal would impact all libraries and executables that rely on the library's existence. Even internal code reorganizations affecting highly popular libraries require extra care to maintain compatibility[https://developers.redhat.com/articles/2021/12/17/why-glibc-234-removed-libpthread]. § DISCUSSION §.§ Threats to Validity While we primarily focused on dependency-centric package management in Linux OS, other factors may explain some of the observations. Despite high popularity, packages might get removed from the system due to licensing, compatibility, security, or maintainability issues. Dependency analysis should, therefore, be coupled with change log analysis to verify and confirm the findings. To enhance the external validity of our dependency analysis, we selected a highly popular Linux distribution. By including all of the available versions we expect our approach to be generalizable and applicable to a broader range of OSs. Widening the input data set on the time axis enabled the discovery of uncommon cases and long-term trends. Being well-maintained, Ubuntu served as a high-quality dataset. Legacy Ubuntu versions and their corresponding change logs were still available for download[Ubuntu wiki: Releases - https://wiki.ubuntu.com/Releases]. In contrast, Alpine (another popular Linux distribution) archives did not go far back in time. Moreover, the Alpine archives contained broken links for older versions, preventing image downloading. Similarly, while considering Debian systems, we discovered different and incompatible system image layouts which would complicate the analysis. Primary threats to external validity are abrupt changes causing significant paradigm shifts, lower granularities skewing the results, and implicit dependencies. Abrupt changes may be introduced throughout evolution. Such changes introduce incompatibilities, forcing us to amend the scanning process accordingly. Notable examples we observed include compression algorithm changes, folder hierarchy alterations, and transition from to .
We noticed a different layout of binary files in the file system that required consideration due to the changes introduced in Ubuntu 19.04. Specifically, and directories were converted to symbolic links to and correspondingly[<https://lists.ubuntu.com/archives/ubuntu-devel-announce/2018-November/001253.html>]. Depending on whether 19.04 is being installed from scratch or on top of the previously installed version, the number of binaries may appear to suddenly double in version 19.04. We alleviated this problem by resolving symbolic links. In addition to library dependencies stored in executable binary file import tables, other types of coupling occur in practice. For instance, network communication, special files like Unix domain sockets, Inter-Process Communication (IPC) calls, message-oriented buses, and pipes provide various means of code interactions. Discovering such code coupling instances may not be possible in practice (e.g., new code fragments might be downloaded over a network). Taking into account these code coupling types may significantly skew our findings. §.§ Challenges and Limitations The two primary technical challenges we encountered throughout our data collection and analysis are the large data set sizes and performance issues related to extracting dependencies at lower granularities. As the distributed Ubuntu images are growing in size, so do the number of executable files and their individual sizes. This steady growth is observed over all Ubuntu versions analyzed. For example, within the 18 years analyzed, the live Ubuntu image size grew from 600MB (version 5.04) to 3.7GB (version 23.04). Likewise, the number of executable files experienced a 70% increase (from 1605 in 5.04 to 2753 in 23.04). Through practical experiments, we established that restricting the dependency granularity is crucial to achieving acceptable processing speed as lower granularity dependency extraction incurs large overheads. Disassembling executable binaries to identify individual third-party library function calls slows the dependency extraction and incurs significant memory overheads. For instance, we have observed cases of disassembly and analysis of a single executable taking over 40 minutes on an average laptop-class CPU. Thus, while technically possible and potentially interesting to gain further insights, lower-level granularity analysis is out of reach for the real-time applications we initially aimed for. At this stage, we restricted the analysis to the file level only. § RELATED WORK The prior work primarily revolves around two aspects: (i) diverse conceptual complexity metric definitions and (ii) dependency extraction and analysis. Various types of software complexity metrics have been widely studied in the literature <cit.>. Some studies have focused on metrics that are useful in source code analysis but are not easily applicable in binary code analysis <cit.> <cit.> <cit.>. Others have discussed the deficiency of methods to obtain global dependency knowledge and the difficulty in visualizing the resulting graphs <cit.>. The use of software complexity metrics to detect vulnerabilities has also been investigated, with some studies proposing dependency-oriented and execution-time complexities <cit.>. Dependency extraction aspects and challenges have also been explored, with some studies focusing on specific languages or ecosystems <cit.> <cit.>.
Package management and dependency validation have been popular research topics, with a set of studies proposing methods to address issues arising from package evolution (e.g., splitting into multiple different packages) <cit.> <cit.> <cit.>. User questions related to package management, such as calculating the consequences of removing or modifying a package, have also been explored <cit.> <cit.>. Efficient package management tools and query languages have been proposed, including tools for efficient package management and relations lookup <cit.>. However, similar to software complexity metrics research efforts, multiple studies have focused only on source-level rather than binary dependencies <cit.> <cit.>. In efforts to resolve binary compatibility issues, some works have investigated relying on version ranges rather than minimum version requirements <cit.>. Unfortunately, the major downside of the proposed approach is the requirement that debug symbols be available, which is rare in commercial software. An interesting use of dependency extraction has been proposed for Windows executables for malware detection <cit.>. Taking the notion of the extent of a dependency into account enables detecting and eliminating insignificant dependencies <cit.>. Overall, it should be noted that dependency-related studies primarily focus on source code dependency analysis and package-level relations <cit.> <cit.> and do not typically examine software package evolution over time. We, therefore, conclude that more precise file-based dependency extraction is an under-researched area that could provide better structural visibility for large-scale systems comprising multiple independently developed packages. We also see that understanding software evolution is essential for maintaining software, ensuring compatibility, and improving security. Having this understanding aids developers in making informed decisions about updates and maintenance, ensures software remains compatible with other systems, and reduces the risk of security issues. Additionally, understanding software evolution can lead to innovations and improvements in software design and development. § CONCLUSION AND FUTURE WORK In this study, we introduce automated extraction of dependency graphs for a whole system at the level of executable files (as opposed to manually maintained traditional package-level dependency graphs). The resulting system-wide dependency graph provides a high-level view of the OS architecture emerging from interactions between the different subsystems and user packages. In addition, this study enabled the discovery of general high-level trends and common patterns in Ubuntu Linux architecture evolution over time. We also differentiate between developer-facing complexity (defined through direct dependency coupling) and overall system complexity (defined through recursive dependency coupling). The motivation behind such a separation is that developers typically deal with third-party libraries without having full visibility of the back-end side of the libraries. In other words, a developer may include one library, while the library itself can have a complicated graph of dependencies not directly visible to the developer. These invisible dependencies may cause software bloating and increase the attack surface.
We believe the findings of this study will provide useful insights for software developers and OS maintainers in terms of gaining a holistic quantitative understanding of inter-package architecture management that would be useful, for example, in optimizing disk space and improving system maintainability. We have identified two main directions for future research: expanding the dependency extraction approach to support a wider set of platforms, and extracting more types of dependencies. For future research, we aim to perform Windows-based analysis and implement support for other levels of granularity, such as individual function dependencies. Also, in contrast to the convenient, holistic file system structure used in live editions, non-live distribution variants are composed of multiple compressed packages, complicating the dependency extraction and analysis. Implementing analysis for such non-live distributions could be a potential future research line. As opposed to fixed library imports, code fragments interacting through various communication channels are loosely coupled. Such non-obvious dependencies are not trivial to detect. For instance, changing code on one side of a UNIX pipe may negatively affect the results of the next program in the pipeline. Furthermore, such dependencies may not be predefined in advance and are only required intermittently while being completely unnoticeable most of the time. We believe that comprehensive and accurate detection of such concealed dependencies would greatly enhance the understanding and visibility of the overall system architecture, its evolution, and its run-time operation, and enable early detection of potential compatibility breaks caused by code modifications. § ACKNOWLEDGMENT The work has been partially supported by the Cyber Security Research Centre Limited whose activities are partially funded by the Australian Government’s Cooperative Research Centres Programme. § DATA AVAILABILITY As the current project is funded by industry partners, we are unable to publish the source code at this stage. However, aiming to increase transparency and reproducibility in research, we have made the obtained dataset available for public access <cit.>. Researchers and interested parties can access the dataset and utilize it to replicate or build upon our findings.
[SoftwareMetrics] T. Honglei, S. Wei and Z. Yanan, "The Research on Software Metrics and Software Complexity Metrics," 2009 International Forum on Computer Science-Technology and Applications, Chongqing, China, 2009, pp. 131-136, doi: 10.1109/IFCSTA.2009.39.
[SoftwareMetricsSurvey] S. Yu and S. Zhou, "A survey on metric of software complexity," 2010 2nd IEEE International Conference on Information Management and Engineering, Chengdu, China, 2010, pp. 352-356, doi: 10.1109/ICIME.2010.5477581.
[InitialComplexity] Yonghee Shin and Laurie Williams, "An initial study on the use of execution complexity metrics as indicators of software vulnerabilities," in Proceedings of the 7th International Workshop on Software Engineering for Secure Systems (SESS '11), Association for Computing Machinery, New York, NY, USA, 2011, pp. 1-7, https://doi.org/10.1145/1988630.1988632.
[PackageConflict] Artho, C., Di Cosmo, R., Suzaki, K., and Zacchiroli, S., "Sources of inter-package conflicts in Debian," arXiv preprint arXiv:1110.1354, 2011.
[DebianLinux] de Sousa, O. Felicio, M. A. de Menezes, and Thadeu JP Penna, "Analysis of the package dependency on Debian GNU/Linux," Journal of Computational Interdisciplinary Sciences 1.2 (2009): 127-133.
[LinuxPackage_IEEE] Lan, Yu-Qing, et al., "Extraction methods on Linux package dependency relations," 2009 International Conference on Information Engineering and Computer Science, IEEE, 2009.
[LinuxPackageVis] Mithun, X. L. E., and van de Wetering, H. M. M., "Linux Package Dependency Visualization," Master's Thesis, Department of Mathematics and Computer Science, 2009, pp. 1-64.
[LinuxQuality] Boender, J., Di Cosmo, R., Vouillon, J., Durak, B., and Mancinelli, F., "Improving the quality of GNU/Linux distributions," 2008 32nd Annual IEEE International Computer Software and Applications Conference, IEEE, 2008, pp. 1240-1246.
[RecoverDependency] Lungu, M., Robbes, R., and Lanza, M., "Recovering inter-project dependencies in software ecosystems," in Proceedings of the IEEE/ACM International Conference on Automated Software Engineering, 2010, pp. 309-312.
[PackageDependency_2015] Jing Wang, Qingbo Wu, Yusong Tan, Jing Xu and Xiaoli Sun, "A graph method of package dependency analysis on Linux Operating system," 2015 4th International Conference on Computer Science and Network Technology (ICCSNT), Harbin, 2015, pp. 412-415, doi: 10.1109/ICCSNT.2015.7490780.
[DepOwl] Jia, Z., Li, S., Yu, T., Zeng, C., Xu, E., Liu, et al., "DepOwl: Detecting Dependency Bugs to Prevent Compatibility Failures," 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), IEEE, 2021, pp. 86-98.
[unix_evolution_TSC] D. Spinellis and P. Avgeriou, "Evolution of the Unix System Architecture: An Exploratory Case Study," IEEE Transactions on Software Engineering, vol. 47, no. 6, pp. 1134-1163, 1 June 2021, doi: 10.1109/TSE.2019.2892149.
[unix_44] D. Spinellis, "A Repository with 44 Years of Unix Evolution," 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories, Florence, Italy, 2015, pp. 462-465, doi: 10.1109/MSR.2015.64.
[softwareComplexity] E. J. Weyuker, "Evaluating software complexity measures," IEEE Transactions on Software Engineering, vol. 14, no. 9, pp. 1357-1365, Sept. 1988, doi: 10.1109/32.6178.
[ComplexityCC] C. Ebert, J. Cain, G. Antoniol, S. Counsell and P. Laplante, "Cyclomatic Complexity," IEEE Software, vol. 33, no. 6, pp. 27-29, Nov.-Dec. 2016, doi: 10.1109/MS.2016.147.
[ComplexityComparison] Zhang, M., and Baddoo, N., "Performance Comparison of Software Complexity Metrics in an Open Source Project," in Abrahamsson, P., Baddoo, N., Margaria, T., Messnarz, R. (eds), Software Process Improvement, EuroSPI 2007, Lecture Notes in Computer Science, vol. 4764, Springer, Berlin, Heidelberg, 2007, https://doi.org/10.1007/978-3-540-75381-0_15.
[TopologyAnalysis] Martin P. Robillard, "Topology analysis of software dependencies," ACM Trans. Softw. Eng. Methodol. 17, 4, Article 18, August 2008, 36 pages, https://doi.org/10.1145/13487689.13487691.
[SurviveDependency] Cox, Russ, "Surviving software dependencies," Communications of the ACM 62.9 (2019): 36-43.
[StaticDependency] Jász, Judit, et al., "Static execute after/before as a replacement of traditional software dependencies," 2008 IEEE International Conference on Software Maintenance, IEEE, 2008.
[AutoDepen] Ossher, Joel, Sushil Bajracharya, and Cristina Lopes, "Automated dependency resolution for open source software," 2010 7th IEEE Working Conference on Mining Software Repositories (MSR 2010), IEEE, 2010.
[DataLink] DepEx Dataset, <https://figshare.com/s/ce3247b81fac82528495>.
[interPackage] LaBelle, Nathan, and Eugene Wallingford, "Inter-package dependency networks in open-source software," arXiv preprint cs/0411096, 2004.
[EvolutionPackageDepen] Kikas, Riivo, et al., "Structure and evolution of package dependency networks," 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR), IEEE, 2017.
[DLLHell] Dick, Stephanie, and Daniel Volmar, "DLL hell: Software dependencies, failure, and the maintenance of Microsoft Windows," IEEE Annals of the History of Computing 40.4 (2018): 28-51.
[DLLMiner] Narouei, Masoud, et al., "DLLMiner: structural mining for malware detection," Security and Communication Networks 8.18 (2015): 3311-3322.
[LinuxDis] Horváth, Árpád, "The software package dependency networks of some Linux distributions," 2012 IEEE 4th International Conference on Nonlinear Science and Complexity (NSC), IEEE, 2012.
[EmpiricalComp] Decan, Alexandre, Tom Mens, and Philippe Grosjean, "An empirical comparison of dependency network evolution in seven software packaging ecosystems," Empirical Software Engineering 24 (2019): 381-416.
[PowerLaws] Panagiotis Louridas, Diomidis Spinellis, and Vasileios Vlachos, "Power laws in software," ACM Trans. Softw. Eng. Methodol. 18, 1, Article 2, September 2008, 26 pages, https://doi.org/10.1145/1391984.1391986.
[LightWeigthDll] Xie, Xiongwei, and Weichao Wang, "Lightweight examination of dll environments in virtual machines to detect malware," Proceedings of the 4th ACM International Workshop on Security in Cloud Computing, 2016.
[ELFspec] TIS Committee, "Tool interface standard (TIS) executable and linking format (ELF) specification version 1.2," 1995.
[MetricsFaults] Alakus, T. B., Das, R., and Turkoglu, I., "An overview of quality metrics used in estimating software faults," 2019 International Artificial Intelligence and Data Processing Symposium (IDAP), IEEE, 2019, pp. 1-6.
http://arxiv.org/abs/2307.04029v1
20230708183856
On "Indifference" and Backward Induction in Games with Perfect Information
[ "Nimrod Megiddo" ]
cs.AI
[ "cs.AI" ]
http://arxiv.org/abs/2307.04621v2
20230710150816
Recipes for Jet Feedback and Spin Evolution of Black Holes with Strongly-Magnetized Super-Eddington Accretion Disks
[ "Angelo Ricarte", "Ramesh Narayan", "Brandon Curd" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
Super-Eddington Spin Evolution Ricarte, Narayan, & Curd Angelo Ricarte [email protected] 0000-0001-5287-0452]Angelo Ricarte Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA 0000-0002-1919-2730]Ramesh Narayan Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA 0000-0002-8650-0879]Brandon Curd Department of Physics & Astronomy, The University of Texas at San Antonio, One UTSA Circle, San Antonio, TX 78249, USA A spinning black hole accreting from a disk of strongly magnetized plasma via a magnetically arrested disk is known to produce an efficient electromagnetic jet powered by the black hole's spin energy. We present general relativistic radiative magnetohydrodynamic simulations of magnetically arrested systems covering a range of sub- to super-Eddington accretion rates. Using the numerical results from these simulations, we develop formulae to describe the magnetization, jet efficiency, and spin evolution of an accreting black hole as a function of its spin and accretion rate. A black hole with near-Eddington accretion experiences a mild degree of spin-down because of angular momentum loss through the jet, leading to an equilibrium spin of 0.8 rather than 1.0 at the Eddington limit. As the accretion rate increases above Eddington, the spin-down effect becomes progressively stronger, ultimately converging on previous predictions based on non-radiative simulations. In particular, spin evolution drives highly super-Eddington systems toward a black hole spin near zero. The formulae developed in this letter may be applied to galaxy and cosmological scale simulations that include black holes. If magnetically arrested disk accretion is common among supermassive black holes, the present results have broad implications for active galactic nucleus feedback and cosmological spin evolution. § INTRODUCTION Astrophysical black holes (BHs) accreting from disks of plasma are known to launch relativistic jets and outflows <cit.>. Such energy injection from supermassive BHs (SMBHs) at the centers of galaxies, a process referred to as active galactic nucleus (AGN) feedback, is believed to be essential for stopping runaway gas cooling and star formation in massive galaxies and dark matter halos <cit.>. In this paradigm, accretion and feedback processes are critical for a complete picture of SMBH growth and galaxy co-evolution. However, the details remain poorly understood. For magnetized accretion disks, an electromagnetic analogue of the <cit.> process known as the <cit.> (BZ) mechanism provides the most widely accepted model for jet launching. The power of a jet launched by the BZ mechanism scales approximately proportional to both the square of the BH spin and the square of the magnetic flux threading the horizon. In systems with high enough spin and with maximal magnetic field strength, corresponding to a so-called magnetically arrested disk (MAD) <cit.>, more jet power can be launched than the entire rest mass energy of the material flowing into the BH <cit.>. The extra energy is supplied by the spin kinetic energy of the BH, which thereby may cause the BH to spin down with time. 
In this way, jets that travel through dark matter halos for hundreds of kiloparsecs are ultimately linked to the evolution of BH spin and the transport of magnetic fields on event horizon scales. Since the BZ mechanism powers a jet by extracting BH spin energy, if the process continues long enough a BH could continuously spin down and equilibrate near a spin value a_* ≈ 0. This has been explicitly demonstrated via general relativistic magnetohydrodynamic (GRMHD) simulations of radiatively inefficient, geometrically thick, MAD models <cit.>. Several recent publications have begun to study the implications of this spin-down effect for BH populations over cosmic time. The systems simulated so far largely belong to the regime of advection-dominated accretion <cit.>, or hot accretion <cit.>, which corresponds to highly sub-Eddington accretion. Spin-down is relatively slow for such low Eddington-ratio systems simply because the mass accretion rate is very small; nevertheless, continuous jet feedback from such BHs is implicated for maintaining low star formation for Gyrs in some galaxies <cit.>, which can lead to cosmologically significant BH spin evolution <cit.>. Super-Eddington accretion disks are geometrically thick and advection-dominated, just like low-Eddington ratio hot accretion flows, and can also reach the MAD state <cit.>. Such systems can produce extremely powerful jets <cit.>, and because of the very large accretion rate their BHs could spin-down very rapidly. <cit.> developed a physical semi-analytic model for this spin-down phenomenon. Using this model, <cit.> predicted rapidly decreasing collapsar BH spins to a_* ≲ 0.2 near birth. Self-consistent BH spin evolution is now being implemented in some galaxy and cosmological-scale simulations, which may then be used to model radiative efficiency and jet power <cit.>. Although galaxy-scale simulations cannot possibly resolve accretion disk scales, such an approach still represents a substantial improvement over most contemporary work to link SMBH spin evolution to the angular momentum of resolved gas on scales of parsecs. <cit.> and <cit.> implement spin-down during periods of thick disk accretion, employing fitting functions for the magnetic flux as a function of spin from GRMHD simulations. Again assuming the same results that have been demonstrated for very low Eddington ratio disks also hold for super-Eddington disks, <cit.> consider super-Eddington growth in high-redshift galaxies. While spin-down is noticeable in this simulation, it is counteracted by periods of thin disk accretion. All such calculations require some a priori knowledge or assumptions about the magnetic field strength. For magnetized geometrically thick disks in the low-Eddington rate limit, the MAD model offers one well-studied solution. In contrast to the weak-field “Standard and Normal Evolution” (SANE) model <cit.>, a MAD system is characterized by such strong magnetic fields that magnetic pressure and tension is comparable to the gas pressure near the horizon <cit.>. MAD models are characterized by a dimensionless magnetic flux parameter ϕ (defined in <ref>) saturating at a spin-dependent maximum value <cit.>, as well as “flux eruption events” that occur when the BH expels magnetic flux <cit.>. The saturated fields that characterize the MAD state lead to highly efficient jets powered by the BZ mechanism. 
Spatially resolved and polarimetric observations of the nearby low-luminosity AGN, M87* and Sgr A*, currently favor MAD models over their SANE counterparts <cit.>, suggesting that the saturated values of ϕ characteristic of MAD models are easily achieved in low Eddington-ratio geometrically thick hot accretion disks. However, it remains to be confirmed that the same saturation values found for hot accretion flows at low Eddington ratios also hold for super-Eddington accretion flows where radiation plays an important role. It is also unknown whether the BZ mechanism operates efficiently in such systems and how efficiently BH spin-down proceeds. We explore these questions here. In this letter, we introduce and analyze a suite of super-Eddington general relativistic radiative magnetohydrodynamic (GRRMHD) simulations in the MAD regime to explicitly calculate the magnetization ϕ, jet power P_jet, and spinup parameter s (defined in equation <ref>), as a function of the dimensionless BH spin parameter a_* and the Eddington ratio f_Edd (defined in <ref>) of the accretion flow. As we shall show, highly super-Eddington accretion disks (f_Edd≫ 1) behave similarly to their very low Eddington-ratio (f_Edd≪ 1) counterparts. However, we find reduced magnetization and spin-down for Eddington ratios f_Edd≲ 10. Based on this behavior, we devise fitting functions for jet power and spin evolution that can be adapted into cosmological and galaxy-scale simulations. § GRRMHD SIMULATIONS Radiation plays a critical role in the dynamics of BH accretion disks for Eddington-ratios f_ Edd≳ 0.01. In these systems, radiative cooling acts to thin the disk at lower Eddington ratios, while radiative pressure puffs up the disk vertically as the mass accretion rate approaches or exceeds Eddington <cit.>. In super-Eddington systems, winds and jets driven purely by radiation can also occur <cit.> The numerical treatment of radiation in BH accretion problems is quite difficult as the algorithm must treat both optically thin and thick regions in a curved spacetime. <cit.> pioneered global, non-relativistic, radiation hydrodynamics (RHD) simulations of super-Eddington accretion disks using flux-limited diffusion. Following this work, radiation was first included in the fully general relativistic radiation magnetohydrodynamics (GRRMHD) code, koral, by <cit.> using the M1 closure scheme and a semi-implicit method to handle the radiation terms. Since then, the M1 closure scheme has been applied in other GRRMHD codes <cit.> as well as a GPU accelerated GRRMHD code <cit.>. Alternative methods of treating radiation in GRRMHD include directly solving the radiative transfer equations to obtain the Eddington tensor <cit.>, Monte Carlo methods <cit.>, or using a discretized radiation tensor <cit.>. The M1 closure scheme allows limited treatment of anisotropic radiation fields. It is superior to the Eddington approximation in optically thin regions, and is well suited for global GRRMHD simulations of super-Eddington disks. However, for complicated radiation fields, it cannot match methods based on the full Eddington tensor. <cit.> explored the role of BH spin in super-Eddington accretion by running a suite of 2D GRRMHD simulations for different spin values. They considered the SANE regime of accretion for which 2D simulations are sufficient. The MAD accretion regime, however, requires 3D simulations and this is the focus of our work. 
We present a suite of 38 3D numerical simulations of near-Eddington to super-Eddington MAD accretion flows carried out with the GRRMHD code, koral <cit.>. We include 2 BH masses, M = 10 and 10^4 M_⊙, 6 BH spin values, a_*= -0.9, -0.68, 0, 0.68, 0.9, and 0.97 (where a minus sign denotes retrograde accretion), and a range of Eddington ratios, 0.4 ≲ f_Edd≲ 40. Since prolonged super-Eddington accretion is often invoked for the growth of BH seeds in the early universe, as we will later explore in <ref>, these two masses are loosely motivated by exploring both “light” and “heavy” seeding scenarios <cit.>. We define f_Edd as follows, f_Edd = Ṁ/Ṁ_Edd, where Ṁ is the mass accretion rate through the BH horizon (<ref>) and Ṁ_Edd is the Eddington mass accretion rate corresponding to the radiative efficiency of a thin disk (see <ref> and <ref>). Thin disks below and near the Eddington limit are notoriously difficult to simulate, due to difficulties resolving the disk scale height. However, the additional magnetic pressure of the MAD state helps to inflate even moderately sub-Eddington disks (see <ref>), making this problem computationally tractable <cit.>. Using a mesh-based, finite-difference method in a stationary Kerr space-time, koral solves the conservation equations of GRMHD, with the addition of radiative heating, cooling, and plasma coupling. Modeled radiative processes include synchrotron radiation, opacities from electron scattering, free-free and bound-free emission/absorption from the <cit.> model, and Compton scattering. While ideal GRMHD simulations without radiation are rescalable to different masses and accretion rates, the inclusion of radiative processes sets absolute physical scales and necessitates individual simulations for each combination of M, a_*, and f_Edd. Each simulation is initialized as a torus of gas in hydrostatic equilibrium threaded by a large-scale poloidal magnetic field, either perfectly aligned or anti-aligned with the BH spin axis. To limit computational expense, but still allow non-axisymmetric structures that commonly arise in MAD disks, we simulate a periodic π/2 wedge in azimuth. From the torus initial conditions, the magnetorotational instability naturally develops to allow the plasma to lose angular momentum and accrete onto the BH, advecting along with it magnetic field, which saturates at the MAD state. One example is shown in <ref>, where in the upper panels we visualize the density and magnetic field lines of the M=10^4 M_⊙, a_*=0.9, f_Edd=9.3 model in the plane and in a perpendicular slice respectively. The BH has accumulated a significant poloidal magnetic field, and turbulent eddies are evident in the disk. A flux eruption event characteristic of the MAD state, the low-density bubble near the horizon, is visible during this snapshot. Throughout this work, we use gravitational units to describe physical parameters. For distance we use the gravitational radius r_g ≡ GM/c^2 and for time we use the gravitational time t_g ≡ GM/c^3. We set G = c = 1, so the above relations would be equivalent to r_g = t_g = M. We restore G and c in cases where it helps to keep track of units. Each of the 38 models was run for a total time of 30000 t_g. Summary statistics are given in <ref> and correspond to averages over the final 5000 t_g of the run when we expect each simulation to be most nearly in steady state.
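To connect these code units to physical scales, the short sketch below evaluates r_g, t_g, and an indicative Eddington accretion rate for the two simulated masses. The Eddington rate is written here in terms of a generic radiative efficiency η; in the paper Ṁ_Edd is tied to the spin-dependent thin-disk efficiency, which we do not reproduce, so the numbers are illustrative only.

import math

G, C = 6.674e-11, 2.998e8                      # SI units
M_SUN, M_P, SIGMA_T = 1.989e30, 1.673e-27, 6.652e-29

def gravitational_scales(mass_msun):
    """Gravitational radius r_g = GM/c^2 and gravitational time t_g = GM/c^3."""
    m = mass_msun * M_SUN
    return G * m / C**2, G * m / C**3

def eddington_mdot(mass_msun, efficiency=0.1):
    """Mdot_Edd = L_Edd / (efficiency * c^2), with L_Edd = 4*pi*G*M*m_p*c/sigma_T.
    The efficiency is a stand-in for the spin-dependent thin-disk value used in the paper."""
    m = mass_msun * M_SUN
    l_edd = 4.0 * math.pi * G * m * M_P * C / SIGMA_T
    return l_edd / (efficiency * C**2)

for mass in (10.0, 1.0e4):                     # the two simulated BH masses
    r_g, t_g = gravitational_scales(mass)
    print(f"M = {mass:g} Msun: r_g = {r_g:.3e} m, t_g = {t_g:.3e} s, "
          f"30000 t_g = {3e4 * t_g:.3e} s, Mdot_Edd ~ {eddington_mdot(mass):.3e} kg/s")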
§ RESULTS §.§ Magnetization The dimensionless magnetization parameter ϕ(t) at time t is defined by <cit.>, ϕ(t) = √(4π)/2√(Ṁ(t))∫_ϑ∫_φ|B^r|_r=r_ H √(-g) dϑ dφ, where B^r is the radial component of the magnetic field, g is the metric determinant, Ṁ(t) is the BH accretion rate, and the integral is evaluated at the BH horizon. MAD systems are characterized by a value of ϕ that has saturated at a spin-dependent value of ∼ 30-50 <cit.>, as is the case for the example plotted in Figure <ref>. The value of ϕ tends to decrease during a flux eruption event; note that our example snapshot visualized in <ref> coincides with a local minimum in ϕ. Although both Ṁ and ϕ are time variable, we assign a single value to each simulation by averaging each quantity over the time period t=25000t_g - 30000t_g. These are the values listed in <ref>. In the left panel of <ref>, we show the values of ϕ obtained from our 38 simulations, both as a function of the Eddington ratio f_Edd and the BH spin a_*. Different spins are encoded in different colors, and different masses are encoded by symbol size. At large Eddington ratios, the simulations approach spin-dependent values similar to those found in pure GRMHD simulations of MADs <cit.>. However, ϕ decreases as f_Edd decreases. Interestingly, simulations with f_Edd=1 remain substantially magnetized, with ϕ values typically about a third of the limiting value for f_ Edd≫1. As we explore in <ref>, this trend can be explained by increased pressure scale height as Eddington ratio increases, allowing the disk to confine stronger magnetic fields. We model the behavior shown in the simulation data by fitting the following function: ϕ(a_*,f_Edd) = ϕ_MAD(a_*)(f_Edd/f_c)^α/1+(f_Edd/f_c)^α, where f_c is a critical Eddington ratio determining the mid-point of the transition, and α is a free parameter determining the rapidity of the evolution around f_c. The function ϕ_MAD(a_*) is the saturated value of ϕ found in non-radiative MAD simulations. We use the approximation given in <cit.>, ϕ_MAD(a_*) = 52.6 + 34a_* - 14.9a^2_* - 20.2a^3_*. By construction, in <ref>, ϕ→ 0 as f_Edd→ 0 and ϕ→ϕ_MAD(a_*) as f_Edd→∞. Via least-squares fitting, we arrive at α=1.29 and f_c=1.88. The spin-dependent ϕ(a_*,f_Edd) curves are plotted in the background of <ref>, and describe the main trends fairly well. We intentionally transition ϕ→ 0 as f_Edd→ 0 to connect to the thin disk solution, but we caution that the shape and rapidity of this transition may be sensitive to our poor sampling of the f_Edd≲ 1 regime. We note that the GRRMHD simulations of both <cit.> and <cit.> produced ϕ∼ 30 for f_Edd∼ 0.3, which our fitting function would underestimate. §.§ Jet Efficiency The electromagnetic jet efficiency η_EM = P_jet / Ṁc^2 can be calculated analytically given a_* and ϕ. For small to moderate values of spin, η_ EM∝ a_*^2ϕ^2 <cit.>, but for spin values up to and including a_*=1, the following expression including higher order correction factors is more accurate <cit.>: η_EM = κ/4πϕ^2Ω^2_ H[1 + 1.38Ω^2_ H - 9.2Ω^4_ H], where Ω_ H≡|a_*|/2r_ H=|a_*|/2(1+√(1-a_*^2)) is the angular velocity of the horizon and κ is a constant dependent on the initial field geometry, for which we adopt κ = 0.05. In the right panel of <ref>, we plot the MHD energy outflow efficiency η_MHD as a function of magnetization, with spin once again encoded in color and mass encoded in symbol size. Note that unlike η_EM predicted by <ref> this quantity also includes the hydrodynamic energy flux. 
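The magnetization and jet-efficiency fits above are simple enough to transcribe directly. The sketch below is our own wrapper around the quoted expressions, with α = 1.29, f_c = 1.88 and κ = 0.05; the function names and the demonstration spin and Eddington ratios are ours.

```python
import numpy as np

def phi_mad(a):
    """Saturated MAD flux fit: 52.6 + 34 a - 14.9 a^2 - 20.2 a^3."""
    return 52.6 + 34.0 * a - 14.9 * a**2 - 20.2 * a**3

def phi(a, f_edd, alpha=1.29, f_c=1.88):
    """Eddington-ratio-dependent magnetization fit."""
    x = (f_edd / f_c) ** alpha
    return phi_mad(a) * x / (1.0 + x)

def omega_h(a):
    """Dimensionless angular velocity of the horizon."""
    return np.abs(a) / (2.0 * (1.0 + np.sqrt(1.0 - a**2)))

def eta_em(a, f_edd, kappa=0.05):
    """BZ jet efficiency with the higher-order correction factors."""
    w = omega_h(a)
    return kappa / (4.0 * np.pi) * phi(a, f_edd)**2 * w**2 * (1.0 + 1.38 * w**2 - 9.2 * w**4)

if __name__ == "__main__":
    # At f_Edd = 1 the magnetization is roughly a third of the f_Edd >> 1 limit
    for f in (1.0, 10.0, 100.0):
        print(f"a_*=0.9, f_Edd={f:6.1f}:  phi = {phi(0.9, f):5.1f},  eta_EM = {eta_em(0.9, f):.3f}")
```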
The colored curves correspond to the fitting function <ref> for each spin sampled by our simulation suite. The data points are from the simulations, where we have computed the mass and energy fluxes at a radius of 5 r_g since numerical floors cause inaccuracies closer to the horizon <cit.>. Radiative flux is neglected (which is again affected by floors, particularly in the jet region), but this introduces only a small error since the radiation contribution near the BH tends to be small. Despite the wide range of mass, spin and accretion rate considered in the right panel of <ref>, we find that the fitting function <ref> performs remarkably well, implying that the BZ mechanism dominates the jet physics in MAD super-Eddington accretion flows. Note that at a_*=0, the BZ prediction is identically 0 because the BH has no spin energy. However, the simulations still give η_MHD>0. In these models, the outflowing energy is from the accretion disk, presumably in a hydrodynamic wind. As a point of reference, we plot the radiative efficiencies of thin disks with a_* ∈{0,0.68,0.9,0.97 } as colored horizontal lines. The MHD outflow from the a_*=0 simulation is similar in energetic output to an equivalent thin disk's radiative output. Meanwhile, the radiative efficiency of a thin disk around a maximally spinning black hole can be easily be exceeded with enough spin and magnetic flux. §.§ Spin Evolution Since the BZ mechanism extracts spin energy from the BH, this can result in astrophysically significant spin evolution of an accreting BH, which we study here. We describe the evolution in terms of a dimensionless spin-up parameter <cit.>, s = da_*/dtM/Ṁ = l - 2 a_* e, where l is the inward specific angular momentum flux and e is the inward specific energy flux, each of which we measure at a radius of 5 r_g. Spinup as a function of a_* computed from our GRRMHD simulations is shown in the upper panel of <ref>, where the color encodes different Eddington ratios and the symbol size encodes different masses. The thin disk solution, which always pushes the BH towards maximal prograde spin (a_*→ 1), is shown as a dotted line <cit.>. A fitting function which we presented in previous work for MAD GRMHD (f_ Edd≪ 1) models <cit.> is shown as a dashed line and is given by s_MAD(a_*) = 0.45 - 12.53 a_* -7.80 a_*^2 + 9.44 a_*^3 +5.71 a_*^4 - 4.03 a_*^5. The simulated GRRMHD models generally transition from the thin disk solution to the MAD GRMHD solution as the Eddington ratio increases (blue to red colors in <ref>). This is not unexpected, since highly super-Eddington disks are geometrically very thick and are highly advection-dominated <cit.> and therefore closely resemble the low-f_ Edd hot accretion flows studied in <cit.>. Retrograde models do not follow this trend, however, in fact spinning up more rapidly than the thin disk solution. These models overshoot the thin disk curve because both the BZ mechanism and accretion of oppositely rotating material torque the BH towards a_*=0.[As Eddington ratio increases, the disk dynamics evolve from the thin disk solution and the hydrodynamic torques become weaker (see <ref>). At the same time, the magnetization increases, so the electromagnetic torque becomes stronger. Whether or not a retrograde disk spins up faster or slower than a thin disk depends on the balance between these effects.] 
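As a quick numerical illustration, the root of the s_MAD fit, which sets the equilibrium spin of the non-radiative MAD solution quoted later in the text (a_* ≈ 0.035), can be recovered with a one-line root solve; the function names below are ours.

```python
import numpy as np
from scipy.optimize import brentq

def s_mad(a):
    """Non-radiative MAD spin-up fit quoted in the text."""
    return 0.45 - 12.53*a - 7.80*a**2 + 9.44*a**3 + 5.71*a**4 - 4.03*a**5

a_eq = brentq(s_mad, -0.5, 0.5)       # sign change brackets the equilibrium root
print(f"s_MAD(0)   = {s_mad(0.0):+.2f}")   # mild spin-up at zero spin
print(f"s_MAD(0.9) = {s_mad(0.9):+.2f}")   # strong spin-down at high prograde spin
print(f"equilibrium spin a_* = {a_eq:.3f}")  # ~0.035
```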
<cit.> built a semi-analytic model to understand spin evolution in non-radiative MAD systems based on the spin evolution equations appropriate for a disk-plus-jet system introduced in <cit.>. In this model, the spinup parameter is explicitly split into hydrodynamic spinup by the accretion disk gas and spindown via a jet powered by the BZ mechanism. The spinup parameter is then expressed as s = s_HD + s_EM, where s_HD = l_HD - 2 a_* e_HD, and s_EM = sign(a_*) η_EM ( 1/(k Ω_H) - 2 a_* ). We detail the calculation and modeling of s_HD from l_HD (the hydrodynamic specific angular momentum flux) and e_HD (the hydrodynamic specific energy flux) in <ref>. As explained there, we develop a fitting function for s_HD given by <ref> that smoothly interpolates between the thin disk solution as f_Edd→ 0 and non-radiative GRMHD results as f_Edd→∞. Meanwhile, the electromagnetic component s_EM depends on η_EM and the parameter k, which is the ratio of the angular frequency of the field lines relative to that of the BH. We estimate η_EM as a function of a_* and f_Edd by combining <ref> and <ref>. For k, we adopt the following fit from the non-radiative GRMHD simulations of <cit.>: k(a_*) = 0.23 for a_* < 0, and k(a_*) = min(0.1 + 0.5 a_*, 0.35) for a_* > 0. This gives k slightly less than the <cit.> monopole value of 0.5, which broadly agrees with other simulations in the literature <cit.>. As one final modification to allow our model to support hot accretion flows, we make the following adjustment: s = s_HD + s_EM for f_Edd > f_c, and s = s_MAD for f_Edd ≤ f_c, where f_c is a critical Eddington ratio below which the accretion flow should transition to the radiatively inefficient hot accretion mode <cit.>. Following previous efforts to model the evolution of black hole populations, we adopt f_c = 3 × 10^-2 <cit.>. The exact Eddington ratio at which this transition occurs is poorly constrained, and the transition is unlikely to be sharp <cit.>. Different values of f_c may be adopted without qualitatively changing our formulae. Our final result for the spinup parameter s (<ref>) can thus be obtained from just two parameters (a_* and f_Edd) by inserting our fitting functions for ϕ(a_*,f_Edd) (<ref>), s_HD(a_*,f_Edd) (<ref>), and η_EM(a_*,ϕ) (<ref>). As constructed, <ref> can be applied to all physical values of a_* ∈ [-1,1] and f_Edd∈ (0,∞). The model predictions from <ref> are shown in the bottom panel of <ref>. The model captures the behavior seen in the simulations (upper panel) exceptionally well, especially for spinning BHs. For a_*=0, it underestimates the evolution of s with f_Edd. We speculate that this may be due to the exclusion of angular momentum loss due to a hydrodynamic wind, evident in <ref>. In light blue, we plot the model's prediction for s when f_Edd=1. It is quite similar to the thin disk solution, but has a root, which corresponds to an equilibrium value of a_* for fixed f_Edd, at a_*,eq ≈ 0.8 instead of 1. In red, we plot the limit as f_Edd→∞. It follows the non-radiative GRMHD fitting function well, with minor deviations in the retrograde regime. This curve exhibits two kinks originating from the piece-wise nature of <ref>. As f_Edd→ f_c, s is well approximated by the thin disk solution (dotted black line) by construction. In any case, the key result from the red line is that, as f_Edd→∞, the equilibrium spin (where s=0) approaches a_*,eq ≈ 0. In <ref>, we plot the equilibrium spin a_*,eq as a function of Eddington ratio, found by taking <ref> and solving the condition s=0 at fixed f_Edd.
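A sketch of the electromagnetic term is given below. The magnetization and efficiency fits are re-implemented from the previous section, and the function names are ours. Note that the overall sign of s_EM in the code is chosen so that the BZ torque drives the spin toward zero (s_EM < 0 for prograde spin, s_EM > 0 for retrograde spin), which is the spin-down behavior described in the text; the sign convention may be arranged differently in the printed equation.

```python
import numpy as np

def phi_mad(a):
    return 52.6 + 34.0*a - 14.9*a**2 - 20.2*a**3

def phi(a, f_edd, alpha=1.29, f_c=1.88):
    x = (f_edd / f_c)**alpha
    return phi_mad(a) * x / (1.0 + x)

def omega_h(a):
    return np.abs(a) / (2.0 * (1.0 + np.sqrt(1.0 - a**2)))

def eta_em(a, f_edd, kappa=0.05):
    w = omega_h(a)
    return kappa / (4*np.pi) * phi(a, f_edd)**2 * w**2 * (1 + 1.38*w**2 - 9.2*w**4)

def k_ratio(a):
    """Ratio of field-line to horizon angular frequency (fit quoted in the text)."""
    return 0.23 if a < 0 else min(0.1 + 0.5*a, 0.35)

def s_em(a, f_edd):
    """Electromagnetic (BZ) term; sign chosen so that the torque pushes a_* toward 0."""
    if a == 0.0:
        return 0.0
    return -np.sign(a) * eta_em(a, f_edd) * (1.0 / (k_ratio(a) * omega_h(a)) - 2.0 * a)

if __name__ == "__main__":
    for a in (-0.9, 0.68, 0.9, 0.97):
        print(f"a_* = {a:+.2f}:  s_EM(f_Edd=40) = {s_em(a, 40.0):+7.2f}")
```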
We demarcate three different physical regimes: (i) hot accretion for f_Edd < f_c, (ii) what is classically modeled as a thin disk for f_c < f_Edd < 1, and (iii) super-Eddington accretion for f_Edd >1. In reality, s and a_*,eq should evolve more gradually around f_ Edd≈ f_c, but we lack a detailed understanding of this transition and are unable to model it more realistically in this work. Our model permits the existence of BHs with a stable a_*, eq≈ 1 for Eddington ratios in the range f_ Edd∼ 0.03 - 0.3, but a_*,eq begins to decline above f_Edd≈ 0.3 and approaches 0 as the accretion rate becomes highly super-Eddington. The limiting equilibrium spin for extremely large values of f_Edd is a_*=0.035, as in the hot accretion regime <cit.>, but note that this exact value is not very accurate and depends on the details of how spin-down is modeled. On the upper x-axis, we plot the evolutionary timescale of both mass and spin for a given f_Edd, given by t_Sal/f_Edd where t_Sal = ϵσ_T c/4 π G m_p = ϵ× 450 Myr is called the Salpeter timescale, where σ_T is the Thomson cross-section and m_p is the proton mass. For the convenience of defining a spin-independent t_Sal, we adopt a fiducial value of ϵ=0.1 for its definition, such that t_Sal = 45 Myr. Since mass and spin evolve on the same time-scale, a BH must accrete a significant fraction of its own mass to reach equilibrium spin[However, note that s measures the ratio of the spin evolution rate to the mass evolution rate. Hence for values of |s| approaching 10, spin evolves 10 times faster than mass.]. In the hot accretion regime, this would occur on timescales easily exceeding the age of the universe, and thus such BHs will not naturally reach the equilibrium spin value through the BZ process <cit.>. However, BHs which accrete continuously near or above the Eddington limit can reach their equilibrium spins in less than (sometimes very much less than) a Hubble time. Interestingly, such continuous and rapid assembly is invoked to explain the existence of massive quasars at z ≳ 6 <cit.>, which have accumulated masses up to 10^10 M_⊙ when the Universe was approximately 1 Gyr old. § DISCUSSION AND CONCLUSIONS In this letter we presented a suite of GRRMHD simulations of radiative MAD accretion disks around BHs. The simulations cover a range of BH spins a_* from +0.97 to -0.9, and Eddington ratios f_ Edd from 0.4 to 40. We find two key qualitative results. First, radiative disks in the MAD state around spinning BHs produce powerful jets as efficiently as the better-studied non-radiative disks (which are found in systems with f_ Edd≪ 1), and the power in the jet comes similarly from the BZ mechanism (see the right panel of <ref>). Second, the saturated magnetic flux ϕ depends not only on the BH spin (as already known for non-radiative MAD models) but also on the Eddington ratio (see the left panel of <ref>). As a result, radiative disks with f_ Edd≲ 0.3 behave roughly like the standard thin accretion disk model, but systems with f_ Edd≫ 1 are very different and closely resemble non-radiative models (see <ref>). In particular, when f_ Edd≫1, the accreting BHs spin-down rapidly toward an equilibrium a_*≈ 0. At a quantitative level, using the above suite of MAD GRRMHD simulations we have devised fitting functions which can be used to estimate magnetization ϕ (<ref>), jet feedback efficiency η (<ref>), and spin evolution s (<ref>), as a function of spin and Eddington ratio. 
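The evolutionary timescale quoted on the upper axis of <ref> is easy to verify numerically; the sketch below (CGS constants, our own helper name) reproduces t_Sal ≈ 45 Myr for ϵ = 0.1 and the t_Sal/f_Edd scaling.

```python
import numpy as np

SIGMA_T = 6.652e-25   # Thomson cross-section, cm^2
C       = 2.998e10    # speed of light, cm/s
G       = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
M_P     = 1.673e-24   # proton mass, g
MYR     = 3.156e13    # seconds per Myr

def t_salpeter(eps=0.1):
    """Salpeter time eps * sigma_T * c / (4 pi G m_p) in Myr (45 Myr for eps = 0.1)."""
    return eps * SIGMA_T * C / (4.0 * np.pi * G * M_P) / MYR

for f_edd in (0.03, 1.0, 20.0):
    print(f"f_Edd = {f_edd:5.2f}: mass/spin evolution timescale ~ {t_salpeter()/f_edd:8.1f} Myr")
```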
Spindown via the BZ mechanism grows more efficient as Eddington ratio increases, but is already noticeable at f_Edd≈ 1, where the equilibrium spin is a_*=0.8. This has important implications for feedback and spin-evolution of BHs in the near-Eddington to super-Eddington regime, such as flux-limited samples of AGN, rapidly assembling seeds in the early universe, and collapsar BHs. In <ref>, we plot evolutionary tracks for a selection of cosmologically motivated scenarios, each of which results in a BH with M ≈ 10^9 M_⊙. In each case, we have integrated <ref> using a standard Runge-Kutta-Fehlberg 4(5) integrator with adaptive step-sizing. For these examples, we make an important assumption that the accretion disk and BH angular momentum axes are always perfectly aligned, which need not generally be the case. Variations in disk tilt over cosmic time are an uncertainty that can lead to substantial differences in spin evolution, leading to lower spins if the angular momenta of material is more randomized <cit.>. In the left column of <ref>, we plot evolutionary scenarios with different fixed f_Edd values shown as different colors. For f_Edd=20,  1,  0.1,  0.01, we initialize our BHs with M=10,  10^7,  3×10^8,  10^9 M_⊙ and a_*=0,  0,  0,  0.998, respectively. In all cases, 1 Gyr is enough for each of the BHs to approach their equilibrium spin (see <ref>). These scenarios result in very different spin evolution and feedback as a function of time. Both the f_Edd=20 and the f_Edd=1 scenarios result in the accretion of 10^9 M_⊙ of material, but the f_Edd=20 scenario releases a total of 7.8 × 10^53 erg worth of feedback compared to 5.3 × 10^54 erg in the f_Edd=1 scenario, a factor of 7 difference. The reason is that the f_ Edd=20 model reaches a lower equilibrium spin, which results in less efficient jet feedback. A consequence of this interesting result is that a BH could potentially grow more efficiently in a super-Eddington state before having its mass supply cut off by excessive jet feedback. We have assumed a sharp transition between thin and thick accretion flows at an Eddington ratio of f_c = 3×10^-2. Evolving in the thin disk regime, the f_Edd=0.1 model spins up to maximal spin and cannot power a very efficient jet, since lower Eddington ratio sources maintain weaker magnetization. On the other hand, the f_Edd=0.01 model evolves in the hot accretion flow regime and spins down to near zero spin. In the right column of <ref>, we plot two different fueling-limited scenarios. In the “Constant Ṁ” model, we envision that a galaxy provides constant Ṁ that the BH can consume, regardless of the f_Edd implied. In this model, we suggestively tune our parameters to match the formation of the <cit.> quasar, which is observed with f_Edd=0.67 and M=1.6× 10^9 M_⊙ at z=7.642, when the Universe was only 670 Myr old. After being initialized at 10^4 M_⊙ and a_*=0, the BH accumulates mass in the super-Eddington regime as spindown from the BZ mechanism keeps its spin low. Its spin increases only as f_Edd→ 1, and it reaches an equilibrium spin of 0.9. Qualitatively consistent with our predictions for a powerful jet, <cit.> report a relativistic outflow while also suggesting greater incidence of such powerful outflows at high redshift. In the second “Power-Law Ṁ” model, a 10^5 M_⊙ a_*=0 seed initially accretes at f_Edd=15,000, then the accretion rate declines as Ṁ∝ (1+(t/10^7 yr)^2)^-1, motivated by <cit.>. 
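To illustrate how such tracks can be integrated in practice, the sketch below evolves mass and spin at a fixed Eddington ratio with an adaptive Runge-Kutta integrator. For brevity it uses the non-radiative s_MAD fit as a stand-in for the full s(a_*, f_Edd) model, which is only a reasonable proxy in the strongly super-Eddington limit (the f_Edd = 20 track); the stopping criterion at 10^9 M_⊙ and all function names are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

T_EDD_MYR = 450.3  # sigma_T * c / (4 pi G m_p) expressed in Myr

def r_ms(a):
    z1 = 1 + (1 - a*a)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3*a*a + z1*z1)
    return 3 + z2 - np.sign(a) * np.sqrt((3 - z1) * (3 + z1 + 2*z2))

def eps_thin(a):
    """Thin-disk radiative efficiency, used to define Mdot_Edd."""
    return 1.0 - np.sqrt(1.0 - 2.0 / (3.0 * r_ms(a)))

def s_mad(a):
    """Non-radiative MAD spin-up fit; a stand-in for the full s(a_*, f_Edd) model."""
    return 0.45 - 12.53*a - 7.80*a**2 + 9.44*a**3 + 5.71*a**4 - 4.03*a**5

def rhs(t, y, f_edd):
    m, a = y
    mdot = f_edd * m / (eps_thin(a) * T_EDD_MYR)   # Msun per Myr
    return [mdot, s_mad(a) * mdot / m]             # dM/dt and da_*/dt = s Mdot/M

def reached_1e9(t, y, f_edd):                      # stop once M hits 10^9 Msun
    return y[0] - 1e9
reached_1e9.terminal = True

# f_Edd = 20 track: a 10 Msun seed with zero initial spin, evolved up to 1 Gyr
sol = solve_ivp(rhs, (0.0, 1000.0), [10.0, 0.0], args=(20.0,),
                method="RK45", rtol=1e-8, events=reached_1e9)
print(f"10^9 Msun reached after {sol.t[-1]:.1f} Myr with spin a_* = {sol.y[1, -1]:.3f}")
```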
Over the age of the Universe, this BH traverses all three accretion regimes, starting with a_* ≈ 0 while it is super-Eddington, rising to a_* ≈ 0.9 in the thin disk regime, then finally declining to a_* ≈ 0.5 in the hot accretion regime. It runs out of fuel before it can achieve the equilibrium spin ≈ 0 for its final f_ Edd. Ending with f_Edd∼ 10^-6 and M ∼ 10^9 M_⊙, this evolutionary track could represent the history of the most massive BHs resolvable on the sky, such as Event Horizon Telescope target Messier 87. <ref> illustrates how a BH's assembly history is imprinted on its final spin value, motivating observational spin constraints of supermassive BHs. For 0.01 ≲ f_Edd≲ 0.3, X-ray reflection spectroscopy has been most successful in accumulating large spin samples. The measured spin values tend to be highly skewed towards a_* ≈ 1 <cit.>, in agreement with the equilibrium spin of a thin accretion disk, as well as the equilibrium spin value suggested by the present work for that range of f_Edd. To complement these thin disk spin constraints, the next-generation Event Horizon Telescope aims to measure spins of dozens of supermassive BHs in the hot accretion (f_Edd≪ 1) regime <cit.>. Taking the “Power-Law Ṁ” model in <ref> as an example, we would predict typical spin values roughly half-way between 1 and 0 <cit.>. It would be interesting to see what future observations show. Unfortunately, there is no known direct probe of spin in the super-Eddington regime, where we predict equilibrium spins close to 0. Current probes of spin rely on the existence of a sharp transition in the dynamics of the accreting disk at the innermost stable circular orbit. Such a feature is expected to be present in geometrically thin disks (and is the basis of the X-ray reflection method), but it is washed out in geometrically thick disks such as are found for f_ Edd≫1 (e.g., this work). It is worth mentioning that in the present radiative MAD models, as well as others in the literature, roughly ∼ 60% of the jet power can be transformed into radiation at large radius <cit.>. This can occur because inverse Compton scattering can transform much of the kinetic energy of the jet fluid into highly beamed radiation. However, we refrain from providing radiative efficiencies from our simulations, because we find that numerical floors in the jet region can artificially inflate the total energy in the jet at large radii. Fortunately, this artificially injected energy simply outflows from the simulation box and does not affect the region of interest. The analytic formulae devised in this work can be applied to galactic or cosmological scale simulations, conveniently bridging the sub-Eddington and super-Eddington regimes. When placing these models in an astrophysical context, the most important caveat is the assumption that these systems are magnetically saturated in the MAD state. Event horizon scale polarimetric imaging the largest black holes on the sky do currently favor MAD models over their SANE counterparts <cit.>, and ab-initio simulations of gas and magnetic field transport onto Sgr A* can indeed naturally produce MAD states <cit.>, but this evidence pertains only to low-Eddington ratio BHs. Super-Eddington MAD disks can explain jetted tidal disruption events <cit.>, but these objects are only ∼1% of known TDEs and may not be representative of the typical super-Eddington disk. Future observational and theoretical developments to test the robustness of the MAD state would help validate the modeling performed here. 
Furthermore, our simulations are limited to M=10 M_⊙ and M=10^4 M_⊙, and <ref> hints at a possible trend with mass. We do not expect our results to be very sensitive to BH mass on physical grounds, but this should be verified in future work in the context of varying the metallicity as well. § ACKNOWLEDGMENTS This work was supported in part by NSF grants AST1816420 and OISE-1743747, and by the Black Hole Initiative at Harvard University, made possible through the support of grants from the Gordon and Betty Moore Foundation and the John Templeton Foundation. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the Moore or Templeton Foundations. koral <cit.>, Matplotlib <cit.>, SciPy <cit.>, NumPy <cit.> § DATA AVAILABILITY Most plotted values can be downloaded from data files that accompany this publication. In addition, we provide a Python script including the equations presented in this work, as well as the integrator that was used to produce <ref> and <ref>. § ADDITIONAL GRRMHD DETAILS Using the finite-difference method in a fixed, Kerr spacetime, koral solves the conservation equations: (ρ u^μ)_;μ = 0, (T^μ_ ν)_;μ = G_ν, (R^μ_ ν)_;μ = -G_ν, (nu^μ_R)_;μ = ṅ, where ρ is the gas density in the comoving fluid frame, u^μ are the components of the gas four-velocity as measured in the “lab frame”, T^μ_ ν is the MHD stress-energy tensor in the “lab frame”: T^μ_ ν = (ρ + u_g+ p_g + b^2)u^μ u_ν + (p_g + 12b^2)δ^μ_ ν - b^μ b_ν, R^μ_ ν is the stress-energy tensor of radiation, G_ν is the radiative four-force which describes the interaction between gas and radiation <cit.>, and n is the photon number density. Here u_g and p_g=(γ_g - 1)u_g are the internal energy and pressure of the gas in the comoving frame, and b^μ is the magnetic field four-vector which is evolved following the ideal MHD induction equation <cit.>. For fitting purposes, it is useful to write the MHD stress-energy tensor in terms of hydrodynamic (HD) and electromagnetic (EM) components T^μ_ ν, HD = (ρ + u_g+ p_g)u^μ u_ν + p_gδ^μ_ ν and T^μ_ ν, EM = b^2 u^μ u_ν + 12b^2δ^μ_ ν - b^μ b_ν. The radiative stress-energy tensor is obtained via the M1 closure scheme. We include a radiative viscosity term to better approximate the radiation field in the funnel region as in <cit.>. We include the effects of absorption, emission, and scattering via the electron scattering opacity (κ_es), free-free absorption opacity (κ_a), thermal synchrotron, and thermal Comptonization <cit.>. For the M=10 M_⊙ models, we also account for the bound-free absorption opacity (κ_bf) using the Sutherland Dopita model <cit.> assuming a solar metal abundance for the gas[The 10M_⊙ models are quite hot, with temperatures >10^7K, and so the precise details of the atomic opacity prescription or the choice of metallicity are unimportant.]. We exclude the bound-free absorption opacity for the M=10^4 M_⊙ simulations, because these models are primarily meant to represent rapidly-growing “heavy” BH seeds in the early universe that are assumed to form in metal-free halos devoid even of star formation <cit.>. We adapt modified Kerr-Schild coordinates with the inner radius of the simulation domain inside of the BH horizon. The uniformly spaced internal coordinates (x_1,x_2,x_3) are related to the Kerr-Schild spherical polar coordinates polar coordinates (r,ϑ,φ) by r = e^x_1, ϑ = [1 + (H_0π/2)tan(H_0π[-0.5 + (Y_1 + (-Y_1 + Y_2)(e^x_1/2)^P_0)(1 - 2x_2) + x_2])]π2, φ = x_3. 
The complicated form of the middle expression is designed such that (i) the minimum/maximum coordinate ϑ is radially dependent, and (ii) more cells are focused towards the midplane ϑ=π/2. We choose H_0=0.6 to add slightly more resolution in the midplane in order to better resolve the accretion disk. We also choose Y_1=0.0025, Y_2=0.025, and P_0=1.2 such that Y_2π<ϑ<(1-Y_2)π near the horizon but Y_1π<ϑ<(1-Y_1)π further away. This choice ultimately increases the minimum time step and decreases the computational cost of each simulation. The radial grid cells are spaced logarithmically, and we choose inner and outer radial bounds R_min<r_H and R_max=10^4 r_g. We specify R_min for each model in Table <ref>. We also use a wedge of π/2 in azimuth instead of the full 2π in order to minimize computational costs and set φ_min=-π/4 and φ_max=π/4. We choose outflow boundary conditions at both the inner and outer radial bounds, reflective boundary conditions at the top and bottom polar boundaries, and periodic boundary conditions in φ. In each simulation, we employ a resolution of N_r× N_ϑ× N_φ=256×192×24. The resolution in θ is especially important for GRRMHD (and also GRMHD) simulations. The θ resolution used in the present work is superior to most GRRMHD simulations in the literature. Our φ resolution is modest: 24 cells over a π/2 wedge, which corresponds to an effective resolution of 96 cells over 2π. This is a bit lower than 32 cells in the wedge, or 128 cells over 2π, used in <cit.>. However, it is superior to most other GRRMHD simulations reported in the literature, e.g., 64 cells over 2π in <cit.> and <cit.>, or even 32 cells over 2π used in other work. We ensure that the fastest growing mode of the magnetorotational instability (MRI, ) is adequately resolved within each simulation. For this we compute the quantities <cit.>, Q_ϑ = 2πΩ dx^ϑ|b^ϑ|√(4πρ), Q_φ = 2πΩ dx^φ|b^φ|√(4πρ), where dx^i (the grid cell size) and b^i (the magnetic field strength) are both evaluated in the orthonormal frame, Ω is the angular velocity, and ρ is the gas density. Q≥ 5 is sufficient to resolve the MRI. We weight Q by √(b^2ρ) and integrate over the disk (σ < 1). We then spatially average over r=10r_g-100r_g and temporally average over t=25000t_g-30000t_g. In our least resolved model, which has M=10M_⊙, a_*=0.97, and f_Edd=1.97 in <ref>, we find ⟨ Q_ϑ⟩=5 and ⟨ Q_φ⟩=47, which is sufficient to resolve MRI in the bulk of the disk. Q_ϑ and Q_φ increase with f_Edd since the disk becomes thicker; therefore, all of our models sufficiently resolve the MRI. We initialize each simulation with a torus of gas in hydrodynamic equilibrium following <cit.>. The density was fixed by the entropy constant 𝒦=63 and assuming Γ=4/3. The angular velocity at the equatorial plane was set to a constant fraction of ξ=0.975 of the Keplerian angular velocity outside radius R_1=30 r_g, and followed fixed angular momentum between R_in < r < R_1 with R_in=22 r_g being the inner edge of the torus. The angular momentum was kept constant along the von-Zeipel cylinders. We set the outer edge of the torus at r≈ 400 r_g. This method only gives the hydrodynamic quantities. To initialize the radiation, we split the total pressure given by the initial hydrodynamics solution into gas and radiation components by assuming local thermodynamic equilibrium (LTE). 
We assign the gas and radiation pressure by finding the LTE temperature given by p_tot=p_gas+p_rad=k_Bρ T + 13a c T^4, where p_tot is the sum of gas and radiation pressure given by the initial torus in pure hydrodynamics, p_gas is gas pressure, and p_rad is the radiation pressure. We thread the torus with a large scale poloidal magnetic field defined by the vector potential A_ϕ. We adopt a definition of A_ϕ which is a function of r and ϑ given by A_ϕ=q(r,ϑ)sin(F(r)-F(R_start)), where we define q(r,ϑ) = (u_g(r,ϑ)-u_g(R_chop,π/2)) - 0.2(u_g(r,π/2)-u_g(R_chop,π/2))0.8(u_g(r,π/2)-u_g(R_chop,π/2))sin(ϑ)^3, R_start < r < R_chop 0 , r > R_chop and F(r)=1λ(53r^0.6 + 54r^-0.4). We set each of the parameters R_start=1.25 R_in, R_chop=350 r_g, and λ=15. Note that q(r,ϑ) uses the midplane gas internal energy to scale the vector potential. Also note that the sin(F(r)-F(R_start)) term can vary the sign of A_ϕ across radius with a wavelength that varies with λ. Our parameter choices are designed to place a large poloidal field that does not vary in sign at all. We normalize the magnetic field strength by setting the pressure ratio β_max≡(2(p_gas+p_rad)/b^2)_max=20. From these initial conditions, the MAD state naturally develops as the magnetic field is advected towards the horizon in the accretion flow. We artificially increase the gas density in high magnetization, σ≡ b^2/ρ, regions in order to ensure the simulation remains numerically stable by limiting σ≤60. Each simulation is carefully inspected to ensure that its accretion rate, magnetic flux parameter, and radial inflow profiles are in steady state for the window considered for further analysis. See <ref> for the full list of simulations described in this work. § FLUX CALCULATIONS The mass accretion rate as a function of radius is computed as Ṁ(r) = -∫_ϑ∫_φ√(-g)ρ u^r dφ dϑ. As we discuss in <ref>, we model the hydrodynamic and electromagnetic parts of the spinup parameter separately, following the formalism of <cit.> and <cit.>. To that end, we compute the angular momentum flux normalized by the mass accretion rate in HD and EM components separately: l_HD(r) = -1/Ṁ(r)∫_ϑ∫_φ T^r_ φ, HD√(-g) dφ dϑ, l_EM(r) = -1/Ṁ(r)∫_ϑ∫_φ T^r_ φ, EM√(-g) dφ dϑ. We similarly obtain the energy flux normalized by the mass accretion rate in HD and EM components: e_HD(r) = -1/Ṁ(r)∫_ϑ∫_φ T^r_ t, HD√(-g) dφ dϑ, e_EM(r) = -1/Ṁ(r)∫_ϑ∫_φ T^r_ t, EM√(-g) dφ dϑ. Note that the choice of sign in each expression is such that we compute the flux of energy and angular momentum into the BH, both of which are positive. We are particularly interested in the total outflowing energy relative to the accreted rest mass energy. We characterize this numerically using the dimensionless MHD efficiency η_MHD(r) = 1 -[e_HD(r)+e_EM(r)]. For the hydrodynamic spinup component, we first obtain the specific angular momentum fluxes l_HD (<ref>) and specific energy fluxes e_HD (<ref>) from the fluid simulations at a radius of 5 r_g. We plot the values calculated directly from the GRRMHD simulations in the leftmost panel of <ref>. The dotted line represents the analytic solution for a thin disk, which we refer to s_thin. As expected, the models approach s_thin as f_Edd→ 0. Meanwhile, the dashed line represents the fit found for non-radiative GRMHD simulations from <cit.>, which we refer to as s_min. They reported e_HD≈ 0.86 and l_HD≈ 0.97 independent of spin, and thus s_min = 0.86 - 1.94a_*. As f_Edd increases, our simulations appear to move from s_thin towards s_min. 
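The LTE split of the initial pressure amounts to a one-dimensional root solve for the temperature. In the sketch below, the mean molecular weight and the use of the standard gas and radiation pressure expressions are our assumptions (the text only quotes the split schematically), and the sample density and pressure are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

K_B   = 1.381e-16   # Boltzmann constant, erg/K
M_U   = 1.661e-24   # atomic mass unit, g
A_RAD = 7.566e-15   # radiation constant, erg cm^-3 K^-4
MU    = 0.62        # mean molecular weight (our assumption, fully ionized gas)

def lte_temperature(p_tot, rho):
    """Solve p_tot = rho k_B T / (mu m_u) + a T^4 / 3 for T."""
    f = lambda T: rho * K_B * T / (MU * M_U) + A_RAD * T**4 / 3.0 - p_tot
    return brentq(f, 1.0, 1e12)

def split_pressure(p_tot, rho):
    T = lte_temperature(p_tot, rho)
    p_gas = rho * K_B * T / (MU * M_U)
    return T, p_gas, p_tot - p_gas

if __name__ == "__main__":
    T, p_gas, p_rad = split_pressure(p_tot=1e13, rho=1e-5)   # illustrative torus values
    print(f"T = {T:.3e} K, p_rad/p_tot = {p_rad/(p_gas + p_rad):.4f}")
```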
To build our model, we devise a fitting function that approaches s_thin as f_Edd→ 0, and s_min as f_Edd→∞. Thus, we fit for a single number to interpolate between these solutions, arriving at s_HD = s_thin + s_minξ/1+ξ with ξ = 0.017 f_Edd. The results of this fitting function are shown in the central column of <ref>, and residuals are shown in the rightmost column. Without modeling an additional spin dependence, this fitting function underestimates the rapidity with which the a_*=0 models transition from s_thin to s_min. We speculate that this may be due to the lack of consideration of angular momentum loss due to a hydrodynamic wind, evident in <ref>. For convenience, we reproduce the formulae to obtain s_thin here, following <cit.>. In units where G=c=M=1, e_thin = ( 1 - 2/3 r_ms)^1/2, and l_thin = 2/3√(3)[ 1 + 2(3 r_ms-2)^1/2], where r_ms is the radius of the marginally stable orbit, given by r_ms = 3 + Z_2 - sign(a_*)[(3-Z_1)(3+Z_1+2Z_2)]^1/2, for Z_1 = 1 + (1-a_*^2)^1/3[(1+a_*)^1/3+(1-a_*)^1/3] and Z_2 = (3a_*^2+Z_1^2)^1/2. Finally, s_thin = l_thin - 2 a_* e_thin. We also use the radiative efficiency of the thin disk model to define the Eddington ratio. The Eddington luminosity is the limiting luminosity above which radiation pressure exceeds gravitational pressure in a spherically symmetric system. It is given by L_Edd = 4 π G M m_p c/σ_T, where m_p is the proton mass and σ_T is the Thomson cross-section. Defining a radiative efficiency ϵ = L / Ṁ c^2 allows one to define the Eddington mass accretion rate, Ṁ_Edd = 4 π G M m_p/ϵσ_T c, Throughout this work, when defining the Eddington mass accretion rate, we assume the radiative efficiency of a thin disk, given by ϵ = 1 - e_thin = 1 - ( 1 - 2/3 r_ms)^1/2. Thus, our definition of Ṁ_Edd depends on both mass and spin. § PRESSURE SCALE HEIGHT To gain greater insight into the link between magnetic flux and Eddington ratio presented in <ref>, we explore the pressure scale heights of our simulations. We define the pressure scale height to be h/r = ∫∫ (P_gas + P_rad)|π/2-θ| √(-g) dθ dϕ/∫∫ (P_gas+P_rad) √(-g) dθ dϕ, where P_gas is the gas pressure and P_rad is the radiation pressure (which dominates). Here, P_gas + P_rad has taken the place of ρ in the usual definition of the scale height. In <ref>, we plot the pressure scale height at a radius of 10 r_g as a function of Eddington ratio in our simulations, and color code by spin. In grey, we plot a linear regression to these data, from which we obtain h/r = 0.21 + 0.046log_10f_Edd. This increase in pressure scale height as a function of Eddington ratio suggests that a higher Eddington ratio results in more pressure, mostly due to radiation, that can drive the gas to confine stronger magnetic fields onto the horizon.
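For convenience, the thin-disk formulae and the fits collected in this appendix can be wrapped in a few small helpers (our own naming; the printed numbers are simple sanity checks of values quoted in the text).

```python
import numpy as np

def r_ms(a):
    """Radius of the marginally stable orbit in units of G M / c^2."""
    z1 = 1 + (1 - a*a)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3*a*a + z1*z1)
    return 3 + z2 - np.sign(a) * np.sqrt((3 - z1) * (3 + z1 + 2*z2))

def e_thin(a):
    return np.sqrt(1.0 - 2.0 / (3.0 * r_ms(a)))

def l_thin(a):
    return 2.0 / (3.0 * np.sqrt(3.0)) * (1.0 + 2.0 * np.sqrt(3.0 * r_ms(a) - 2.0))

def s_thin(a):
    return l_thin(a) - 2.0 * a * e_thin(a)

def s_hd(a, f_edd):
    """Hydrodynamic spin-up: interpolate s_thin -> s_min with xi = 0.017 f_Edd."""
    xi = 0.017 * f_edd
    s_min = 0.86 - 1.94 * a
    return (s_thin(a) + s_min * xi) / (1.0 + xi)

def mdot_edd(m_msun, a):
    """Eddington accretion rate in g/s for the spin-dependent thin-disk efficiency."""
    G, c, m_p, sigma_T, msun = 6.674e-8, 2.998e10, 1.673e-24, 6.652e-25, 1.989e33
    eps = 1.0 - e_thin(a)
    return 4.0 * np.pi * G * m_msun * msun * m_p / (eps * sigma_T * c)

def scale_height(f_edd):
    """Pressure scale height fit h/r = 0.21 + 0.046 log10(f_Edd)."""
    return 0.21 + 0.046 * np.log10(f_edd)

if __name__ == "__main__":
    print(f"eps_thin(a=0)      = {1 - e_thin(0.0):.4f}")   # ~0.057
    print(f"s_thin(a=0)        = {s_thin(0.0):.3f}")       # ~3.46
    print(f"Mdot_Edd(10, 0)    = {mdot_edd(10.0, 0.0):.3e} g/s")
    print(f"h/r at f_Edd = 10  = {scale_height(10.0):.3f}")
```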
http://arxiv.org/abs/2307.04154v1
20230709113104
Well posedness of fluid/solid mixture models for biofilm spread
[ "Ana Carpio", "Gema Duro" ]
math.AP
[ "math.AP", "cs.NA", "math-ph", "math.MP", "math.NA" ]
Well posedness of fluid-solid mixture models for biofilm spread Ana Carpio (Universidad Complutense de Madrid), Gema Duro (Universidad Autónoma de Madrid) August 12, 2023 ================================================================================================= Abstract Two-phase solid-fluid mixture models are ubiquitous in biological applications. For instance, models for growth of tissues and biofilms combine time dependent and quasi-stationary boundary value problems set in domains whose boundary moves in response to variations in the mechano-chemical variables. For a model of biofilm spread, we show how to obtain better posed models by characterizing the time derivatives of relevant quasi-stationary magnitudes in terms of additional boundary value problems. We also give conditions for well posedness of time dependent submodels set in moving domains depending on the motion of the boundary. After constructing solutions for transport, diffusion and elliptic submodels for volume fractions, displacements, velocities, pressures and concentrations with the required regularity, we are able to handle the full model of biofilm spread in moving domains assuming we know the dynamics of the boundary. These techniques are general and can be applied in models with a similar structure arising in biological and chemical engineering applications. Keywords. Fluid-solid mixture models, thin film approximations, evolution equations in moving domains, quasi-stationary approximations, stationary transport equations. § INTRODUCTION Biofilms are bacterial aggregates that adhere to moist surfaces. Bacteria are encased in a self-produced polymeric matrix <cit.> which shelters them from chemical and mechanical aggressions. Biofilms formed on medical equipment, such as implants and catheters, are responsible for hospital-acquired infections <cit.>. In industrial environments, they cause substantial economic and technical problems, associated with food poisoning, biofouling, biocorrosion, contaminated ventilation systems, and so on <cit.>. Modeling biofilm spread is important to be able to eradicate them. We describe biofilms here in terms of solid-fluid mixtures, see Figure <ref>. At each point 𝐱 of the biofilm we have a solid volume fraction of biomass ϕ_s(𝐱,t) (cell biomass, polymeric threads) and a volume fraction of water ϕ_f(𝐱,t) containing dissolved substances (nutrients, autoinducers and so on), in such a way that ϕ_s(𝐱,t) + ϕ_f(𝐱,t) = 1. The solid and fluid volume fractions move with velocities 𝐯_s and 𝐯_f, respectively. Biofilm spread on an air/solid interface is governed by the following system of equations, see <cit.>. Assume a biofilm occupies a region Ω^t that varies with time. Figure <ref> represents schematic views of two-dimensional slices. The upper boundary Γ^t_+ separates the biofilm from an outer fluid, which can be a liquid or air. A lower boundary Γ^t_- separates the biofilm from the substratum it attaches to. The main variables satisfy a set of quasi-stationary equations div(𝐯_f ϕ_f) = - k_s c/(c + K_s) ϕ_s, div(k_h(ϕ_s) ∇(p - π(ϕ_s))) = div(𝐯_s), μΔ𝐮_s + (μ + λ) ∇(div(𝐮_s)) = ∇ p, -d Δ c + div(𝐯_f c) = - k_c c/(c + K_c) ϕ_s, constrained by the additional conditions ϕ_f 𝐯_f = - k_h(ϕ_s) ∇(p - π(ϕ_s)) + ϕ_f 𝐯_s, 𝐯_s = ∂𝐮_s/∂ t, ϕ_f + ϕ_s = 1, in the region occupied by the biofilm Ω^t, which varies with time.
In this quasi-static framework, the displacement vector 𝐮_s(𝐱,t) and the scalar pressure p(𝐱,t), volume fraction ϕ_s(𝐱,t) and concentration c(𝐱,t) fields depend on time through variations of the boundary Γ^t, which expands due to cell division and swelling. The positive functions k_h(ϕ_s) and π(ϕ_s) represent the permeability and the osmotic pressure. This system is subject to a set of boundary conditions: [ p - π = p_ext - π_ext, Γ^t= Γ^t_+∪Γ^t_-,; (σ̂(𝐮_s) - p 𝐈) 𝐧 = 𝐭_ext, ∂ c ∂𝐧 =0, Γ^t_+,; [1ex] 𝐮_s = 0, c = c_0, Γ^t_-, ] where 𝐧 is the outer unit normal and σ̂(𝐮_s)= λ Tr (ε(𝐮_s)) 𝐈 + 2 μ ε(𝐮_s), ε_ij(𝐮)= 1 2( ∂ u_i ∂ x_j + ∂ u_j ∂ x_i), i,j=1,…,n, n=2, 3, represent elastic stress and strain tensors. Boundary conditions for ϕ_f are required or not depending on the sign of 𝐯_f·𝐧 at the border. The displacement and velocity vectors have components 𝐮 = (u_1,…,u_n) and 𝐯 = (v_1,…,v_n), n= 2,3, respectively. All the parameters appearing in the model, k_s, K_s, k_c, K_c, μ, λ, d are positive constants. For ease of the reader, we have summarized the modeling in Appendix A. In some limits, the system can be reformulated as a poroelastic model <cit.>. The model is complemented with an equation for the dynamics of Γ^t, t>0. If we consider biofilms represented by the scheme in Figure <ref>(a), the contact points between biofilm, air and agar require specific additional information to avoid singularities. We will work with the geometry represented in Figure <ref>(b), that avoids this difficulty by introducing precursor layers <cit.>. Then, Γ^t_- is fixed. The upper boundary Γ^t_+ is parametrized by a height function h(x_1,x_2,t), which satisfies the equation <cit.> ∂ h ∂ t + ∂∂ x_1[ ∫_0^h (𝐯·𝐱̂_1) dx_3 ] + ∂∂ x_2[ ∫_0^h (𝐯·𝐱̂_2) dx_3 ] = 𝐯·𝐱̂_3|_0, where the composite velocity of the mixture 𝐯 = ϕ_f 𝐯_f + ϕ_s 𝐯_s has components 𝐯·𝐱̂_i = v_s,i - k_h(ϕ_s) ∂ (p-π) ∂ x_i, i=1,2,3. At present, only perturbation analyses and numerical studies are available for this type of models <cit.> in simple geometries. Asymptotic studies yield thin film type approximations for (<ref>)-(<ref>) assuming circular geometries and radial symmetry. Non standard lubrication equations for the height h are obtained, which admit families of self-similar solutions in radial geometries. However, the construction of reliable numerical solutions of the model in general experimental configurations faces difficulties due to the lack of well-posedness results. In this paper, we assume we know the dynamics of the upper boundary Γ_+^t, given by a smooth curve x_3 = h(x_1,x_2,t), and develop an existence and stability theory for the model equations. To simplify the analysis, we take k_h(ϕ_s) = k_h >0, k_h(ϕ_s)/ϕ_f = ξ_∞ >0 and π(ϕ_s) = Πϕ_s >0. In this quasi-stationary framework, the displacements 𝐮_s depend on time through the motion of the boundary. However, we lack equations for the velocities, other than the relation ∂𝐮_s ∂ t = 𝐯_s. In Section <ref> we obtain a system of equations characterizing the velocity: div(σ̂(𝐯_s)) = μΔ𝐯_s + (μ + λ) ∇ ( div(𝐯_s)) = ∇ p_t, , 𝐯_s = 0, , σ̂(𝐯_s) 𝐧 = ∂𝐠∂ t + 𝐫(𝐠,𝐮_s), with g= - p 𝐧 = -( p_ ext - π_ ext)𝐧 and 𝐫 to be defined later. A similar equation is obtained for p_t from the equation for p. Taking the divergence of the equations for 𝐮_s and 𝐯_s we find additional equations to close the system de dt = k_h (2 μ + λ) Δ e - k_h ΠΔϕ_s, , de_t dt = k_h (2 μ + λ) Δ e_t - k_h ΠΔϕ_s,t, , where e = div (𝐮_s) and e_t = div (𝐯_s). 
We will neglect Δϕ_s,t in (<ref>) because Π and Δϕ_s are small compared to other terms. Notice that (<ref>) and (<ref>) are time dependent problems set in time dependent domains, while most results in the literature refer to fixed domains. The construction of solutions for such systems combines a number of difficulties that we will address in stages. Section <ref> characterizes the time derivatives of 𝐮_s and p, solutions of elliptic problems in time dependent domains, by means of additional boundary value problems. In this way we improve the stability of the model, since solving additional partial differential equations in each spatial domain is more effective than approximating time derivatives by quotients of differences of solutions calculated in variable spatial domains. Section <ref> establishes well posedness results for linear parabolic problems (<ref>) set in domains with moving boundaries for specific types of parametrizations. Section <ref> considers the elliptic and stationary transport problems involved in the quasi-stationary submodels, separately and in fixed domains, under hypotheses motivated by asymptotic studies and numerical solutions. Finally, section <ref> considers the full coupled time dependent problem and section <ref> discusses our conclusions and open issues. A final appendix summarizes modeling details. § DIFFERENTIATION OF QUASI-STATIONARY PROBLEMS In the previous section, we have defined the velocity 𝐯_s as the time derivative of the displacement 𝐮_s. The change in time of 𝐮_s is due to the motion of the upper boundary Γ^t_+, that is, time variations in h. In this section we seek an equation characterizing 𝐯_s. We expect 𝐯_s to solve the same boundary value problem as 𝐮_s, but differentiating all sources with respect to time. However, since the boundary Γ^t of Ω^t moves with time, we need to calculate the adequate boundary conditions too. In the region Ω^t occupied by the moving biofilm, the displacements 𝐮_s of the solid phase satisfy equations (<ref>) with boundary conditions (<ref>). To simplify later computations, it is convenient to recast these equations in the general linear elasticity framework. The components of the displacement u_j(t), j=1,…,n, n being the dimension, fulfill - ∂∂ x_α( c_j α m β∂ u_m(t) ∂ x_β) = f_j(t), j=1,…,n, , u_j(t) = 0, j=1,…,n, , c_j α m β∂ u_m(t) ∂ x_β n_α(t) =g_j(t), j=1,…,n, where 𝐧(t) is the outer unit normal vector and c_j α m β the elastic constants. Γ_n^t and Γ_d^t are parts of the boundary Γ^t where we enforce conditions on the stresses of the displacements, respectively. We use the Einstein summation convention that implies summation over a set of indexed terms in a formula when repeated in it. In the above equations, summation over α, β, m is implied, but not over j. The elastic constants c_jα m β for a isotropic solids like the ones we consider are c_j α m β=λδ_j αδ_m β+μ (δ_jmδ_αβ +δ_j βδ_α m) where δ_jm stands for the Kronecker delta, whereas λ and μ represent the Lamé constants. The stress tensor is σ_jα = c_j α m βε_mβ = λδ_j αε_pp + 2 με_jα. In this framework, the velocity 𝐯 is the `Frèchet derivative' or `domain derivative' of 𝐮 with respect to t <cit.>, which is characterized by the solution of a boundary value problem, as we show next. Theorem 2.1. We assume that the body 𝐟 and boundary 𝐠 forces are differentiable in time, with values in [L^2(Ω^t)]^n and [L^2(Γ^t)]^n, respectively, with t>0, n=2,3 being the dimension. Moreover, the C^2 boundaries Γ^t are obtained deforming Γ^0 along a smooth vector field ν. 
Then, the time derivative 𝐯(t)= ∂𝐮(t) ∂ t, t>0, of the displacement given by (<ref>) satisfies - ∂∂ x_α( c_j α m β∂ v_m(t) ∂ x_β) = ∂ f_j(t) ∂ t, j=1,..,n, 𝐱∈Ω^t, v_j(t) = 0, j=1,..,n, 𝐱∈Γ_d^t, c_j α m β∂ v_m(t) ∂ x_β n_α(t) = ∂ g_j(t) ∂ t + r_j(g_j(t),𝐮(t)), j=1,..,n, 𝐱∈Γ_n^t, where [ r_j =c_jα mβ∂ u_m(t)∂ x_β∂ν_q ∂ x_α n_q(t) + c_jα mβ∂ u_m(t) ∂ x_β∂ (ν_p n_α(t)) ∂ x_p; 5mm + c_j α m β∂ u_m(t) ∂ x_β∂ν_p ∂ x_p n_α(t) - g_j(t) 𝐧(t)^T ∇ν 𝐧(t), j=1,…,n. ] As a corollary, we get the expressions of interest for our model. Corollary 2.2. Under the previous hypotheses, the time derivative 𝐯_s(t), t>0, of the solution 𝐮_s of (<ref>) with boundary conditions (<ref>) satisfies div( σ̂(𝐯_s))= μΔ𝐯_s + (μ + λ) ∇ ( div(𝐯_s)) = ∇ p_t, 𝐱∈Ω^t, 𝐯_s = 0, 𝐱∈Γ_-^t, σ̂(𝐯_s) 𝐧 = ∂ g_j ∂ t + r_j(g_j,𝐮_s), j=1,2 𝐱∈Γ_+^t, with g= - p 𝐧 = -( p_ ext - π_ ext)𝐧 and 𝐫 is defined by (<ref>) with c_j α m β= λδ_j αδ_m β+μ(δ_jmδ_αβ +δ_j βδ_α m). In practice, our moving boundaries are given by parametrizations of the form x_3=h(x_1,x_2,t). Therefore, the field ν∼ (0,0,h_t(x_1,x_2,t)) and 𝐧 = (h_x_1(x_1,x_2,t), h_x_2(x_1,x_2,t), -1) √(h_x_1(x_1,x_2,t)^2 +h_x_2(x_1,x_2,t)^2 +1). Thus, r_j = λ∂ u_j ∂ x_j∂ν_3 ∂ x_j n_3 + μ( ∂ u_j ∂ x_α∂ν_3 ∂ x_α +∂ u_m ∂ x_m∂ν_3 ∂ x_j) n_3 - d g_j dt( n_1∂ν_3 ∂ν_1 + n_2∂ν_3 ∂ν_2) n_3. Corollary 2.3 Under the previous hypotheses, assuming k_k(ϕ_s)=k_h and π(ϕ_s) = Πϕ_s, the derivative p_t(t)= ∂ p(t) ∂ t, t>0, of the solution p of (<ref>) with Dirichlet boundary conditions p= p_ext(t) satisfies, [ k_h Δ p_t = div(𝐯_s,t) + k_h ΠΔϕ_s,t + , 𝐱∈Ω^t,; [1ex] p_t = p_ ext'(t), 𝐱∈Γ^t. ] Proof of Theorem 2.1. We will follow a similar variational approach to that employed in <cit.> for 2D exterior elasticity problems with zero Dirichlet boundary conditions on a moving boundary. We are going to calculate the derivative at t=0. Similar arguments hold for any t>0. Step 1: Variational formulation. First, we write the boundary value problem for 𝐮 in variational form <cit.>. The boundary value problem (<ref>) becomes: Find 𝐮^t ∈ [H^1_Γ_d^t(Ω^t)]^n such that b^t(Ω^t; 𝐮^t, 𝐰^t)= ℓ^t(Ω^t;𝐰^t), ∀ 𝐰^t ∈ [H^1_Γ_d^t(Ω^t)]^n, where b^t(Ω^t; 𝐮^t,𝐰^t) = ∫_Ω^t c_j α m β∂ u_m^t∂ x_β^t ∂w_j^t ∂ x_α^t d𝐱^t, ∀ 𝐮^t, 𝐰^t ∈ [H^1_Γ_d^t(Ω^t)]^n, ℓ^t(Ω^t; 𝐰^t)= ∫_Ω^t f_j(t) w_j^t d𝐱^t + ∫_Γ_n^t g_j(t) w_j^t d𝐒_𝐱^t, ∀ 𝐰^t ∈ [H^1_Γ_d^t(Ω^t)]^n. Here, H^1_Γ_d^t(Ω^t) denotes the usual Sobolev space of H^1(Ω^t) functions vanishing on Γ_d^t ⊂∂Ω^t. H^1(Ω^t) if formed by all functions whose square, and the squares of their derivatives, are integrable in Ω^t, that is, belong to L^2(Ω^t). When 𝐟(t) ∈ [L^2(Ω^t)]^n, 𝐠∈ [L^2(Γ^t)]^n and meas(Γ_d^t)≠ 0, this problem admits a unique solution 𝐮^t ∈ [H^1_Γ_d^t(Ω^t)]^n <cit.>, which in fact belongs to [H^2(Ω^t)]^n, vanishes on Γ_d^t and satisfies σ (𝐮^t) 𝐧 = 𝐠 on Γ_n^t= ∂Ω^t ∖Γ_d^t. For t=0, we have u^0. Here, σ_α j (𝐮^t) = c_j α m β∂ u_m^t ∂ x_β. Step 2: Change of variables. We now transform all the quantities appearing in (<ref>)-(<ref>) back to the initial configuration Ω^0. The process is similar to transforming deformed configurations back to a reference configuration in continuum mechanics <cit.>. We are assumig that the evolution of the moving part of the boundary Γ^t = {𝐱 + t ν(𝐱) | 𝐱∈Γ^0 } is given by a family of deformations 𝐱^t = ϕ^t(𝐱) = 𝐱 + t ν(𝐱) starting from a smooth surface Γ^0 ∈ C^2 (twice differentiable) and following a smooth vector field ν∈ C^2 (Ω), Ω^t ⊂Ω, t>0. 
The deformation gradient is the jacobian of the change of variables <cit.> 𝐉^t(𝐱) = ∇_𝐱ϕ^t(𝐱) = (∂ x^t_i ∂ x_j(𝐱) ) = 𝐈 + t ∇ν(𝐱), and its inverse (𝐉^t)^-1 = (∂ x_i ∂ x^t_j) is the jacobian of the inverse change of variables. Then, volume and surface elements are related by d 𝐱^t = det 𝐉^t(𝐱) d 𝐱, d S_𝐱^t = det 𝐉^t(𝐱) (𝐉^t(𝐱))^-T𝐧 dS_𝐱 and the chain rule for derivatives reads ∇_𝐱 u_m(𝐱^t(𝐱)) = (J^t(𝐱))^T ∇_𝐱^t u_m(𝐱^t(𝐱)), that is, ∇_𝐱^t u_m = (𝐉^t)^-T∇_𝐱 u_m. For each component we have ∂ u_m ∂ x_β^t(𝐱^t(𝐱)) = ∂ u_m ∂ x_k(𝐱^t(𝐱)) (J^t)^-1_kβ(𝐱). We define 𝐮̃(𝐱)= 𝐮^t ∘ϕ^t (𝐱) = 𝐮^t (𝐱^t(𝐱)), definition that extends to 𝐰̃ and other functions. Changing variables and using (<ref>)-(<ref>) we have: b^t(Ω^t; 𝐮^t,𝐰^t) = ∫_Ω^t c_j α m β∂ u_m^t∂ x_β^t (𝐱^t) ∂ w_j^t ∂ x_α^t (𝐱^t) d𝐱^t = ∫_Ω^0 c_j α m β∂ũ_m∂ x_p(𝐱) (J^t)^-1_p β(𝐱) ∂w̃_j ∂ x_q(𝐱) (J^t)^-1_q α(𝐱) det 𝐉^t(𝐱) d 𝐱 = b̃^t(Ω^0; 𝐮̃,𝐰̃) ℓ^t(Ω^t; 𝐰^t) = ∫_Ω^t f_j(𝐱^t,t) w_j^t(𝐱^t) d𝐱^t + ∫_Γ_n^t g_j(𝐱^t,t) w_j^t(𝐱^t) d S_𝐱^t = ∫_Ω^0 -2mm f̃_j(𝐱,t) w̃_j(𝐱) det 𝐉^t d 𝐱+∫_Γ_n^0 -2mm g̃_j(𝐱,t) w̃_j(𝐱) det 𝐉^t (𝐉^t)^-T𝐧 dS_𝐱 = ℓ̃^t(Ω^0; 𝐰̃). For arbitrary test functions 𝐰^t ∈ [H^1_Γ_d^t(Ω^t)]^n, 𝐰̃∈ [H^1_Γ_d^t(Ω^0)]^n is a test function in Ω^0. Therefore, we obtain the equivalent variational formulation: Find 𝐮̃∈ [H^1_Γ_d^t(Ω^0)]^n such that b̃^t(Ω^0; 𝐮̃, 𝐰)= ℓ̃^t(Ω^0;𝐰), ∀ 𝐰∈ [H^1_Γ_d^t(Ω^0)]^n, with b̃^t(Ω^0; 𝐮̃, 𝐰) and ℓ̃^t(Ω^0;𝐰) defined in (<ref>)-(<ref>) replacing 𝐰̃ by 𝐰. Let us analyze the dependence on t of the terms appearing in the expression for b̃^t and ℓ̃^t. From the definitions of the Jacobian matrices (<ref>) we obtain <cit.> det 𝐉^t(𝐱) = 1 + t div(ν(𝐱) ) + O(t^2), (𝐉^t)^-1(𝐱) = 𝐈 - t ∇ν(𝐱) + O(t^2), det 𝐉^t(𝐱) (𝐉^t(𝐱))^-T𝐧 = 1 + t div_Γ(ν(𝐱)) + O(t^2), where div_Γ(ν(𝐱)) = div(ν(𝐱)) - 𝐧^T ∇ν(𝐱) 𝐧. Inserting (<ref>)-(<ref>) in (<ref>) we find the following expansions. When p=β and q=α we get ∫_Ω^0 c_j α m β∂ũ_m∂ x_β∂ w_j ∂ x_α d 𝐱 + t ∫_Ω^0 c_j α m β∂ũ_m ∂ x_β∂ w_j ∂ x_α div(ν) d 𝐱 - t ∫_Ω^0 c_j α m β[ ∂ũ_m∂ x_β∂ν_β∂ x_β∂ w_j ∂ x_α + ∂ũ_m ∂ x_β∂ w_j ∂ x_α∂ν_α∂ x_α] d 𝐱 + O(t^2), whose leading term is b^0(Ω^0; 𝐮̃, 𝐰 ). When p≠β and q ≠α the summands are O(t^2). The remaining terms provide the contribution -t ∫_Ω^0 c_j α m β[ ∂ũ_m ∂ x_p∂ν_p∂ x_β∂ w_j ∂ x_α + ∂ũ_m ∂ x_β∂ w_j ∂ x_q∂ν_q∂ x_α] d 𝐱 + O(t^2), with p ≠β, q=α in the first one and q ≠α, p=β in the second one. Adding up the contributions we get b̃^t(Ω^0; 𝐮̃, 𝐰 ) = b^0(Ω^0; 𝐮̃, 𝐰 ) + t[I_1(𝐮̃)+I_2(𝐮̃)+I_3(𝐮̃)] +O(t^2), where [ I_1(𝐮̃) = ∫_Ω^0 c_j α m β∂ũ_m ∂ x_β∂ w_j ∂ x_α div(ν) d 𝐱,; I_2(𝐮̃) = - ∫_Ω^0 c_j α m β∂ũ_m ∂ x_p∂ν_p∂ x_β∂ w_j ∂ x_α d 𝐱,; I_3(𝐮̃) = - ∫_Ω^0 c_j α m β∂ũ_m ∂ x_β∂ w_j ∂ x_q ∂ν_q ∂ x_α d 𝐱 = ∫_Ω^0∂∂ x_α(c_j α m β∂ũ_m ∂ x_β) ∂ w_j ∂ x_q ν_q d 𝐱; + ∫_Ω^0 c_j α m β∂ũ_m∂ x_β∂^2 w_j ∂ x_α∂ x_qν_q d 𝐱 - ∫_∂Ω^0 c_j α m β∂ũ_m ∂ x_β n_α∂ w_j ∂ x_q ν_q d S_𝐱. ] Similarly, from the definition (<ref>) of the linear form ℓ̃^t and the definition of 'material derivative' 𝐟̇ 𝐟̃(𝐱,t) = 𝐟(𝐱^t(𝐱),t) = 𝐟(𝐱,0) + t 𝐟̇(𝐱,0) + O(t^2), we find the expansion [ ℓ̃^t(Ω^0; 𝐰 ) = ∫_Ω^0 f_j(0) w_j d 𝐱 + t ∫_Ω^0 [ f_j(0) div(ν) + ḟ_j(0) ] w_j d 𝐱; [1.5ex] + ∫_Γ_n^0 g_j(0) w_j d S_𝐱 + t ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱 + O(t^2) ] whose leading term is ℓ^0(Ω^0; 𝐰). Step 3. Variational problem for the domain derivative 𝐮'. Let us compare the transformed function 𝐮̃ and the solution 𝐮^0 of b^0(Ω^0;𝐮^0, 𝐰) = ℓ^0(Ω^0;𝐰). For any 𝐰∈ [H^1_Γ_d^t(Ω^0)]^n we have b^0(Ω^0;𝐮̃- 𝐮^0, 𝐰) = b^0(Ω^0;𝐮̃, 𝐰) - ℓ^0(Ω^0;𝐰) = b^0(Ω^0;𝐮̃, 𝐰) - b̃^t(Ω^0;𝐮̃, 𝐰) + ℓ̃^t(Ω^0;𝐰) - ℓ^0(Ω^0;𝐰). 
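The first-order expansions of det 𝐉^t and (𝐉^t)^{-1} used repeatedly above can be double-checked symbolically; the sketch below does so for an illustrative two-dimensional deformation field ν of our own choosing.

```python
import sympy as sp

t, x, y = sp.symbols("t x y", real=True)

# An illustrative smooth deformation field nu(x, y) in 2D (our choice)
nu = sp.Matrix([x**2 + y, sp.sin(x) * y])

grad_nu = nu.jacobian([x, y])          # (grad nu)_{ij} = d nu_i / d x_j
J = sp.eye(2) + t * grad_nu            # deformation gradient J^t = I + t grad(nu)

# det J^t = 1 + t div(nu) + O(t^2)
det_series = sp.series(J.det(), t, 0, 2).removeO()
div_nu = sp.diff(nu[0], x) + sp.diff(nu[1], y)
print(sp.simplify(det_series - (1 + t * div_nu)))            # prints 0

# (J^t)^{-1} = I - t grad(nu) + O(t^2)
inv_series = J.inv().applyfunc(lambda e: sp.series(e, t, 0, 2).removeO())
print(sp.simplify(inv_series - (sp.eye(2) - t * grad_nu)))   # prints the zero matrix
```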
Well posedness of the variational problems (<ref>) with respect to changes in domains Ω^t and sources 𝐟(t), 𝐠(t), implies uniform bounds on the solutions for t ∈ [0,T]: 𝐮^t_[H^1(Ω^t)]^n≤ C(T), 𝐮̃ _[H^1(Ω^0)]^n≤ C(T). Expansions (<ref>)-(<ref>) show that the right hand side in (<ref>) tends to zero as t → 0. Well posedness of the variational problem again implies 𝐮̃→𝐮^0 in [H^1_Γ_d^t(Ω^0)]^n as t→ 0. Dividing by t equation (<ref>) and using (<ref>)-(<ref>), we find [ b^0(Ω^0;𝐮̃- 𝐮^0 t, 𝐰) = 1 t [b^0(Ω^0;𝐮̃, 𝐰) - b̃^t(Ω^0;𝐮̃, 𝐰)] + 1 t [ ℓ̃^t(Ω^0;𝐰) -ℓ^0(Ω^0;𝐰)]; [1.5ex] = - [I_1(𝐮̃)+I_2(𝐮̃)+I_3(𝐮̃)]+ ∫_Ω^0 [ f_j(0) div(ν) + ḟ_j(0) ] w_j d 𝐱; [1.5ex] + ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱+ O(t). ] Then, the limit 𝐮̇ = lim_t → 0𝐮̃- 𝐮^0 t satisfies [ b^0(Ω^0;𝐮̇, 𝐰) = ∫_Ω^0 [ f_j(0) div(ν) + ḟ_j(0) ] w_j d 𝐱 - [I_1(𝐮^0)+I_2(𝐮^0)+I_3(𝐮^0)]; [1.5ex] + ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱. ] As before, the function 𝐮̇ is the so called `material derivative', that is, 𝐮̇ = ∂𝐮∂ t + ∇𝐮^0 ν. The domain derivative becomes 𝐮' = 𝐮̇ - ∇𝐮^0 ν. Then, b^0(Ω^0; 𝐮', 𝐰) = b^0(Ω^0;𝐮̇, 𝐰) - b^0(Ω^0;∇𝐮^0 ν, 𝐰), where b^0(Ω^0;∇𝐮^0 ν, 𝐰) = ∫_Ω^0∂∂ x_β( c_j α m β∂ u_m^0∂ x_p ν_p) ∂ w_j ∂ x_α d 𝐱. Notice that this function vanishes on Γ_d whenever 𝐮̇ and ν do so. Step 4. Differential equation for the domain derivative 𝐮'. We evaluate the different terms in the right hand side of (<ref>) to calculate the right hand side in (<ref>). First, notice that -∂∂ x_α(c_j α m β∂ u_m^0 ∂ x_β) = f_j(0) in Ω^0 and c_j α m β∂ u_m^0 ∂ x_β n_α= g_j(0) on Γ_n^0, u_j^0=0 on Γ_d^0, j=1,...,n, imply: [ I_3(𝐮^0) = ∫_Ω^0( ∂ f_j(0) ∂ x_qν_q + f_j(0) ∂ν_q ∂ x_q) w_j d 𝐱 - ∫_∂Ω^0 f_j(0) w_j n_q ν_q d 𝐱; [1.5ex] - ∫_Γ_n^0 g_j(0) ∂ w_j ∂ x_q ν_q d S_𝐱 + ∫_Ω^0 c_j α m β∂ u_m^0∂ x_β∂^2 w_j ∂ x_q ∂ x_αν_q d 𝐱. ] Using ∂ u_m^0∂ x_p∂ν_p ∂ x_β = ∂∂ x_β(∂ u_m^0 ∂ x_p ν_p ) - ∂^2 u_m^0∂ x_p ∂ x_βν_p, we get [ I_2(𝐮^0) = - b^0(Ω^0;∇𝐮^0 ν, 𝐰) - ∫_Ω^0 c_j α m β∂ u_m^0∂ x_β∂ν_p∂ x_p∂ w_j ∂ x_α d 𝐱; [1.5ex] - ∫_Ω^0 c_j α m β∂ u_m^0∂ x_βν_p∂^2 w_j ∂ x_α∂ x_p d 𝐱 + ∫_∂Ω0 c_j α m β∂ u_m^0∂ x_βν_p n_p ∂ w_j ∂ x_α d 𝐱. ] As a result of the two previous identities [ - [I_1(𝐮^0)+I_2(𝐮^0)+I_3(𝐮^0)]= b^0(Ω^0;∇𝐮^0 ν, 𝐰) - ∫_∂Ω^0 c_j α m β∂ u_m^0∂ x_βν_p n_p ∂ w_j ∂ x_α d 𝐱; -∫_Ω^0( ∂ f_j(0) ∂ x_qν_q + f_j(0) ∂ν_q ∂ x_q) w_j d 𝐱 + ∫_∂Ω^0 f_j(0) w_j n_q ν_q d 𝐱 + ∫_Γ_n^0 g_j(0) ∂ w_j ∂ x_q ν_q d S_𝐱 ] and (<ref>) becomes [ b^0(Ω^0; 𝐮', 𝐰) = - ∫_∂Ω^0 c_j α m β∂ u_m^0∂ x_βν_p n_p ∂ w_j ∂ x_α d 𝐱 + ∫_∂Ω^0 f_j(0) w_j n_q ν_q d 𝐱 +; [1.5ex] ∫_Ω^0 f_j'(0) w_j d 𝐱 + ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱 + ∫_Γ_n^0 g_j(0) ∂ w_j ∂ x_q ν_q d S_𝐱. ] Integrating by parts in b^0(Ω^0; 𝐮', 𝐰) and choosing 𝐰 with compact support inside Ω^0, this identity yields the following equation for 𝐮' in Ω^0 - ∂∂ x_α(c_j α m β∂ u_m' ∂ x_β(𝐱) ) = f_j'(𝐱,0), j=1,...,n. However, to obtain a pointwise boundary condition for 𝐮' we need to rewrite the integral on ∂Ω^0 in such a way that no derivatives of the test function 𝐰 are involved. Step 5: Boundary condition for the domain derivative 𝐮'. 
We integrate by parts the original expressions of I_i(𝐮^0), i=1,2,3 to get [ I_1 = - ∫_Ω^0∂∂ x_α( c_j α m β∂ u_m^0∂ x_β div(ν) ) w_j d 𝐱 + ∫_∂Ω^0 c_j α m β∂ u_m^0 ∂ x_β div(ν) n_α w_j d S_𝐱, ] -5mm [ I_2 = - ∫_Ω^0 c_jα mβ∂∂ x_β(∂ u_m^0 ∂ x_pν_p ) ∂ w_j ∂ x_α d 𝐱 - ∫_Ω^0 c_jα mβ∂∂ x_α∂∂ x_p( ∂ u_m^0 ∂ x_βν_p ) w_j d 𝐱; [1.5ex] + ∫_Ω^0 c_jα mβ∂∂ x_α( ∂ u_m^0 ∂ x_β∂ν_p ∂ x_p) w_j d 𝐱 + ∫_∂Ω^0 c_jα mβ∂^2 u_m^0 ∂ x_p ∂ x_βν_p w_j n_α d 𝐱 ] -4mm [ I_3 = ∫_Ω^0∂∂ x_q∂∂ x_α( c_jα mβ∂ u_m^0 ∂ x_βν_q ) w_j d 𝐱 + ∫_Ω^0∂∂ x_q( f_j(0) ν_q ) w_j d 𝐱; [1.5ex] - ∫_∂Ω^0 c_jα mβ∂ u_m^0 ∂ x_β∂ν_q ∂ x_α n_q w_j d S_𝐱 ] b^0(Ω^0; ∇𝐮^0 ν, 𝐰) - ∫_Ω^0( ∂∂ x_q f_j(0) ν_q + f_j(0) ∂ν_q ∂ x_q) w_j d 𝐱 + ∫_∂Ω^0 c_jα mβ∂ u_m^0 ∂ x_β∂ν_q ∂ x_α n_q w_j d S_𝐱 - ∫_∂Ω^0 c_jα mβ∂^2 u_m^0 ∂ x_p ∂ x_βν_p w_j n_α d 𝐱 + ∫_∂Ω^0 c_j α m β∂ u_m^0 ∂ x_β div(ν) n_α w_j d S_𝐱. We integrate by parts b^0(Ω^0; 𝐮', 𝐰) to get - ∫_Ω^0∂∂ x_α( c_j α m β∂ u_m' ∂ x_β) w_j d 𝐱 + ∫_Γ_n^0 c_j α m β∂ u_m' ∂ x_β n_α w_j d S_𝐱. Adding up to compute -[I_1+I_2+I_3], integrating by parts b^0(Ω^0; 𝐮', 𝐰), inserting (<ref>) in (<ref>) and setting ν=0 on Γ_d we find [ ∫_Γ_n^0 c_j α m β∂ u_m' ∂ x_β n_α w_j d S_𝐱 = ∫_∂Ω^0 c_jα mβ∂ u_m^0 ∂ x_β∂ν_q ∂ x_α n_q w_j d S_𝐱; [1.5ex] - ∫_∂Ω^0 c_jα mβ [ ∂^2 u_m^0 ∂ x_p ∂ x_βν_p n_α + ∂ u_m^0 ∂ x_β div(ν) n_α] w_j d S_𝐱; [1.5ex] + ∫_Γ_n^0 [ g_j(0) div_Γ(ν) + ġ_j(0) ] w_j d S_𝐱. ] Now, using identifies - c_jα mβ∂^2 u_m^0 ∂ x_p ∂ x_βν_p n_α = - ∂∂ x_p (g_j(0) ν_p) + c_jα mβ∂ u_m^0 ∂ x_β∂ (ν_p n_α) ∂ x_p, and g_j(0) div_Γ(ν) + ġ_j(0) = ∂∂ x_p (g_j(0) ν_p) - g_j(0) 𝐧^T ∇ν 𝐧 + g'_j(0), we obtain [ c_j α m β∂ u_m' ∂ x_β n_α = c_jα mβ∂ u_m^0 ∂ x_β∂ν_q ∂ x_α n_q + c_jα mβ∂ u_m^0 ∂ x_β∂ (ν_p n_α) ∂ x_p; [1.5ex] + c_j α m β∂ u_m^0 ∂ x_β∂ν_p ∂ x_p n_α - g_j(0) 𝐧^T ∇ν 𝐧 + g'_j(0) ] on Γ_n^0. □ § STUDY OF DIFFUSION PROBLEMS IN TIME DEPENDENT DOMAINS We study here parabolic problems of the form [ e_t - κΔ e = f(𝐱, t), 𝐱∈Ω^t, t>0,; e = g(t), 𝐱∈Γ^t, t>0,; e(𝐱, 0) = e_0, 𝐱∈Ω^t. ] As in Section <ref> we assume that the evolution of the moving part of the boundary is given by a family of deformations <cit.> Γ^t = {𝐱 + t ν(𝐱) | 𝐱∈Γ^0 }, starting from a smooth surface Γ^0 ∈ C^2 (twice differentiable) and following a smooth vector field ν∈ C(Ω^0) ∪ C^2(Ω^0). We can assume e(t)=0 by making the change e = ê + g. Then ê solves (<ref>) with zero Dirichlet boundary condition, initial datum e_0(𝐱)-g(0) and right hand side f(𝐱, t)-g'(t). Therefore, we will work with zero Dirichlet boundary conditions in the sequel. To solve (<ref>) we will first refer it to a fixed domain and then construct converging Faedo-Galerkin approximations. §.§ Variational formulation in the undeformed configuration As usual, we denote as H^1_0(Ω^t) the subspace of H^1(Ω^t) formed by functions whose trace vanishes on Γ^t with the induced norm. Multiplying (<ref>) by w^t ∈ H^1_0(Ω^t) and integrating, we find [ [ ∫_Ω^t e_t(𝐱^t,t) w^t(𝐱^t) d 𝐱^t +∫_Ω^t∇_𝐱^t e(𝐱^t,t) ∇_𝐱^t w^t(𝐱^t) d 𝐱^t =∫_Ω^t f(𝐱^t,t) w^t(𝐱^t) d 𝐱^t ] ] for each t. We use (<ref>), (<ref>), (<ref>) to refer these integrals to a fixed domain. The jacobian of the change of variables is the deformation gradient 𝐉^t(𝐱) = ∇_𝐱ϕ^t(𝐱) = (∂ x^t_i ∂ x_j(𝐱) ) = 𝐈 + t ∇ν(𝐱), and its inverse (𝐉^t)^-1 = (∂ x_i ∂ x^t_j) is the jacobian of the inverse change of variables. Then, volume and surface elements are related by d 𝐱^t = det 𝐉^t(𝐱) d 𝐱, d S_𝐱^t = det 𝐉^t(𝐱) (𝐉^t(𝐱))^-T𝐧 dS_𝐱, and the chain rule for derivatives reads ∇_𝐱 e(𝐱^t(𝐱)) = (J^t(𝐱))^T ∇_𝐱^t e(𝐱^t(𝐱)), that is, ∇_𝐱^t e = (𝐉^t)^-T∇_𝐱 e. 
For each component we have ∂ e ∂ x_α^t(𝐱^t(𝐱)) = ∂ e ∂ x_k(𝐱^t(𝐱)) (J^t)^-1_kα(𝐱). Changing variables we have: [ ∫_Ω^t∇_𝐱 e(𝐱^t(𝐱))^T ∇_𝐱 w(𝐱^t(𝐱)) d𝐱^t =; [2ex] ∫_Ω^0∇_𝐱 e^T (𝐉^t)^-T (𝐉^t)^-T∇_𝐱 e det 𝐉^t(𝐱) d 𝐱, ] where we assume the repeated index summing rule, that is, sum over repeated indices is intended. We define w̃(𝐱)= w^t ∘ϕ^t (𝐱) = w^t (𝐱^t(𝐱)), ϕ^t as in (<ref>). Notice that [ e_t(𝐱^t(𝐱),t) = d dt[e(𝐱^t(𝐱),t)] - ∇_𝐱^t e(𝐱^t(𝐱),t)^T d 𝐱^t dt; = d dtẽ(𝐱,t) - (𝐉^t)^-T∇_𝐱ẽ(𝐱,t)^T ν̃(𝐱). ] After changing variables, problem (<ref>) reads: Find e ∈ C([0,T],L^2(Ω^0)) ∩ L^2(0,T;H^1_0(Ω^0)) such that e(𝐱, 0) = e_0(𝐱) and [ ∫_Ω^0ẽ_t(𝐱,t) w̃(𝐱) det 𝐉^t(𝐱) d 𝐱 - ∫_Ω^0∇_𝐱ẽ(𝐱,t)^T (𝐉^t(𝐱))^-1ν̃(𝐱) w̃(𝐱) det 𝐉^t(𝐱) d 𝐱; [2ex] + ∫_Ω^0∇_𝐱 e(𝐱,t)^T ((𝐉^t(𝐱))^T 𝐉^t(𝐱))^-1∇_𝐱 e(𝐱,t) w̃(𝐱) det 𝐉^t(𝐱) d 𝐱; [2ex] = ∫_Ω^0f̃(𝐱,t) w̃(𝐱) det 𝐉^t(𝐱) d 𝐱. ] Since w^t ∈ H^1_0(Ω^t), we have w̃∈ H^1_0(Ω^0). In fact, we can take the same arbitrary function w ∈ H^1_0(Ω^0) for all t. §.§ Construction of stable solutions Consider a basis {ϕ_1, ϕ_2, …, ϕ_M …} of the Hilbert space L^2(Ω). We choose the normalized eigenfunctions ϕ_j ∈ H^2(Ω) ∩ H^1_0(Ω), j∈ℕ, of -Δ in H^1_0(Ω), see <cit.>. 1mm Theorem 3.1 Let Ω⊂ℝ^n be an open and bounded C^2 domain. Given a function f ∈ C^1([0,T]; L^2(Ω)) there exists a unique solution u ∈ C([0,T]; H^2(Ω)) ∩ H^1(0,T;H^1_0(Ω)) of [ ∫_Ω u_t(𝐱,t) w(𝐱) c(𝐱,t) d 𝐱 + ∫_Ω∇ u(𝐱,t)^T 𝐛(𝐱,t) w(𝐱) d 𝐱 +; [2ex] ∫_Ω∇ u(𝐱,t)^T 𝐀(𝐱,t) ∇ w(𝐱) d 𝐱 = ∫_Ω f(𝐱,t) w(𝐱) d 𝐱, ] for all w ∈ H^1_0(Ω), t ∈ [0,T], provided * 𝐀(𝐱,t) ∈ C^1(Ω× [0,T]), 𝐛(𝐱,t) ∈ C^1(Ω× [0,T]) and c(𝐱,t) ∈ C^2(Ω× [0,T]), * the matrices 𝐂^M(t) with elements ∫_Ω c(t) ϕ_m ϕ_k d 𝐱, m,k=1, …, M, are invertible for t ∈ [0,T], * the matrices 𝐀(𝐱, t) are uniformly coercive, that is, ξ^T 𝐀(𝐱, t) ξ≥ a_0 |ξ|^2, a_0>0, for all ξ∈ℝ^n, and the scalar field c(𝐱,t) is bounded from below, c(𝐱,t) ≥ c_0 >0, for all 𝐱∈ℝ^n and t>0, * u_0 ∈ L^2(Ω) and w_0 = div(𝐀(𝐱, 0) ∇ u_0(𝐱)) + 𝐛(𝐱, 0)^T ∇ u_0(𝐱) ∈ L^2(Ω). Moreover, the solution depends continuously on parameters and data. We obtain a solution for the original time dependent problem set in a moving domain undoing the change of variables. 1mm Proof. Existence. We use the Faedo-Galerkin method <cit.>. First, we change variables u(𝐱,t) = e^λ t v(𝐱,t), u_t(𝐱,t) = e^λ t [v_t(𝐱,t) + λ v(𝐱,t)], with λ >0 to be selected large enough. We obtain similar variational equations for v with an additional term λ c v and g and f multiplied by e^-λ t. Then we seek approximate solutions v^M(𝐱,t) = ∑_m=1^M α_m(t) ϕ_m(𝐱) such that [ ∫_Ω c(𝐱, t) v^M_t(𝐱,t) w(𝐱) d 𝐱 + ∫_Ω∑_p,q=1^n a_pq(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) ∂ w ∂ x_q(𝐱) d 𝐱; + ∫_Ωλ c(𝐱,t) v^M(𝐱,t) w(𝐱,t) d 𝐱 + ∫_Ω∑_p=1^n b_p(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) w(𝐱) d 𝐱; = ∫_Ω e^-λ t f(𝐱,t) w(𝐱) d 𝐱,; v^M(𝐱, 0) = ∑_m=1^M α_m(0) ϕ_m(𝐱), α_m(0) = ∫_Ω u_0(𝐱) ϕ_m(𝐱) d 𝐱, ] for all w ∈ V^M= span{ϕ_1, ϕ_2, …, ϕ_M}. We find a system of M differential equations for the coefficient functions α_m(t) setting w = ϕ_k, k=1,…,M, [ ∑_m=1^M α_m'(t) ∫_Ω c(t) ϕ_m ϕ_k d 𝐱 = - ∑_m=1^M α_m(t) ∫_Ω∑_p=1^n b_p(t) ϕ_m∂ x_pϕ_k d 𝐱 -; ∑_m=1^M α_m(t) ∫_Ω[ ∑_p,q=1^n a_pq(t) ∂ϕ_m ∂ x_p∂ϕ_k ∂ x_q + λ c(t) ϕ_m ϕ_k ] d 𝐱 + ∫_Ω e^-λ t f(t) ϕ_k d 𝐱. ] This can be written as a linear system with continuous and bounded coefficients in [0,T] d dtα^M = 𝐂^M(t)^-1𝐀^M(t) α^M + 𝐂^M(t)^-1𝐠^M(t) + 𝐂^M(t)^-1𝐟^M(t) with initial datum α^M(0), which admits a unique solution α^M(t), t ∈ [0,T] <cit.>. 
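A one-dimensional sketch of this Faedo-Galerkin construction, assuming illustrative coefficient fields c, a, b and source f on Ω=(0,1) (these stand-ins are not the pulled-back coefficients of the moving-domain problem), assembles the matrices of the linear system above by quadrature and integrates the resulting ODEs for the coefficients α_m(t):

```python
# 1-D Faedo-Galerkin sketch: expand v^M(x,t) = sum_m alpha_m(t) phi_m(x) in
# Laplacian eigenfunctions on (0,1) and integrate C(t) alpha' = F - (K+B) alpha.
# All coefficient fields below are illustrative stand-ins.
import numpy as np
from scipy.integrate import solve_ivp

M, nq = 8, 400                            # number of modes, quadrature points
x = np.linspace(0.0, 1.0, nq)
phi  = np.array([np.sqrt(2) * np.sin((m + 1) * np.pi * x) for m in range(M)])
dphi = np.array([np.sqrt(2) * (m + 1) * np.pi * np.cos((m + 1) * np.pi * x)
                 for m in range(M)])

c = lambda x, t: 1.0 + 0.2 * t * np.sin(np.pi * x)    # weight c(x,t)
a = lambda x, t: 1.0 + 0.1 * t * np.cos(np.pi * x)    # diffusion coefficient
b = lambda x, t: 0.3 * t * np.ones_like(x)            # transport coefficient
f = lambda x, t: np.exp(-t) * np.sin(np.pi * x)       # source

def matrices(t):
    w = lambda g: np.trapz(g, x)
    C = np.array([[w(c(x, t) * phi[m] * phi[k]) for m in range(M)] for k in range(M)])
    K = np.array([[w(a(x, t) * dphi[m] * dphi[k]) for m in range(M)] for k in range(M)])
    B = np.array([[w(b(x, t) * dphi[m] * phi[k]) for m in range(M)] for k in range(M)])
    F = np.array([w(f(x, t) * phi[k]) for k in range(M)])
    return C, K, B, F

def rhs(t, alpha):                        # C(t) alpha' = F(t) - (K(t)+B(t)) alpha
    C, K, B, F = matrices(t)
    return np.linalg.solve(C, F - (K + B) @ alpha)

u0 = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)      # initial datum
alpha0 = np.array([np.trapz(u0 * phi[k], x) for k in range(M)])
sol = solve_ivp(rhs, (0.0, 1.0), alpha0)
vM = phi.T @ sol.y[:, -1]                 # Galerkin approximation at t = 1
```

The a priori bounds established next are exactly the estimates that keep such approximations v^M under control uniformly in M.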
Multiplying identity (<ref>) by α_k and adding over k, we obtain [ 1 2d dt∫_Ω c(𝐱, t) |v^M(𝐱,t)|^2 d 𝐱 + ∫_Ω∑_p,q=1^n a_pq(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) ∂ v^M ∂ x_q(𝐱,t) d 𝐱; +∫_Ω -1mm (λ c - 1 2 c_t)(𝐱, t) |v^M(𝐱,t)|^2 d 𝐱 + ∫_Ω∑_p=1^n b_p(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) v^M(𝐱,t) d 𝐱; = ∫_Ω e^-λ t f(𝐱,t) v^M(𝐱,t) d 𝐱. ] Integrating in [0,T] and using coercivity, lower bounds for A and c, L^∞ bounds, as well as Young's inequality <cit.>, we find c_0 2∫_Ω |v^M(𝐱,t)|^2 d 𝐱 + a_0 2∫_0^t ∫_Ω |∇ v^M(𝐱,t)|^2 d 𝐱 ds + λ c_0 2∫_0^t ∫_Ω |v^M(𝐱,t)|^2 d 𝐱 ds ≤c _L^∞_xt 2 v^M(0)_L^2(Ω) + 1 2 f_L^2(0,T,L^2(Ω)) for λ large enough depending on a_0, c_t _L^∞_xt, c_0, 𝐛_L^∞_xt, n. Gronwall inequality, and the fact that v^M(0) → u_0 in L^2, imply that v^M is bounded in L^∞(0,T,L^2(Ω)) and L^2(0,T,H^1_0(Ω)). We extract a subsequence v^M' converging a limit v weakly star in L^∞(0,T,L^2(Ω)) and weakly in L^2(0,T,H^1_0(Ω)). Moreover, d dt∫_Ω c(t) v^M'(t) ϕ_k d 𝐱 tends to d dt∫_Ω c(t) v(t) ϕ_k d 𝐱 in the sense of distributions in D'(0,T) for any k. Similar convegences hold for u^M' and u= e^λ tv. We undo the change in (<ref>), multiply by a function ψ∈ C_c^∞([0,T)), integrate over t and pass to the limit as M' →∞ to find [ - ∫_Ω c(𝐱, 0) u(𝐱,0) w(𝐱) ψ(0) d 𝐱 - ∫_0^t ∫_Ω c_t(𝐱,t) u(𝐱,t) w(𝐱) ψ(t) d 𝐱 ds +; ∫_0^t ∫_Ω[ ∑_p,q=1^n a_pq(𝐱,t) ∂ v ∂ x_p(𝐱,t) ∂ w ∂ x_q(𝐱) + ∑_p=1^n b_p(𝐱,t) ∂ v^M ∂ x_p(𝐱,t) w(𝐱)] ψ(t) d 𝐱 ds; = ∫_0^t ∫_Ω e^-λ t f(𝐱,t) w(𝐱) ψ(t) d 𝐱 ds, ] for any w ∈ H^1_0(Ω), so that the limiting solution satisfies the condition on the initial data and the equation c u_t - div(𝐀∇ u) + 𝐛^T ∇ u = f in the sense of distributions <cit.>. Uniqueness. To prove uniqueness, we assume there are two solutions u_1 and u_2, and set u=u_1-u_2. We subtract the equations satisfied by both, multiply by u, set u=e^λ tv and integrate over Ω to get [ 1 2d dt∫_Ω c(𝐱, t) |v(𝐱,t)|^2 d 𝐱 + ∫_Ω∑_p,q=1^n a_pq(𝐱,t) ∂ v ∂ x_p(𝐱,t) ∂ v ∂ x_q(𝐱,t) d 𝐱; + ∫_Ω∑_p=1^n b_p(𝐱,t) ∂ v ∂ x_p(𝐱,t) v(𝐱,t) d 𝐱+∫_Ω -1mm (λ c -1 2 c_t)(𝐱, t) |v(𝐱,t)|^2 d 𝐱 = 0. ] Using uniform coercivity, the L^∞ bounds, and taking λ large enough, we see that ∫_Ω c(𝐱, t) |v(𝐱,t)|^2 ≤∫_Ω c(𝐱, 0) |v(𝐱,0)|^2 =0. Therefore, the solution is unique. Regularity. Next, we differentiate with respect to t to get [ d dt∫_Ω u_t(𝐱,t) w(𝐱) c(𝐱,t) d 𝐱 + ∫_Ω∇ u_t(𝐱,t)^T 𝐛(𝐱,t) w(𝐱) d 𝐱; [2ex] + ∫_Ω∇ u_t(𝐱,t)^T 𝐀(𝐱,t) ∇ w(𝐱) d 𝐱 + ∫_Ω u_t(𝐱,t) w(𝐱) c_t(𝐱,t) d 𝐱; [2ex] = ∫_Ω f_t(𝐱,t) w(𝐱) d 𝐱 - ∫_Ω u(𝐱,t) w(𝐱) c_tt(𝐱,t) d 𝐱; [2ex] + ∫_Ω∇ u(𝐱,t)^T 𝐛_t(𝐱,t) w(𝐱) d 𝐱 + ∫_Ω∇ u(𝐱,t)^T 𝐀_t(𝐱,t) ∇ w(𝐱) d 𝐱, ] with u_t(𝐱, 0) = w_0(𝐱). The functions ∇ u^T 𝐛_t, ∇ u^T 𝐀_t, u c_tt define linear forms in H^1(Ω). Arguing as in Theorem 3.1, we see that the function u_t is the unique solution in C([0,T];L^2(Ω)) ∩ L^2(0,T;H^1_0(Ω)) of this problem. Then, (<ref>) implies that - div(𝐀∇ u) + 𝐛^t ∇ u = -c u_t + f ∈ C([0,T];L^2(Ω)) zero Dirichlet boundary condition. Elliptic regularity theory ensures that u ∈ C([0,T];H^2(Ω)). Stability. The limiting solution inherits all the bounds established on the approximating sequence. Therefore its L^∞([0,T];H^2(Ω)) and H^1(0,T,H^1_0(Ω)) norms are bounded from above in terms of constants depending on the parameters of the problem and the norms of the data. □ 2mm Theorem 3.2 Under the hypotheses of Theorem 3.1, if f∈ L^q(Ω× [0,T]) and u_0 ∈ L^q(Ω), then u, its first and second order spatial derivatives, and u_t belong to L^q(Ω× [0,T]), 1<q<∞. Proof. We set v/c = u. Then ∇ u = ∇ v/c - v/c^2 ∇ c and cu_t = v_t - c_t/c v. 
Therefore, v is a solution of v_t - div(𝐀 c∇ v ) + 𝐀∇ c c^2∇ v + 𝐛^T c∇ v + [ div(𝐀∇ c c^2) + 𝐛^T ∇ c c^2 - c_t c] v = f. The result is a consequence of the regularity result stated in Theorem 9.1 in <cit.>. □ 2mm Theorem 3.3 Under the hypotheses of Theorem 3.2, if f∈ L^∞(Ω× [0,T]) and u_0 ∈ L^∞(Ω), then the solution u ∈ L^∞(Ω× [0,T]). As a result, u ∈ L^∞ ([0,T], L^q(Ω)), 1 ≤ q < ∞. Proof. Let M = max( f _L^∞_𝐱,t, u_0 _L^∞_𝐱)/(λ c_0). Then, v = e^-λ t u-M solves c v_t - div(𝐀∇ v) + 𝐛^T ∇ v + λ c v = e^-λ t f - λ c M ≤ 0 and v(0) ≤ 0. Multiplying the equation by v^+ and integrating we get [ 1 2d dt∫_Ω c(𝐱, t) |v^+(𝐱,t)|^2 d 𝐱 + ∫_Ω∑_p,q=1^n a_pq(𝐱,t) ∂ v^+ ∂ x_p(𝐱,t) ∂ v^+ ∂ x_q(𝐱,t) d 𝐱; + ∫_Ω∑_p=1^n b_p(𝐱,t) ∂ v^+ ∂ x_p(𝐱,t) v(𝐱,t) d 𝐱+∫_Ω -1mm (λ c -1 2 c_t)(𝐱, t) |v^+(𝐱,t)|^2 d 𝐱≤ 0. ] Choosing λ large enough, v^+=0 and u ≤ M e^λ t. A similar argument with M = - max( f _L^∞_𝐱,t, u_0 _L^∞_𝐱)/(λ c_0) shows that u ≤ max( f _L^∞_𝐱,t, u_0 _L^∞_𝐱)e^λ t/(λ c_0). □ 2mm Corollary 3.3 Let Ω^t ⊂Ω⊂ℝ^n, t>0, be a family of open and bounded C^2 domains, with Γ_- fixed and Γ_+^t defined deforming a reference curve Γ_+^0 by means of the transformation 𝐱^t = 𝐱 + t ν(𝐱), ν∈ C(Ω), t>0, Assuming that u_0 ∈ L^∞(Ω^0) and the transformed functions f̃(𝐱,t)= f (𝐱^t(𝐱),t) ∈ C^1([0,T], L^2(Ω^0)) ∩ L^∞(Ω× [0,T]), there exists a unique e solution of (<ref>) such that ẽ(𝐱,t) ∈ C([0,T],H^2(Ω^0)) ∩ H^1(0,T;H^1_0(Ω^0)) ∩ L^∞(Ω× [0,T]). Proof. We apply Theorems 3.1-3.2 and use the explicit characterizations of the variable matrix, vector and coefficient fields to prove existence in an interval [0,t_ν], t_ν small enough depending on ν. This solution can then be successively extended until we cover [0,T]. □ § WELL POSEDNESS RESULTS FOR THE QUASI-STATIONARY SUBMODELS In this section we establish the pertinent existence and regularity results for the elliptic submodels and the stationary transport problem in fixed domains. Constructing solutions for the stationary transport problems considered here is a non trivial issue. We are able to obtain them by a regularization procedure under sign hypotheses on the velocity fields motivated by asymptotic studies, which will have to be preserved by any implemented scheme. §.§ Elliptic problems for displacements, velocities and concentrations Consider the first the submodel for mechanical fields: [ μΔ𝐮_s + (μ +λ) ∇ div(𝐮_s) - ∇ p = Π∇ϕ_s, on Ω,; μΔ𝐯_s + (μ +λ) ∇ div(𝐯_s) = ∇ p', on Ω,; k_h Δ p - div(𝐯_s) =0, on Ω,; Δ p' = (2μ + λ) Δ e', on Ω,; p = p_ ext, p' = p_ ext' on Γ,; 𝐮 = 0, 𝐯 = 0, on Γ_-,; (σ̂(𝐮_s) - (p+Πϕ_s) 𝐈) 𝐧 = 𝐠, (σ̂(𝐯_s) - p' 𝐈) 𝐧 = 𝐠', on Γ_+. ] We denote by H^1_0,-(Ω) the Sobolev space of H^1(Ω) functions vanishing on Γ_-. 1mm Theorem 4.1. Let Ω⊂ℝ^n, n=2,3, be an open bounded domain with C^4 boundary ∂Ω. Let us assume that ϕ_s ∈ H^1(Ω) and e' ∈ H^2(Ω). Given positive constants μ, λ, k_h, Π, there exists a unique solution 𝐮_s ∈ [H^2(Ω)]^n × [H^1_0,-(Ω)]^n, 𝐯_s ∈ [H^3(Ω)]^n × [H^1_0,-(Ω)]^n, p ∈ H^4(Ω), p' ∈ H^2(Ω) of (<ref>) for any p_ ext, p_ ext' ∈ℝ and 𝐠, 𝐠' ∈ℝ^n. Moreover, if ϕ_s ∈ W^1,q(Ω) and e' ∈ W^1,q(Ω), n<q<∞, then p' ∈ W^1,q(Ω), 𝐯_s ∈ W^2,q(Ω), p ∈ W^3,q(Ω) and 𝐮_s ∈ W^2,q(Ω). Proof. The equation for p' uncouples from the rest and provides a solution p' ∈ H^2(Ω) by classical theory for Laplace equations <cit.>. Next, the equation for 𝐯 is a classical Navier elasticity system which admits a unique solution 𝐯_s ∈ [H^2(Ω)]^n × [H^1_0,-(Ω)]^n <cit.>. Since the source ∇ p' ∈ [H^1(Ω)]^n, elliptic regularity theory implies 𝐯_s ∈ [H^3(Ω)]^n. 
Now, div(𝐯_s) ∈ H^2(Ω) implies that the unique solution p of the corresponding Poisson problem has H^4(Ω) regularity. Finally, the equation for 𝐮_s is again a classical Navier elasticity system with L^2 right hand side which admits a unique solution 𝐮_s ∈ [H^2(Ω)]^n ∩ [H^1_0,-(Ω)]^n. When ϕ_s ∈ W^1,q(Ω) and e' ∈ W^1,q(Ω), we obtain the increased regularity <cit.>. Notice that since the boundary values are constant, we can construct extensions to H^k(Ω) and W^k,q for the necessary k, q <cit.>. □ 2mm Now, the equation for the concentrations is: [ -d Δ c + div (𝐯_f c) = - k_c g_c ϕ_s, 𝐱∈Ω,; c = c_0 𝐱∈Γ_-,; ∂ c ∂𝐧 = 0 𝐱∈Γ_+, ] given positive constants d, c_0, k_c, g_c and known functions 𝐯_f and ϕ_s. 1mm Theorem 4.2. Let Ω⊂ℝ^n, n=2,3, be an open bounded domain with C^2 boundary ∂Ω. Given positive constants k_c, g_c, d, c_0, a vector function 𝐯_l ∈ [H^1(Ω)]^n ∩ C(Ω), and a positive function ϕ_b ∈ L^2(Ω) there exists a unique nonnegative solution c ∈ H^1(Ω) of (<ref>) provided d is sufficiently large. Proof. Set c= c̃ + c_0. The resulting problem admits the variational formulation: Find c̃∈ H^1_0,-(Ω) such that d ∫_Ω∇c̃^T ∇ w d 𝐱 - ∫_Ω𝐯_f^T c̃∇ w d 𝐱 + ∫_Γ_+c̃ w 𝐯_l^T 𝐧 dS_𝐱 = - k_c g_c ∫_Ωϕ_s w d 𝐱 + c_0 ∫_Ω𝐯_f^T ∇ w d 𝐱, for all w ∈ H^1_0,-(Ω). The continuous bilinear form is coercive provided d is large enough compared to 𝐯_f_∞. Thus, we have a unique solution c̃∈ H^1_0,-(Ω) with H^2(Ω) regularity. The function c^- ∈ H^1_0,-(Ω) satisfies d ∫_Ω |∇ c^-|^2 d 𝐱 - ∫_Ω𝐯_f^T c^-∇ c^- d 𝐱 + ∫_∂Ω^+ |c^-|^2 𝐯_f^T 𝐧 dS_𝐱 = - k_c g_c ∫_Ωϕ_s c^- d 𝐱≤ 0. Coercivity implies c^-=0 and c ≥ 0 provided d is large enough compared to 𝐯_l _∞. For uniqueness, assume we have two positive solutions c_1 and c_2 in H^1(Ω) and set c = c_1 - c_2 ∈ H^1_0,-(Ω). Then u is a solution of [ - d Δ c + div (𝐯_l c) = 0, 𝐱∈Ω,; c = 0, 𝐱∈∂Ω^-,; [0.5ex] ∂ c ∂𝐧 = 0, 𝐱∈∂Ω^+. ] The variational equation with test function c and coercivity imply c=0, that is, c_1= c_2. □ §.§ Conservation law for volume fractions Consider the equation div(-𝐯_f ϕ_f) + k_s g_s ϕ_f = k_s g_s , 𝐱∈Ω, where k_s and g_s are positive constants and 𝐯_f a known function. 1mm Theorem 4.3. Let Ω⊂ℝ^n, n=2,3, be a thin open, bounded subset, with C^4 boundary ∂Ω. Let 𝐯_f ∈ [H^2(Ω) ∩ C(Ω)]^n such that div(𝐯_f)≤ 0 in Ω, div(𝐯_f) ∈ L^∞(Ω) and 𝐯_f^T 𝐧≤ 0 a.e. on ∂Ω. We assume that ∇𝐯_f ∈ [L^∞(Ω)]^n^2 with ∇𝐯_f_[L^∞]^n^2 small enough compared to k_s g_s. Then, given positive constants k_s and g_s, there exists a solution ϕ_f ∈ L^2(Ω) of (<ref>) in the sense of distributions. Moreover, * 0 ≤ϕ_f ≤ 1 on Ω and ϕ does not vanish in sets of positive measure. * ϕ_f ∈ H^1(Ω) is the unique solution of the variational formulation in H^1(Ω) and 1 2 k_s g_s ∇ϕ_L^2≤∇ div(𝐯_f)_[L^2]^n. * If we assume that Ω is a thin domain for which 𝐧∼𝐞_n and div(𝐯_f) ∈ W^1,q(Ω), n<q<∞, then ∇ϕ_f ∈ L^q(Ω) and 1 2 k_s g_s ∇ϕ_L^q≤∇ div(𝐯_f)_[L^q]^n. Proof. Existence. For each ε >0, we follow <cit.> and let ϕ_ε∈ H^1(Ω) be the solution of the variational formulation b(ϕ_ϵ, w) = ε∫_Ω∇ϕ _ε^T ∇ w d 𝐱 + ∫_Ω𝐯_f^T ϕ _ε∇ w d 𝐱 - ∫_∂Ωϕ_ε w 𝐯_f^T 𝐧 d S_𝐱 + ∫_Ω k_s g_s ϕ_ε w d 𝐱 = ∫_Ω k_s g_s w d𝐱 = L(w), ∀ w ∈ H^1(Ω) of - εΔϕ _ε - div(𝐯_f ϕ_ε) + k_s g_s ϕ_ε = k_s g_s in Ω, ∂ϕ_ε∂𝐧 = 0 on ∂Ω. The bilinear form b(φ, w) is continuous on H^1(Ω) <cit.>, while the linear form L is continuous on L^2(Ω). Since div(𝐯_f) ≤ 0 and 𝐯_f^T 𝐧≤ 0, the bilinear form b is also coercive in H^1(Ω). Indeed, ∫_Ω -1mm 𝐯_f^T ϕ_ε∇ϕ_ε d 𝐱 = 1 2∫_Ω -1mm 𝐯_f^T ∇ |ϕ_ε|^2 d 𝐱 = 1 2∫_∂Ω -1mm |ϕ_ε|^2 𝐯_f^T 𝐧 d 𝐱 - 1 2∫_Ω -1mm div(𝐯_f) |ϕ_ε|^2 d 𝐱. 
The positive term - ∫_Ω div(𝐯_f) |ϕ_ε|^2 d 𝐱 is finite because |ϕ_ε|^2 ∈ L^2(Ω) thanks to Sobolev embeedings <cit.>. Since the bilinear form ε∫_Ω∇ϕ^T∇ w d 𝐱 + ∫_Ω k_s g_s ϕ w d 𝐱 is coercive in H^1(Ω), we have a unique solution ϕ_ε∈ H^1(Ω) by Lax Milgram's theorem <cit.>. We set w=ϕ _ε and apply Young's inequality <cit.> to obtain the uniform bound ϕ_ε_L^2≤ meas(Ω)^1/2 from 0 ≤ε∫_Ω |∇ϕ _ε|^2 d 𝐱 - 1 2∫_∂Ω |ϕ_ε|^2 𝐯_f^T 𝐧 d S_𝐱 + ∫_Ω[ - 1 2 div(𝐯_f) + k_s g_s ] |ϕ_ε|^2 d 𝐱 = ∫_Ω k_s g_s ϕ_ε d𝐱≤ k_s g_s _L^2( ∫_Ω |ϕ_ε|^2 )^1/2. Each of the positive terms in the left hand side of the above inequality are uniformly bounded too. Thus, we can extract a subsequence ϕ_ε' such that ϕ_ε' tends weakly in L^2(Ω) to a limit ϕ, and ε∇ϕ_ε tends strongly to zero. Setting w ∈ C_c^∞(Ω) in the variational formulation, and taking limits <cit.>, ϕ is a solution of (<ref>) in the sense of distributions. The variational equation holds with ϵ =0, replacing the boundary integral by the duality _H^-1/2(∂Ω)<ϕ 𝐯_f^T 𝐧, w>_H^1/2(∂Ω) for w∈ H^1(Ω) <cit.>. L^∞ estimates. Setting ψ_ε = ϕ_ε - 1 and w = ψ_ε^+ we get ε∫_Ω |∇ψ _ε^+|^2 d 𝐱 - 1 2∫_∂Ω |ψ_ε^+|^2 𝐯_f^T 𝐧d S_𝐱 + ∫_Ω[ - 1 2 div(𝐯_f) + k_s g_s ] |ψ_ε^+|^2 d 𝐱 = ∫_Ω div(𝐯_f) ψ_ε^+ d𝐱≤ 0. Thus, ψ_ε^+=0 and ϕ_ε≤ 1. Similarly, we set ψ_ε = - ϕ_ε to find ε∫_Ω |∇ψ _ε^+|^2 d 𝐱 - 1 2∫_∂Ω (𝐯_f^T 𝐧) |ψ_ε^+|^2 d S_𝐱 + ∫_Ω[ - 1 2 div(𝐯_f) + k_s g_s ] |ψ_ε^+|^2 d 𝐱 = - ∫_Ω k_s g_s ψ_ε^+ d𝐱≤ 0. Thus, ψ_ε^+=0 and ϕ_ε≥ 0. Weak limits ϕ in L^2 inherit these properties. Moreover, (<ref>) implies that ϕ cannot vanish in sets of positive measure. H^1 Regularity. Elliptic regularity for system (<ref>) implies that ϕ_ε∈ H^2(Ω) <cit.>. We multiply (<ref>) by Δϕ_ε and integrate over Ω to get - ε∫_Ω |Δϕ _ε|^2 d 𝐱 - ∫_Ω𝐯_b^T ∇ϕ_εΔϕ _ε d 𝐱 + ∫_Ω[ - div(𝐯_b) + k_s g_s ] ϕ_εΔϕ_ε d 𝐱 = ∫_Ω k_s g_s Δϕ _ε d 𝐱. Integrating by parts, and using the boundary condition, we find - ε∫_Ω |Δϕ _ε|^2 d 𝐱 + ∫_Ω[ 1 2 div(𝐯_f) - k_s g_s ] |∇ϕ_ε|^2 d 𝐱 + 1 2∫_∂Ω |∇ϕ_ε|^2 𝐯_f^T 𝐧 d S_𝐱 = ∫_Ω∇[ - div(𝐯_f) + k_s g_s ] ^T ϕ_ε∇ϕ_ε d 𝐱 - ∫_Ω v_l,j,x_kϕ_ε, x_jϕ_ε, x_k d 𝐱. We know that 0≤ϕ _ε≤ 1. Therefore, ∫_Ω[ -1 2 div(𝐯_f) + k_s g_s ] |∇ϕ_ε|^2 d 𝐱≤∇ div(𝐯_f)_[L^2]^n∇ϕ_ε_L^2 + ∫_Ω |v_l,j,x_kϕ_ε, x_jϕ_ε, x_k| d 𝐱. If ∇𝐯_l_[L^∞]^n^2 is small enough compared to k_s g_s 1 2 k_s g_s ∇ϕ_ε_L^2≤∇ div(𝐯_f)_[L^2]^n. We extract a subsequence ϕ_ε' converging weakly in H^1(Ω) to a limit ϕ, strongly in L^2(Ω), and pointwise in Ω. The traces of ϕ on ∂Ω belong to L^2(∂Ω), and are weak limits of traces of ϕ_ε'. Passing to the limit in the variational formulation for (<ref>), ϕ∈ H^1(Ω) is a solution with ϵ =0 which inherits these bounds. Uniqueness. Given two solutions ϕ_1, ϕ_2 ∈ H^1(Ω), we set ψ = ϕ_1-ϕ_2. Subtracting the variational equations we get for the test function ψ∈ H^1(Ω) - 1 2∫_∂Ω (𝐯_f^T 𝐧) |ψ|^2 d S_𝐱 + ∫_Ω[ - 1 2 div(𝐯_f) + k_s g_s ] |ψ|^2 d 𝐱 = 0, that is, ϕ_1=ϕ_2 in view of the signs. □ W^1,q regularity. By elliptic regularity, ϕ _ε∈ W^3,q(Ω), since the source in (<ref>) belongs to W^1,q(Ω). Following <cit.>, we differentiate (<ref>) with respect to x_k, multiply by h(ϕ_ε) ϕ_x_k for h(ϕ_ε) = (|∇ϕ_ε|^2 + δ)^(q-2)/2, add k and integrate over Ω to get - ε∫_ΩΔ(∇ϕ_ε)^T h(ϕ_ε) ∇ϕ_ε d 𝐱 + ∫_Ω k_s g_s h(ϕ_ε) |∇ϕ_ε|^2 d 𝐱 - ∫_Ω v_l,iϕ_ε,x_i x_k h(ϕ_ε) ϕ_ε, x_k d 𝐱 - ∫_Ω v_l,i,x_kϕ_ε, x_i h(ϕ_ε) ϕ_ε, x_k d 𝐱 - ∫_Ω div(𝐯_f) h(ϕ_ε) |∇ϕ_ε|^2 d 𝐱 - ∫_Ω∇( div(𝐯_f))^T h(ϕ_ε) ϕ_ε∇ϕ_ε d 𝐱 = 0. Sum over repeated indices is intended. 
Notice that Lemma 3.1 from <cit.> holds in our framework for our thin domains, so that the first term is nonnegative. The fourth term becomes 1 q∫_Ω div(𝐯_f)(|∇ϕ_ε|^2 + δ)^q/2 d 𝐱 - 1 q∫_∂Ω (|∇ϕ_ε|^2 + δ)^q/2𝐯_l^T 𝐧 dS_𝐱. Putting all together we get ∫_Ω k_s g_s h(ϕ_ε) |∇ϕ_ε|^2 d 𝐱≤ - 1 q∫_Ω div(𝐯_f)(|∇ϕ_ε|^2 + δ)^q/2 d 𝐱 + ∫_Ω v_l,i,x_kϕ_ε, x_iϕ_ε, x_k h(ϕ_ε) d 𝐱 + ∫_Ω div(𝐯_f) h(ϕ_ε) |∇ϕ_ε|^2 d 𝐱 + ∫_Ω∇( div(𝐯_f))^T h(ϕ_ε) ϕ_ε∇ϕ_ε d 𝐱. We let δ→ 0 and use that ∇𝐯_f _[L^∞]^n^2 is small enough to find 1 2 k_s g_s ∫_Ω |∇ϕ_ε|^q d 𝐱≤∇( div(𝐯_f)) _L^q |∇ϕ_ε| _L^q^q-1, which yields the bound we seek letting ε→ 0. □ § WELL POSEDNESS RESULTS FOR THE FULL MODEL WITH A KNOWN BOUNDARY DYNAMICS Once we have analyzed the different submodels, we consider the whole system when the boundary of the domains Ω^t moves with time according to a given dynamics [ μΔ𝐮_s + (μ +λ) ∇ div(𝐮_s) - ∇ p = Π∇ϕ_s, in Ω^t,; μΔ𝐯_s + (μ +λ) ∇ div(𝐯_s) = ∇ p' , in Ω^t,; k_h Δ p = div(𝐯_s), in Ω^t,; Δ p' = (2μ + λ) Δ e', in Ω^t,; p = p_ ext, p = p_ ext' on Γ^t,; 𝐮_s = 0, 𝐯_s = 0, on Γ_-^t,; (σ̂ (𝐮_s) - (p+Πϕ_s) 𝐈) 𝐧 = 𝐠, on Γ_+^t,; (σ̂ (𝐯_s) - p' 𝐈) 𝐧 = 𝐠'(∇𝐮_s), on Γ_+^t, ] [ div (-𝐯_f ϕ_f) + k_s g_s ϕ_f = k_s g_s, 1cm in Ω^t,; 𝐯_f = - ξ_∞∇ p + 𝐯_s, ϕ_f+ϕ_s = 1, 1cm in Ω^t, ] [ de' dt = k_h (2 μ + λ) Δ e', 2.9cm ,; e' = e_ ext, 1mm on Γ^t,; e'(0) = e_0, 1mm on Ω^0, ] [ -d Δ c + div (𝐯_f c) = - k_c g_c ϕ_s, 1.8cm in Ω^t,; c = c_0 on Γ^t_-,; ∂ c ∂𝐧 = 0 on Γ^t_+. ] 1mm Theorem 5.1. Let Ω^t ⊂Ω⊂ℝ^n, n=2,3, t ∈ [0,T], be a family of open bounded C^4 domains. The lower boundary Γ_- is fixed, while the upper boundary Γ_+^t is obtained deforming Γ^0_+ along a vector field ν(𝐱) ∈ C(Ω) ∩ C^4(Ω). Assume that * e_ ext(t), 𝐠(t), 𝐠'(t), p_ ext(t), p_ ext'(t), c_0(t) ∈ C([0,T]), e_0 ∈ L^2(Ω^0) ∩ L^q(Ω^0), for q>n, * e_ ext, 𝐠', Π and p_ ext are small enough. Given positive constants μ, λ, Π, k_h, k_s, k_c, g_s, g_c, ξ_∞, and d large enough, system (<ref>)-(<ref>) admits a unique solution e' ∈ H^2(Ω^t)∩ W^2,q(Ω^t), 𝐮_s ∈ [H^2(Ω^t)]^n ∩ W^2,q(Ω^t), 𝐯_s, 𝐯_f ∈ [H^3(Ω^t)]^n, p ∈ H^4(Ω^t), p' ∈ H^2(Ω^t), ϕ_f, ϕ_s ∈ H^1(Ω^t)∩ W^1,q(Ω^t), c ∈ H^2(Ω^t), for q >n, satisfying c ≥ 0 and 0 ≤ϕ_f, ϕ_s ≤ 1, t ∈ [0,T]. Moreover, the norms of the solutions are bounded in terms of the parameters and data of the problem. Proof. Assume first that 𝐠'(∇𝐮_s) does not depend on 𝐮_s. Then, the result is a consequence of Corollary 3.3, Theorems 4.1-4.3 and Sobolev embeddings <cit.> (neither L^q regularity nor conditions on the domain geometry nor smallness assumptions are needed). We calculate the unknowns according to the sequence e', p', 𝐯_s, p, 𝐯_f, ϕ_f, ϕ_s, 𝐮_s, and c. When 𝐠'(∇𝐮_s) does depend on 𝐮_s, we construct e' thanks to Corollary 3.3. For each fixed t>0, e' ∈ H^2(Ω^t)∩ W^2,q(Ω^t) and we can construct p' ∈ H^2(Ω^t)∩ W^2,q(Ω^t). Next, we solve the quasi-stationary system by means of an iterative scheme. At each step ℓ, we freeze Π∇ϕ_s^(ℓ -1) in the equation for 𝐮_s^(ℓ) and 𝐠'(∇𝐮_s^ℓ-1) in the boundary condition for 𝐯_s^(ℓ). Initially, we set ϕ_s^(0)= ϕ_∞∈ (0,1) constant and ϕ_f^(0)=1-ϕ_∞. We set 𝐮^(0)=0. Theorem 4.1, Theorem 4.2, Theorem 4.3 guarantee the existence of 𝐯_s^(1), p^(1), 𝐮_s^(1), 𝐯_f^(1), ϕ_f^(1), ϕ_s^(1), and c^(1), with the stated regularity. In a similar way, given all the fields at step ℓ-1, we can construct the solutions for step ℓ. Notice that 𝐯_f^(ℓ-1)∈ W^2,q implies 𝐯_f^(ℓ-1)∈ W^1,∞(Ω) and 𝐯_f^(ℓ-1)∈ C(Ω). To apply Theorem 4.3 we also need to satisfy smallness and sign assumptions that we will consider later. 
Assuming they hold, we get for the elliptic system involving 𝐯_s^(ℓ), 𝐮_s^(ℓ), p^(ℓ) and for the transport equation for ϕ_s^(ℓ) p^(ℓ)_H^2(Ω^t) + 𝐯_s^(ℓ)_H^2(Ω^t) + 𝐮_s^(ℓ)_H^2(Ω^t)≤ C_1^t [Π∇ϕ_s^(ℓ-1)_L^2(Ω^t)   + ∇ p' _L^2(Ω^t) + p_ ext_H^3/2(Γ^t_+) + 𝐠'(∇𝐮_s^(ℓ-1)) _H^1/2(Γ^t_+) + 𝐠_H^1/2(Γ^t_+) ], p^(ℓ)_W^2,q(Ω^t) + 𝐯_s^(ℓ)_W^2,q(Ω^t) + 𝐮_s^(ℓ)_W^2,q(Ω^t)≤ C_2^t [ Π∇ϕ_s^(ℓ-1)_L^q(Ω^t) + ∇ p' _L^q(Ω^t) +  p_ ext_W^1-1 q,q(Γ^t_+) + 𝐠'(∇𝐮_s^(ℓ-1)) _W^1-1 q,q(Γ^t_+) + 𝐠_W^1-1 q,q(Γ^t_+) ], p^(ℓ)_W^3,q(Ω^t)≤ C^t_3 [ 𝐯_s^(ℓ)_W^1,q(Ω^t) + p_ ext_W^3-1/q,q(Γ^t) ] 𝐯_f^(ℓ)_W^2,q(Ω^t)≤ξ_∞ p^(ℓ)_W^3,q(Ω^t) + 𝐯_s^(ℓ)_W^2,q(Ω^t) 1 2 k_s g_s ∇ϕ_f^(ℓ)_L^q≤∇ div(𝐯_f^(ℓ))_[L^q]^n. Notice that ∇ϕ_f^(ℓ) = - ∇ϕ_s^(ℓ). Combining the above inequalities, and provided Π and 𝐠' are small enough, we obtain an upper bound for 𝐯_f^(ℓ)_W^2,q(Ω^t), 𝐯_s^(ℓ)_W^2,q(Ω^t), p^(ℓ)_W^2,q(Ω^t), ϕ_s _W^1,q(Ω^t), in terms of constants depending on the problem data and parameters, and also on time, but remain bounded in time for t∈ [0,T]. We guarantee by induction the smallness of 𝐯_f^(ℓ)|_[W^1,∞] and div(𝐯_f^(ℓ)) ≤ 0, 𝐯_f^(ℓ)·𝐧≤ 0. Initially, ϕ_s^(0) is constant and ∇ϕ_s^(0)=0. We construct 𝐯_s^(1) and p^(1) in such a way that 𝐯_s^(1)_[W^2,q]^n, p^(1)_[W^3,q]^n and 𝐯_f^(1)_[W^2,q]^n are bounded in terms of the problem parameters and data. By Sobolev injections for n < q < ∞, 𝐯_s^(1)_[W^1,∞]^n satisfies a similar bound, and can be made as small as required by making 𝐠' and p_ ext small. Then, ∇ϕ_f^(1)_L^q is bounded by 𝐯_f^(1)_[W^2,q]^n and is equally small. Furthermore, div(𝐯_f^(1)) ϕ_f^(1) + 𝐯_f^(1)∇ϕ_f^(1) = - k_s g_s ϕ_f^(1)≤ 0. Since 𝐯_f^(1) and ∇ϕ_f^(1) are small compared to - k_s g_s ϕ_f^(1)≤ 0 which is almost constant. Thus, div(𝐯_l^(1)) ≤ 0. Finally, ∫_A div(𝐯_l^(1)) d 𝐱 = ∫_∂ A𝐯_l^(1)·𝐧 d S_𝐱≤ 0 for all A ⊂Ω so that 𝐯_l^(1)·𝐧≤ 0 on ∂Ω. By induction, if 𝐯_f^(ℓ-1)_[W^1,∞]^n is small and 𝐯_f^(ℓ-1) satisfies the sign conditions, we can repeat the argument to show that this holds for 𝐯_f^(ℓ) too and that it also satisfies the sign conditions. We need to estimate ∇ div(𝐯_f^(ℓ-1)) _[L^q]^n, which is possible since Π is small. These estimates allow us to extract subsequences converging weakly to limits 𝐯_s, 𝐮_s, p, ϕ_s satisfying variational formulations of the equations. Problem (<ref>) is already studied in Theorem 4.2. □ A similar result (except for the uniqueness) can be obtained by means of an iterative scheme if we allow for almost constant smooth coefficients k_h(ϕ_f) ξ_∞(ϕ_f), g_s(c), g_s(c). 1mm § DISCUSSION AND CONCLUSIONS The study of biological aggregates and tissues often leads to complex mixture models, combining transport equations for volume fractions of different phases, with continuum models for mechanical behavior of the mixture and chemical species <cit.>. These models are set in domains that change with time, because cells grow, die and move and because of fluid transport within the biological network. Here, we have considered a fluid-solid mixture description of the spread of cellular systems called biofilms, which could be adapted to general tissues. These models involve different time scales, so that part of the equations are considered quasi-stationary, that is, they are stationary problems solved at different times in different domains and with some time dependent coefficients. Such equations are coupled to time dependent problems set in moving domains and to variables not directly characterized by means of equations. 
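A runnable skeleton of this quasi-stationary coupling, with the sub-solvers passed in as placeholders (the names and the trivial stand-ins in the demo are illustrative, not an implementation of the full model), might be organized as follows:

```python
# Skeleton of the quasi-stationary structure recalled above: at each time the
# stationary subproblems are solved on the current domain, then the boundary
# is advanced.  The solver entries are placeholders for the subproblems of
# Sections 3 and 4.
def evolve(domain0, dt, n_steps, solvers):
    state = {"domain": domain0}
    for k in range(n_steps):
        t = k * dt
        state["e"] = solvers["diffusion"](state, t, dt)       # e' on the moving domain
        state["mech"] = solvers["mechanics"](state, t)        # u_s, v_s, p, p'
        state["fractions"] = solvers["transport"](state, t)   # phi_f, phi_s
        state["c"] = solvers["concentration"](state, t)       # nutrient concentration
        state["domain"] = solvers["move_boundary"](state, t, dt)
    return state

# trivial stand-ins so the skeleton runs end to end
noop = lambda state, t, *extra: None
demo = evolve(domain0="Omega_0", dt=0.1, n_steps=5,
              solvers={"diffusion": noop, "mechanics": noop, "transport": noop,
                       "concentration": noop,
                       "move_boundary": lambda s, t, dt: s["domain"]})
```

Inside the `mechanics` and `transport` calls one would nest the fixed-point iteration used in the proof of Theorem 5.1, freezing Π∇ϕ_s and 𝐠'(∇𝐮_s) from the previous iterate until the estimates above guarantee convergence.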
In this paper, we have developed mathematical frameworks to tackle some of the difficulties involved in the construction of solutions for these multiphysics systems and the study of their behavior. First, we have shown how to improve these models by characterizing time derivatives of solutions of stationary boundary value problems with varying coefficients set in moving domains in terms of complementary boundary value problems derived for them. In this way we obtain a quasi-stationary elliptic system for the mechanical variables of the solid phase, not only displacements and pressure, but also velocity, that can be solved at each time coupled to the other submodels. This option is more stable than evaluating velocities as quotients of differences of displacements calculated in meshes of different spatial domains. On one side, the error committed is easier to control. On the other side, the computational is cost smaller, since we use a single mesh at each time. Once we know the velocity of the solid phase and the pressure, the velocity of the fluid phase follows by a Darcy type law. Next, we have devised an strategy to construct solutions of an auxiliary class of time dependent linear diffusion problems set in moving domains with parametrizations satisfying a number of conditions. We are able to refer the model to a fixed domain and then solve by Galerkin type schemes. The complete model involves a quasi-stationary transport problem. We show that we can construct smooth enough solutions by a regularization procedure, under sign hypothesis on the fluid velocity field suggested by asymptotic solutions constructed in simple geometries. Once we know how to construct stable solutions of each submodel satisfying adequate regularity properties, an iterative scheme allows us to solve the full problem when the time evolution of the boundary of the spatial region occupied by the biological film is known. In applications one must couple these models with additional lubrication type equations for the motion of the film boundary, see equation (<ref>). Perturbation analyses <cit.> provide approximate solutions with selfsimilar dynamics for h. Establishing existence and regularity results for such complex models that can guide construction of reliable numerical solutions is a completely open problem. The techniques we have developed are general and can be applied in models with a similar structure arising in other biological and chemical engineering applications. § APPENDIX: THE MODEL EQUATIONS We study biofilms as solid-fluid mixtures, composed of a solid biomass phase and a liquid phase formed by water carrying dissolved chemicals (nutrients, autoinducers, waste). Under the equipresence hypothesis of mixtures, each location 𝐱 in a biofilm can contain both phase simultaneously, assuming that no voids or air bubbles form inside. Let us denote by ϕ_s(𝐱,t) the volume fraction of solid and by ϕ_f(𝐱,t) the volume fraction of fluid, which satisfy ϕ_s + ϕ_f = 1. Taking the densities the mixture and both constituents to be constant and equal to that of water ρ_f= ρ_s= ρ= ρ_w, the mass balance laws for ϕ_s and ϕ_f are <cit.> ∂ϕ_s ∂ t + div (ϕ_s 𝐯_s) = r_s(ϕ_s,c), r_s(ϕ_s,c) = k_s c c + K_cϕ_s, ∂ϕ_f ∂ t + div (ϕ_f 𝐯_f) = - r_s(ϕ_s,c), where 𝐯_s and 𝐯_f denote the velocities of the solid and fluid components, respectively, c is the substrate concentration and r_s(ϕ_s,c) = k_s c c + K_cϕ_s stands for the production of biomass due to nutrient consumption. 
The parameters K_c (starvation threshold) and k_s (intake rate) are positive constants. The substrate concentration c <cit.> is governed by: ∂ c ∂ t + div (𝐯_f c) - div (d ∇ c) = -r_n(ϕ_s,c), r_n(ϕ_s, c) =ϕ_s k_c c c + K_c, where r_n(ϕ_s,c) represents consumption by the biofilm. The parameters d (diffusivity), k_c (uptake rate) and K_c (half-saturation) are positive constants. We impose zero-flux boundary conditions on the air–biofilm interface and constant Dirichlet boundary condition on the agar–biofilm interface. In equation (<ref>), typical parameter values are such that the time derivatives can be neglected. The solutions depend on time though the motion of the biofilm boundary. Adding up equations (<ref>) and (<ref>), we obtain a conservation law for the growing mixture: 0= div (ϕ_s 𝐯_s+ ϕ_f 𝐯_f) = div (𝐯) = div (𝐯_s + 𝐪), where 𝐯= ϕ_s 𝐯_s+ ϕ_f 𝐯_f is the averaged velocity and 𝐪 = ϕ_f (𝐯_f - 𝐯_s) is the filtration flux. The theory of mixtures hypothesizes that the motion of each phase obeys the usual momentum balance equations <cit.>. In the absence of external body forces, the momentum balance for the solid and the fluid reads ρϕ_s a_s + divσ_s + ρϕ_s (𝐟_s + ∇π_s) = 0, ρϕ_f a_f + divσ_f + ρϕ_f (𝐟_f + ∇π) = 0. In biofilms, the velocities 𝐯_s and 𝐯_f are small enough for inertial forces to be neglected, that is, ρ_s 𝐚_s ≈ρ_f 𝐚_f ≈ρ𝐚≈ 0, where 𝐚_s, 𝐚_f, 𝐚 denote the solid, fluid, and average accelerations. Let us detail now expressions for the stresses and forces appearing in these equations, following <cit.>. When the biofilm contains a large number of small pores, the stresses in the fluid are σ_f = - ϕ_f p 𝐈, p being the pore hydrostatic pressure. In case large regions filled with fluid were present, the standard stress law for viscous fluids should be considered. Under small deformations, and assuming an isotropic solid, the stresses in the solid biomass are σ_s = σ̂_s - ϕ_s p 𝐈, σ̂_s = λ Tr (ε(𝐮_s)) 𝐈 + 2 μ ε(𝐮_s), ε_ij(𝐮)= 1 2( ∂ u_i ∂ x_j + ∂ u_j ∂ x_i), where 𝐮_s is the displacement vector of the solid, ε(𝐮) the deformation tensor, and λ, μ, the Lamé constants. The stresses in the solid are due to interaction with the fluid and strain within the solid. The interaction forces and concentration forces satisfy the relations ϕ_s 𝐟_s +ϕ_f 𝐟_s = 0 and ϕ_s ∇π_s + ϕ_f ∇π = 0 <cit.>. The osmotic pressure is a function of the biomass fraction ϕ_f = Π (ϕ_s) <cit.>. For isotropic solids with isotropic permeability the filtration force 𝐟_f = - 1 k_h𝐪, where k_h (hydraulic permeability) is a positive function of ϕ_s <cit.>. Typically, k_h(ϕ_f)=ϕ_f^2 ζ, where ζ is a friction parameter often set equal to ζ= μ_f ξ(ϕ_s)^2 >0 and ξ is the “mesh size” of the underlying biomass network <cit.>. Using the expressions for the stress tensors (<ref>) and (<ref>), equations (<ref>) become div σ̂_s + ϕ_s (-∇ p + ∇π_s ) + ϕ_s 𝐟_s = 0, ϕ_f (-∇ p+∇π) + ϕ_f 𝐟_f = 0. Combining (<ref>), (<ref>), and (<ref>) we obtain 𝐪 = - k_h ∇ (p - π) = ϕ_f (𝐯_f -𝐯_s). This is Darcy's law in the presence of concentration gradients. Adding up equations (<ref>), we find an equation relating solid displacements and pressure div σ̂_s(𝐮_s) - ∇ p = 0. At the biofilm boundary, the jumps in the total stress vector and the chemical potential vanish: (σ̂_s - p 𝐈) 𝐧 = 𝐭_ext, p - π = p_ext - π_f,ext, when applicable. The solid velocity is then 𝐯_s =∂𝐮_s ∂ t. These equations are complemented by (<ref>) and (<ref>), which now becomes div(𝐯_s) = - div(𝐪) = div(k_h ∇ (p - π)). 5mm Acknowledgements. 
This research has been partially supported by the FEDER /Ministerio de Ciencia, Innovación y Universidades - Agencia Estatal de Investigación grant PID2020-112796RB-C21. 5mm 9 adams R.A. Adams, Sobolev Spaces, Academic Press, New York, 1975 adn2 S. Agmon, A. Douglis, L. Nirenberg, Estimates Near the Boundary for Solutions of Elliptic Partial Differential Equations Satisfying General Boundary Conditions II, Communications on Pure and Applied Mathematics, XVII, 35-92, 1964 bamberger A. Bamberger, R. Glowinski, Q.H. Tran, A domain decomposition method for the acoustic wave equation with discontinuous coefficients and grid change, SIAM Journal on Numerical Analysis 34(2), 603-639, 1997 beirao H. Beirao da Veiga, On a stationary transport equation, Ann. Univ. Ferrara - Sz. VII - Sc. Mat., Vol XXXII, 1986 brezis H. Brézis, Analyse fonctionnelle, Théorie et applications, Masson, 1987 ibm A. Carpio, R. González-Albaladejo, Immersed boundary approach to biofilm spread on surfaces, Commun. Comput. Phys. 31, 257-292, 2022 entropy A. Carpio, E. Cebrián, Incorporating cellular stochasticity in solid-fluid mixture biofilm models, Entropy 22(2), 188, 2020 poroelastic A. Carpio, E. Cebrián, P. Vidal, Biofilms as poroelastic materials, International Journal of Non-Linear Mechanics 109, 1-8, 2019 amm16 A. Carpio, G. Duro, Well posedness of an angiogenesis related integrodifferential diffusion model, Applied Mathematical Modelling 40 (9-10), 5560-5575, 2016 econ C.C. de Carvalho, Biofilms: recent developments on an old battle, Recent. Pat. Biotechnol. 1, 49-57, 2007 coddington E.A. Coddington, N. Levinson, Theory of ordinary differential equations, New York: McGraw-Hill, 1955 degennes P.G. De Gennes, Wetting: statics and dynamics, Reviews of Modern Physics, 57(3), 828-863, 1985. biofilm H.C. Flemming, J. Wingender, The biofilm matrix, Nat. Rev. Microbiol. 8, 623-633, 2010 gurtin M.E. Gurtin, An introduction to continuum mechanics, Mathematics in Science and Engineering 158, Academic Press 1981. kapellos G.E. Kapellos, T.S. Alexiou, A.C. Payatakes, Theoretical modeling of fluid flow in cellular biological media: An overview, Math. Biosci. 225, 83-93, 2010 kozlov V.A. Kozlov, J.A. Maz'ya, Elliptic boundary value problems in domains with point singularities, Mathematical surveys and monographs 52, AMS, 1997 ladyzenskaya O.A. Ladyzhenskaya, N.N. Ural'tseva, Linear and quasilinear elliptic equations, Academic Press 1968. lanir Y. Lanir, Biorheology and fluid flux in swelling tissues. I. Bicomponent theory for small deformations, including concentration effects, Biorheology 24, 173-187, 1987 lionsmagenes J.L. Lions, E. Magenes, Problémes aux limites non homogénes, Dunod, 1968 lions J.L. Lions, Quelques Méthodes Pour les Problèmes aux Limites Nonlinéaires, Gauthier-Villards, 1969 raviart P.A. Raviart, J.M. Thomas, Introduction a l'analyse numérique des équations aux dérivées partielles, Masson 1983 slimy B. Schachter, Slimy business-the biotechnology of biofilms, Nat. Biotechnol. 21, 361-365, 2003 seminara A. Seminara, T.E. Angelini, J.N. Wilking, H. Vlamakis, S. Ebrahim, R. Kolter, D.A. Weitz, M.P. Brenner, Osmotic spreading of Bacillus subtilis biofilms driven by an extracellular matrix. Proc. Nat. Acad. Sci. USA 109, 1116–1121, 2012 feijoooberai G.R. Feijoo. A.A. Oberai, P.M. 
Pinsky, An application of shape optimization in the solution of inverse acoustic scattering problems Inverse Problems 20, 199-228, 2004 dirichlet2D P Li, Y Wang, Z Wang, Y Zhao, Inverse obstacle scattering for elastic waves, Inverse Problems 32, 115018, 2016 fstissue M.M. Schuff, J.P. Gore, E.A. Nauman, A mixture theory model of fluid and solute transport in the microvasculature of normal and malignant tissues. I. Theory, J. Math. Biol. 66, 1179-1207, 2013 fsbrain M. Terzano, A. Spagnoli, D. Dini, A.E. Forte, Fluid-solid interaction in the rate-dependent failure of brain tissue and biomimicking gels, Journal of the Mechanical Behavior of Biomedical Materials 119, 104530, 2021 hai K. Vickery, H. Hu, A.S. Jacombs, D.A. Bradshaw, A.K. Deva, A review of bacterial biofilms and their role in device-associated infection, Healthcare Infection 18, 61-66, 2013 zhu Y. Zhu, G. McHale, J. Dawson, S. Armstrong, G. Wells, R. Han, H. Liu, W. Vollmer, P. Stoodley, N. Jakubovics, J. Chen, Slippery liquid-like solid surfaces with promising antibiofilm performance under both static and flow conditions, ACS Appl. Mater. Interfaces 14, 5, 6307-6319, 2022
http://arxiv.org/abs/2307.04298v1
20230710013021
Edge Storage Management Recipe with Zero-Shot Data Compression for Road Anomaly Detection
[ "YeongHyeon Park", "Uju Gim", "Myung Jin Kim" ]
cs.SD
[ "cs.SD", "cs.LG", "eess.AS" ]
Edge Storage Management Recipe with Zero-Shot Data Compression for Road Anomaly Detection YeongHyeon Park SK Planet Co., Ltd. Seongnam, Rep. of Korea [email protected] Uju Gim SK Planet Co., Ltd. Seongnam, Rep. of Korea [email protected] Myung Jin Kim SK Planet Co., Ltd. Seongnam, Rep. of Korea [email protected] August 12, 2023 =============================================================================================================================================================================================================================================== Recent studies show edge computing-based road anomaly detection systems which may also conduct data collection simultaneously. However, the edge computers will have small data storage but we need to store the collected audio samples for a long time in order to update existing models or develop a novel method. Therefore, we should consider an approach for efficient storage management methods while preserving high-fidelity audio. A hardware-perspective approach, such as using a low-resolution microphone, is an intuitive way to reduce file size but is not recommended because it fundamentally cuts off high-frequency components. On the other hand, a computational file compression approach that encodes collected high-resolution audio into a compact code should be recommended because it also provides a corresponding decoding method. Motivated by this, we propose a way of simple yet effective pre-trained autoencoder-based data compression method. The pre-trained autoencoder is trained for the purpose of audio super-resolution so it can be utilized to encode or decode any arbitrary sampling rate. Moreover, it will reduce the communication cost for data transmission from the edge to the central server. Via the comparative experiments, we confirm that the zero-shot audio compression and decompression highly preserve anomaly detection performance while enhancing storage and transmission efficiency. anomaly detection, data compression, edge computing, storage management, transmission efficiency § INTRODUCTION Knowing road conditions is an effective way to prevent traffic accidents <cit.>. Most of the road hazards are highly related to icy or wet roads which reduce the friction between the road and tires. When considering a vision sensor-based road anomaly detection system <cit.>, inclement weather will make occlusion on the camera which makes it difficult for understanding road conditions <cit.>. Moreover, intensity-changing situations such as at night also make it difficult to determine road conditions. As an approach to solving these problems, an audio-based anomaly detection approach has been developed. The audio-based system receives information from the medium wave in the air, so it shows a better response-ability than the occlusion situation of the vision sensor. In addition, sound can be properly transmitted even at night time, an audio-based system is recommended for this situation. Even in the case of successful anomaly detection as above, continuous updating of the anomaly detection model is required considering that target environments are continuously aging <cit.>. For this, we need high-quality large data to update the neural network-based anomaly detection model. We have installed edge computers on the road for anomaly detection, which has limited resources such as storage. Considering the above, we have a limitation to keep audio data for the long term. 
The above limitation can be partially mitigated by transmitting the audio to large central storage in time, but in this case, enormous data transmission costs will be incurred <cit.>. Also, we can lower the quality of the audio collected from high fidelity (Hi-Fi) to low fidelity (Lo-Fi) to reduce the file size for each audio, but this is not recommended as it leads to fundamental information loss. Motivated by this, we propose a storage management method that can keep as much data as possible in the edge computer for a long time in limited storage space while minimizing the cost of the data transmission into a central server. Our method is based on zero-shot encoding and decoding using a pre-trained audio super-resolution (ASR) model <cit.>. Referring ASR models can convert input audio to high-resolution, so they can handle arbitrary resolution inputs that are lower than the target resolution they are trained on. The overall scheme of our proposal is shown in Fig <ref>. The edge computer continuously collects data and determines whether the situation is abnormal or not through a microphone facing the road. Basically, collected audio is stored in the original resolution, but it is saved after being converted into a latent vector by an encoder of pre-trained ASR. The latent vectors, stored in edge storage, are sent to the central server at regular intervals or the storage space fills up to a certain level. Then, at the central computer, they are restored to audio form by a decoder paired with the above encoder <cit.>. At this time, even if the resolution of the collected sound sources is different, it is characterized by being restored to a similar Hi-Fi quality by the decoder of the central server. The restored audio data is used for the purpose of developing a novel anomaly detection model or updating existing models. To verify the proposed method, we collected data from three roads with different characteristics. The audio is basically collected at 44,100 Hz. For the purpose of saving storage space experiments in which downsampling is applied to assume a situation in which a microphone such as 11,025 Hz, and our zero-shot audio encoding method are also covered. Overall, our contributions are summarized below: * When downsampling is applied, high-frequency information loss occurs, confirming that hinders precise anomaly detection. Our approach, zero-shot encoding and decoding minimize information loss and preserves anomaly detection performance at an appropriate level while maximizing storage efficiency. * We show that there is no need to train new models to encode and decode for our domain, road noise. Our example deals with road anomaly detection as a target, but our method can also be extended to another edge computer-based data collection approaches in other domains. § RELATED WORKS §.§ Road condition identification To identify abnormal situations on the road such as road bumps, cracks, or potholes, methods based on vision sensors have been proposed <cit.>. However, since the abnormal situation on the road is highly related to bad weather, and bad weather can obscure the view of the camera <cit.>, a vision sensor-based approach will show constrained detection performance. Some approaches using motion or gyroscope sensors rather than a vision sensor can partially ease the above problem but it shows unstable performance that highly depends on pre-defined settings <cit.>. 
An audio-based anomaly detection method has been proposed as a way to overcome the problem of invisible situations to make decisions in bad weather or night situations <cit.>. To construct a reliable road anomaly detection system in outdoor environments, we inherit the above approach from prior research to take advantage of audio-based road anomaly detection. §.§ Data compression The most intuitive way to maximize data storage efficiency is to collect data at a lower resolution. However, when we need to create a high-quality anomaly detection model, we also need high-resolution data <cit.>. The computational approach which collects high-resolution data and encodes it into lower dimensions can be considered other than the above <cit.>. When the encoding method is provided with a paired decoding method, we can easily compress the data into small sizes and decompress them into the original resolution. In this case, some information loss may occur in the process of encoding and decoding, but it is recommended as an alternative to hardware-based capacity-saving methods that block information fundamentally. An artificial neural network-based encoder-decoder (ED) shows better reconstruction performance than traditional methods <cit.>. In particular, a model trained for super-resolution (SR) purposes is useful as a method to help roughly estimate the high-frequency region of data collected at lower resolution <cit.>. We propose a storage space management method based on zero-shot encoding decoding using the pre-trained ED for SR purposes, considering that the resolution of audio sensors installed in each region may be different. § APPROACH §.§ Overview The overall of our proposal is shown in Fig. <ref>. which is an encoding and decoding system for storing as much audio data as possible in an edge computer. We separate pre-trained ED into each component encoder and decoder. Then, we locate each of the above on the edge and central computers respectively. Our approach only uses a single encoder and decoder pair rather than having each model for each different road environment as shown in Fig. <ref>. This is dubbed as a zero-shot inference that can eliminate the hassle of preparing models to reflect the surrounding environment of countless sensors installed at outdoor points. §.§ Audio super-resolution For data compression, we utilize a pre-trained ASR model, EnCodec <cit.>, which is constructed with an encoder and decoder. The sensors installed at each site to perform the road anomaly detection that we will cover may include expensive high-resolution microphones or low-resolution microphones, depending on the management budget of the local government. The advantage of using the ASR model is that the input data can be converted into high-resolution data regardless of the input resolution. This allows audio data collected from microphones of arbitrary resolutions as aforementioned can be integrated into Hi-Fi audio. Any ASR model can be employed for the encoding and decoding process, but a structure in which the encoder and decoder can be used separately is recommended. A method of performing information augmentation in a feature map or latent vector stage may rather increase the capacity of encoded data <cit.>, so it should be avoided. §.§ Zero-shot encoding and decoding It is difficult to obtain a pre-trained ASR model on the road noise domain because the audio corresponding to the friction noise between the tire and the road surface, which we deal with, is not commonly used data. 
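A sketch of this encoder/decoder split, assuming the publicly released EnCodec package and its documented interface (exact calls may differ between versions; the file name and bandwidth below are placeholders), could look as follows: the first half would run on the edge computer and only the compact codes would be stored and transmitted, while the second half would run on the central server.

```python
# Edge side: load the pre-trained model and encode audio into compact codes.
import torch, torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()      # pre-trained, general-purpose codec
model.set_target_bandwidth(6.0)                 # kbps; controls the code size

wav, sr = torchaudio.load("road_noise.wav")     # any input sampling rate
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)
with torch.no_grad():
    frames = model.encode(wav)                  # list of (codes, scale) tuples
torch.save(frames, "clip_0001.codes")           # small file kept in edge storage

# Central side: restore a Hi-Fi waveform from the transmitted codes.
frames = torch.load("clip_0001.codes")
with torch.no_grad():
    restored = model.decode(frames)             # waveform at model.sample_rate
```

Because the same pre-trained pair is used for every site, no site-specific training is required, which is what makes the zero-shot usage practical.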
In addition, the sensors installed on the roads we deal with are numerous and their characteristics are highly diverse, so building a dedicated model for each site while guaranteeing generalization ability would take a huge amount of time and cost. To overcome these difficulties easily, we adopt zero-shot inference using a highly generalized ASR model pre-trained on a wide range of audio data, including general audio, speech, and music. Following the above, we propose to separate the encoder of the pre-trained ASR model, place it on the edge computer and encode all the collected data. The encoded data is then transmitted to the central computer and decoded. § EXPERIMENTS §.§ Dataset To validate the zero-shot inference-based method, we should deal with varied data from different road environments. Among the collectible road points, we selectively use three points with significantly different environmental characteristics, as shown in Fig. <ref>. We have collected audio samples for four weather conditions at the three locations, as summarized in Table <ref>. §.§ Zero-shot compression Given our purpose of maximizing data compression, it is important to minimize the restoration error as well as the size of the compressed data. Note that we abbreviate 44,100 Hz, 22,050 Hz, and 11,025 Hz as f_44, f_22, and f_11, respectively. The f_22 and f_11 cases in Fig. <ref>, in which the sampling rate is lowered to reduce the file size, show a fundamental loss of high-frequency components compared to f_44. Therefore, lowering the sampling rate with a hardware-based approach should be avoided, and a computational approach that collects and then compresses Hi-Fi data should be used instead. When a pre-trained ASR model is applied for audio compression and decompression, it not only preserves high-frequency components but also shows less information loss than the hardware-based approach, as shown for f̂_44 in Fig. <ref> and Table <ref>. Through this experiment, we confirm that the ASR-based data compression method can increase the total number of samples stored on an edge computer by 34.6× while minimizing information loss. This means that the cost of data transmission is also reduced by 34.6×. §.§ Anomaly detection We simulate, on the central computer, the collection of data compressed by the zero-shot encoding method, in order to check whether an anomaly detection model trained on the decompressed dataset can still perform at an appropriate level. Zero-shot encoding clearly minimizes information loss while maximizing the compression rate compared to the alternatives, but since the decompressed audio differs slightly from the original, the anomaly detection performance may also decrease; the method with the least performance degradation can therefore be considered optimal. For the experiments, we downsample each audio clip by factors of 2 and 4 to simulate Hi-Fi data collected under Lo-Fi conditions. Note that the resolution of the original audio is 44,100 Hz, and the resolutions of the downsampled audio are 22,050 Hz and 11,025 Hz, respectively. The compressed and then decompressed audio data are used to verify the ASR case. The anomaly detection performance, measured with the area under the receiver operating characteristic curve (AUROC) <cit.>, is summarized in Table <ref>. When ASR is used, the average performance decreases to the 92% level of the original Hi-Fi audio case.
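A hedged sketch of this evaluation, assuming a generic reconstruction-error anomaly score (the `detector` interface below is an illustrative assumption, not the exact model used in the experiments), could be:

```python
# Compute AUROC from reconstruction-error anomaly scores on held-out clips.
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_scores(detector, clips):
    # mean squared reconstruction error per clip, used as the anomaly score
    return np.array([np.mean((detector(c) - c) ** 2) for c in clips])

def evaluate(detector, normal_clips, abnormal_clips):
    scores = np.concatenate([anomaly_scores(detector, normal_clips),
                             anomaly_scores(detector, abnormal_clips)])
    labels = np.concatenate([np.zeros(len(normal_clips)),
                             np.ones(len(abnormal_clips))])
    return roc_auc_score(labels, scores)

# Comparing detectors trained on f_44, f_22, f_11 and ASR-decompressed audio:
# for name, det in {"f44": det44, "f22": det22, "f11": det11, "asr": det_asr}.items():
#     print(name, evaluate(det, test_normal, test_abnormal))
```

Running the same routine on detectors trained with the original, downsampled and ASR-decompressed audio gives the kind of comparison summarized in Table <ref>.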
However, we confirm that the anomaly detection performance of our method is more compliant than Lo-Fi. §.§ Resolution integration We verify that it can be integrated into equal-level of high-resolution audio through the encoding and decoding process when the resolution (sampling rate) of the collected audio is different. If this premise is satisfied, it can guarantee that the proposed data compression and restoration framework via the ASR model works properly no matter which audio resolution. If the data collected at low-resolution can be converted into high-resolution, it can be helpful to build a high-performance anomaly detection model considering the case where low-cost and low-resolution microphones are inevitably installed according to the budget. When the three samples, assuming original high-resolution data and low-resolution data, are upsampled through the ASR model, they show almost the same difference from the original as summarized in Table <ref>. Thus, we confirm that data of arbitrary resolution can be integrated into high-quality audio at the central server. § CONCLUSION We propose a method based on the pre-trained ASR model for storing as many audio samples as possible in an edge computer with limited storage capacity installed for the purpose of road anomaly detection. Our method shows that adequate performance can be obtained by using only one generalized encoder-decoder pair instead of each encoder-decoder corresponding to each post or type of road with high environmental diversity. Each audio is highly compressed from its original size of 173 KiB per second to 5 KiB, showing that it can store up to 34.6× as many audio samples. In addition, even if an anomaly detection model is trained by collecting compressed audio samples at the central computer, an appropriate level can be achieved. Some degradation of the anomaly detection performance is caused by a slight information loss during the encoding and decoding process but there is room for improvement via proper encoder-decoder pairs. In future work, we plan to explore the encoder-decoder model with better generalization ability or trained on the road noise domain as a way to construct more stable systems. § ACKNOWLEDGEMENTS We are grateful to all the members of SK Planet Co., Ltd., who have supported this research, providing equipment for the experiment. 00 park_ncae_2022 YeongHyeon Park, and JongHee Jung. “Efficient Non-Compression Auto-Encoder for Driving Noise-based Road Surface Anomaly Detection." IEEJ Transactions on Electrical and Electronic Engineering (2022). ryu_camera_2015 Seung-Ki Ryu, Taehyeong Kim, and Young-Ro Kim. “Image-based pothole detection system for ITS service and road management system." Mathematical Problems in Engineering 2015 (2015): 1-10. rui_camera_2019 Rui Fan, Mohammud Junaid Bocus, Yilong Zhu, Jianhao Jiao, Li Wang, Fulong Ma, Shanshan Cheng, and Ming Liu. “Road crack detection using deep convolutional neural network and adaptive thresholding." 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019. bibi_camera_2021 Rozi Bibi, Yousaf Saeed, Asim Zeb, Taher M Ghazal, Taj Rahman, Raed A Said, Sagheer Abbas, Munir Ahmad, and Muhammad Adnan Khan. “Edge AI-based automated detection and classification of road anomalies in VANET using deep learning." Computational intelligence and neuroscience 2021 (2021): 1-16. vojir_segmentation_2021 Tomas Vojir, Tomáš Šipka, Rahaf Aljundi, Nikolay Chumerin, Daniel Olmeda Reino, and Jiri Matas. 
“Road anomaly detection by partial image reconstruction with segmentation coupling." Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2021. ershadi_camcond_2017 Nastaran Yaghoobi Ershadi and José Manuel Menéndez. “Vehicle tracking and counting system in dusty weather with vibrating camera conditions." Journal of Sensors 2017 (2017). qian_lidar_2021 Kun Qian, Shilin Zhu, Xinyu Zhang, and Li Erran Li. “Robust multimodal vehicle detection in foggy weather using complementary lidar and radar signals." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021. song_aging_2023 Young Jong Song, Ki Hyun Nam, and Il Dong Yun. “Anomaly Detection through Grouping of SMD Machine Sounds Using Hierarchical Clustering." Applied Sciences 13.13 (2023): 7569. azar_tradcomp_2019 Joseph Azar, Abdallah Makhoul, Mahmoud Barhamgi, and Raphaël Couturier. “An energy efficient IoT data compression approach for edge machine learning." Future Generation Computer Systems 96 (2019): 168-175. li_AST_unet_2022 Yuang Li, Yuntao Wang, Xin Liu, Yuanchun Shi, Shwetak Patel, and Shao-Fu Shih. “Enabling Real-Time On-Chip Audio Super Resolution for Bone-Conduction Microphones." Sensors 23.1 (2022): 35. han_NuWave_2022 Seungu Han, and Junhyeok Lee. “NU-Wave 2: A general neural audio upsampling model for various sampling rates." arXiv preprint arXiv:2206.08545 (2022). alex_EnCodec_2022 Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. “High fidelity neural audio compression." arXiv preprint arXiv:2210.13438 (2022). salau_sensor_2018 H. Bello-Salau, A.M. Aibinu, A.J. Onumanyi, E.N. Onwuka, J.J. Dukiya, and H. Ohize. “New road anomaly detection and characterization algorithm for autonomous vehicles." Applied Computing and Informatics 16.1/2 (2018): 223-239. sattar_sensor_2021 Shahram Sattar, Songnian Li, and Michael Chapman. “Developing a near real-time road surface anomaly detection approach for road surface monitoring." Measurement 185 (2021): 109990. park_foi_2023 YeongHyeon Park, Myung Jin Kim, and Won Seok Park. “Frequency of Interest-based Noise Attenuation Method to Improve Anomaly Detection Performance." 2023 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2023. zhao_dataquality_2021 Yuxiang Zhao, Wenhao Wu, Yue He, Yingying Li, Xiao Tan, and Shifeng Chen. “Good practices and a strong baseline for traffic anomaly detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021. lu_tradcomp_2021 Shaofei Lu, Qinhua Xia, Xiaolin Tang, Xuyang Zhang, Yingping Lu, and Jingke She. “A reliable data compression scheme in sensor-cloud systems based on edge computing." IEEE Access 9 (2021): 49007-49015. zhang_neuralcomp_2021 Shifeng Zhang, Ning Kang, Tom Ryder, Zhenguo Li. “iflow: Numerically invertible flows for efficient lossless compression via a uniform coder." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021): 5822-5833. azar_neuralcomp_2022 Joseph Azar, Gaby Bou Tayeh, Abdallah Makhoul, and Raphaël Couturier. “Efficient Lossy Compression for IoT Using SZ and Reconstruction with 1D U-Net." Mobile Networks and Applications 27.3 (2022): 984-996. fawcett_auroc_2006 Fawcett, Tom. “An introduction to ROC analysis." Pattern recognition letters 27.8 (2006): 861-874.
http://arxiv.org/abs/2307.07390v2
20230714145927
GENEVA Monte Carlo: status and new developments
[ "Giulia Marinelli" ]
hep-ph
[ "hep-ph" ]
GENEVA Monte Carlo: status and new developments Giulia Marinelli Università degli Studi di Milano-Bicocca & INFN, Piazza della Scienza 3, Milano 20126, Italy We review the GENEVA Monte Carlo framework, which combines three theoretical tools used for precise QCD predictions into a single structure. In this talk we highlight its main features, discussing some new improvements involving both colour-singlet production and the production of final states with heavy coloured partons and jets. DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023 § THE GENEVA FRAMEWORK GENEVA, which stands for GENerate EVents Analytically, is a Monte Carlo event generator which combines fixed-order (FO) predictions with resummed ones, and then merges them with a parton shower (PS). The aim of GENEVA is to obtain fully exclusive hadronized events by combining these three descriptions, while maintaining the perturbative accuracy of a higher-order calculation. It consistently improves the perturbative accuracy away from the fixed-order regions and provides a systematic estimate of theoretical perturbative uncertainties on an event-by-event basis. It provides fully differential fixed-order calculations up to next-to-next-to-leading order (NNLO), which are then combined with higher-order resummation in the N-jettiness resolution variable. Currently, for colour-singlet processes, the resummation is carried out up to next-to-next-to-next-to-leading logarithmic (N^3LL) accuracy for zero-jettiness (𝒯_0) and to next-to-leading logarithmic (NLL) accuracy for one-jettiness (𝒯_1), through Soft Collinear Effective Theory (SCET) or transverse momentum resummation. The resulting parton-level events are further combined with parton shower, hadronization and multiple particle interaction (MPI) simulations. GENEVA employs the infrared-safe resolution variables 𝒯_0 and 𝒯_1 to classify events as having 0 (Φ_0), 1 (Φ_1) or 2 (Φ_2) jets. This classification is done by comparing the value of the resolution variables with two cutoffs for each generated configuration. Emissions below the resolution cutoff, 𝒯_N < 𝒯^cut_N, are considered unresolved and are integrated over. All the configurations with 𝒯_0 < 𝒯_0^cut are Φ_0 events, the ones with 𝒯_0 > 𝒯_0^cut but 𝒯_1 < 𝒯_1^cut are Φ_1 events, and the remaining configurations with 𝒯_1 > 𝒯_1^cut are Φ_2 events. A fundamental role in the Monte Carlo event generator is played by the splitting functions, which are needed for the mappings used to project the configurations into the N-jet bins and to ensure that all the terms in the Monte Carlo cross section are fully differential in Φ_N. One of the recent improvements is the on-the-fly evaluation of the splitting functions. Consider a generic splitting N → N+1; the corresponding splitting function 𝒫(Φ_N+1) is defined as 𝒫(Φ_N+1) = f_kj(Φ_N, 𝒯_N, z) / [ ∑_k'=1^N+2 ∫_z_min^k'(Φ_N, 𝒯_N)^z_max^k'(Φ_N, 𝒯_N) dz' J_k'(Φ_N, 𝒯_N, z') I_ϕ^k'(Φ_N, 𝒯_N, z') ∑_j'=1^n_split^k' f_k'j'(Φ_N, 𝒯_N, z') ] , where the f_kj are generic functions based on the Altarelli-Parisi splitting functions (depending on whether the radiation is in the initial or final state), z is an energy ratio and ϕ an azimuthal angle. These variables are needed, besides 𝒯_N, to define a splitting Φ_N → Φ_N+1. In Eq.
(<ref>), J is the Jacobian of the change of variables and I_ϕ is the integral over ϕ. Under the assumption that the Jacobian does not depend on the ϕ variable (which holds true for the splittings used in the framework), the integral in the denominator is computed for each generated configuration, considering the appropriate limits of integration in z and ϕ. More details and a comprehensive study of this development can be found in Refs. <cit.>. Another new development in the framework is the extension of the shower interface, originally designed for the shower of <cit.>, to include other parton showers such as <cit.> and <cit.>. These parton showers differ mainly in their choice of the evolution variable, which, along with the starting scale of the shower matched to the FO and resummed result, determines the extent to which the parton shower explores the phase space beyond the strict soft and collinear limits. Investigating the effects of different parton showers allows a realistic estimate of the uncertainty associated with the shower matching. § STATUS OF THE GENEVA EVENT GENERATOR Until now, numerous colour-singlet processes have been implemented in the framework. One of the earliest processes was Drell-Yan production, first with the N-jettiness resummation <cit.> and later with the q_T one <cit.>. We then implemented the Higgsstrahlung process <cit.> and the NNLL' resummed two-jettiness distribution for decays of the Higgs boson to b b̅ and gg <cit.>. Photon-pair and W γ production are also available <cit.>. Recently, we presented Z-boson pair production <cit.> and the zero-jettiness resummation for t t̅ production <cit.>. Some of the latest processes implemented in GENEVA involve colour-singlet production with gluons in the initial state. Specifically, we studied single and double Higgs boson production, see Refs. <cit.>. Both processes are considered in the infinite top-quark mass limit (m_t →∞), where the top quark is treated as infinitely heavy and is integrated out. Apart from the current interest in these processes within the particle physics community, their implementation allowed us to directly test the new developments discussed above. To present the effects of the improved splitting-function implementation, in Fig. <ref> we show the comparison between the various contributions to the cross section at NLO+NLL' (left panel) and NNLO+NNLL' (right panel) for the transverse momentum of the Higgs boson, both for the original and the improved version of the splitting-function implementation. The soft and collinear limit for the 𝒫_0 → 1 splitting (left panel) is now correctly reproduced, as can be seen from the nonsingular distribution converging to zero. The same limit for the 𝒫_1 → 2 splitting (right panel) is improved, but it appears to miss a single-logarithmic contribution. Indeed, the nonsingular distribution converges to a nonzero constant at low values of the transverse momentum. In Fig. <ref>, we show the showered GENEVA result (QCD+QED shower, including MPI) for the transverse momentum of the Higgs boson, compared with the latest experimental results for the Higgs boson inclusive and differential cross sections in the H →γγ decay channel. We use matrix elements computed in the infinite top-quark mass limit and rescaled in the rEFT scheme. The contributions from other Higgs boson production modes (denoted as XH) are included by summing them to the results for the gluon-fusion channel alone. The XH distributions are obtained from the plots in ATLAS and CMS publications. We find overall good agreement between the predictions and the measurements.
There is a slight deviation in the peak region and a more pronounced discrepancy in the tail of the distribution, where the infinite top-quark mass approximation is less accurate. Notably, the results include 7-point scale variations. Considering now double Higgs boson production, in Fig. <ref> we show a comparison between the partonic result and the three showered results for the invariant mass (left panel) and the transverse momentum of the Higgs pair (right panel). We observe good agreement between the partonic and showered levels, both for inclusive and exclusive distributions. Note also that in principle the parton shower is not required to preserve the transverse momentum, but all the shower predictions largely agree among themselves and with the partonic result, except for the first bin, where they agree within uncertainties. Regarding the two processes presented here, it would be useful to include the top-quark mass corrections to both single and double Higgs boson production. Additionally, for processes which involve heavy coloured partons in the final state, we are working towards the implementation of an NNLO+PS framework for the V+jets process in GENEVA, with the extension of the one-jettiness resummation up to N^3LL.
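As a side illustration of the jettiness-based event classification described in the first section, the short sketch below encodes the cutoff logic in plain Python; the function, variable names and cutoff values are ours and purely illustrative, not part of the GENEVA code.

```python
def classify_event(tau0, tau1, tau0_cut=1.0, tau1_cut=1.0):
    """Assign an event to the Phi_0, Phi_1 or Phi_2 bin from its zero- and
    one-jettiness values, following the cutoff logic described in the text."""
    if tau0 < tau0_cut:
        return "Phi_0"   # emission below T0^cut: unresolved, 0-jet event
    if tau1 < tau1_cut:
        return "Phi_1"   # first emission resolved, second unresolved
    return "Phi_2"       # both emissions resolved: 2-jet event

# toy usage with arbitrary jettiness values (in GeV)
for t0, t1 in [(0.4, 0.0), (5.0, 0.3), (5.0, 2.5)]:
    print(t0, t1, classify_event(t0, t1))
```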
http://arxiv.org/abs/2307.06267v1
20230712161157
Physics-informed Machine Learning for Calibrating Macroscopic Traffic Flow Models
[ "Yu Tang", "Li Jin", "Kaan Ozbay" ]
cs.LG
[ "cs.LG" ]
Physics-informed Machine Learning for Calibrating Macroscopic Traffic Flow Models Yu Tang, Li Jin and Kaan Ozbay This work was in part supported by US NSF Award CMMI-1949710, USDOT Award # 69A3551747124 via the C2SMART Center, NYU Tandon School of Engineering, SJTU UM Joint Institute, and J. Wu & J. Sun Endowment Fund. Y. Tang is with C2SMART Center, Department of Civil & Urban Engineering, Tandon School of Engineering, New York University, USA. L. Jin is with the UM Joint Institute and with the Department of Automation, Shanghai Jiao Tong University, China. K. Ozbay is with C2SMART Center, Department of Civil & Urban Engineering, Tandon School of Engineering, New York University, 11201, USA. (emails: [email protected], [email protected], [email protected]). Extended Abstract Key Words: Physics-informed machine learning, traffic flow models, parameter identification. § INTRODUCTION §.§ Motivation Macroscopic traffic flow models have been shown to be capable of reproducing congestion propagation and explaining complicated phenomena, such as capacity drops <cit.> and stop-and-go waves <cit.>. They provide a solid foundation for the performance analysis of traffic systems <cit.> and control design for freeway management <cit.>. However, before macroscopic models are put into practice, they should be carefully calibrated to accurately replicate the real-life complications briefly mentioned above. Extensive studies have been devoted to the calibration of traffic flow models <cit.>. They mainly utilized optimization algorithms to determine model parameters over a certain period (e.g., morning or evening peaks) with multiple days' data, but only a few of them, with the goal of testing for parameter transferability <cit.>, validated the calibrated models for the same period but on different days. This transferability can be poor since road traffic is prone to perturbations induced by demand variations, weather conditions, driving behavior, and so on, as illustrated by the example in Figure <ref>: our data analysis indicates that daily re-calibration is necessary for quantifying parameter uncertainties. Admittedly, one can apply optimization-based approaches to day-to-day calibration, but this can be cumbersome in practice. First, these methods bring about heavy computation costs when applied to long-term modeling over several months or years. They typically involve non-convex and even non-smooth problems that are hard to solve, let alone repeating the calibration procedure for each day. The second problem arises from data quality.
Traditional traffic sensors, such as widely-used inductive loops, are infamously unreliable. For instance, it is reported that only around 64% of detectors work any given day in the California's freeway system <cit.>. Figure <ref> illustrates practical observation rates of induction loops that are closely related with how many convincing observations are collected. Clearly, day-to-day optimization methods are vulnerable to the fluctuation in data quality. However, limited research discussed how to fulfill parameter identification on corrupted data in a robust way. In response to the above challenges, we develop a novel physics-informed, learning-based approach of identifying traffic flow parameters across days, including capacity, free flow speed, jam density and congestion wave speed. The proposed method belongs to the unsupervised machine learning category since the actual values of parameters are unknown in advance. We inform our machine learning model of physics knowledge about traffic flows so that it can achieve the calibration even without ground truths of the parameters. Once well trained, our method is expected to yield reasonable parameter values given boundary conditions and traffic measurements with possibly missing values. It provides efficient calibration over massive periods with robustness to incomplete data. §.§ Our contributions We try to address two main questions: (i) How can we train an encoder, given perfect freeway measurements, to yield appropriate traffic parameters that are comparable to those calibrated by traditional methods? (ii) Can we also complete the task in (i), even given corrupted traffic data? To resolve (i), we extend the deep autoencoder <cit.>, a classical unsupervised machine learning approach for dimension reduction, i.e., converting high-dimensional data into latent variables, typically with lower dimensions, by minimizing the error between encoder inputs and decoder outputs; see Figure <ref>a). Generally, the original low-dimensional representations do not have significant physical meanings; they cannot be recognized as parameters of traffic flow models. To address this problem, we also feed encoder outputs into traffic flow models and inform our decoder of extra discrepancies between the simulation result and its own output; see Figure <ref>b). Clearly, minimizing the new error encourages the decoder to learn physical laws of traffic flows, and the total error decreases only when the encoder yields appropriate parameters of traffic flow models. Besides, we introduce the concept of conditional generation. That is, the decoder relies not only on latent variables (parameters), but also on boundary conditions, such as upstream traffic volumes. This is a straightforward but indispensable extension since traffic observations, mimicked by the decoder, are determined by these conditions. Note this paper considers the classical first-order cell transmission model (CTM). Assuredly, high-order models like Payne-Whitham (PW) model <cit.>, along with its discretized version METANET <cit.>, and Aw-Rascle-Zhang (ARZ) model <cit.> can reproduce traffic phenomena with higher accuracy, but they have more parameters that could complicate the training process. To the best of our knowledge, this is the first attempt to estimate traffic flow parameters using unsupervised training. It is thus worthwhile starting with a simple model. We leave identification of high-order models as a future research task. 
Specially, it will be interesting to investigate whether we can accelerate it via calibrating first-order models since first- and high-order models have common traffic flow parameters. To answer (ii), we integrate denoising autoencoder <cit.>, a simple but robust variant of the original autoencoder, into the calibration approach. That is, we use unspoiled sensor readings to generate new data with partially missing values, which mimics the pattern of real unreliable data, and then apply this synthesized data to training; see Figure <ref>c). It allows to deploy the parameter identification on real data with missing values after training. Ideally, if there are too many missing values, any method will learn nothing and end up with poor calibration. Thus we also provide a sensitivity analysis by controlling the missing rates to empirically reveal how our approach degenerates. §.§ Related work Most of the previous work on calibrating macroscopic traffic flow models employed optimization methods applied over specific days. One approach is to estimate the fundamental diagram (FD), especially for first-order traffic flow models <cit.>. It requires little computation, but may suffer from accuracy losses. First, fitting the left part of FD, namely free-flow regime, is usually easy, but the same task can be hard for the right part since traffic data collected during congestion periods are sparse and scattered. Besides, individual calibration of FDs along a freeway corridor fails to capture flow interactions. More studies formulated the calibration problem as mathematical programming that considered flow dynamics specified by first- or high-order models <cit.>. In this case, one needs to solve a non-convex optimization problem with locally optimal solutions, and thus heuristic optimization/search algorithms, such as simultaneous perturbation stochastic approximation, simulated annealing, etc, can be used; please see <cit.> for a comprehensive review. Recently, some researchers discussed the idea of learning-based calibration in the context of traffic state estimation (TSE). That is, they integrated traffic flow models into machine learning methods to enhance TSE <cit.>, to incorporate simultaneous parameter identification and state estimation techniques. It should be pointed out that these studies are different from what is proposed in this paper. First, they put emphasis on TSE that used partial observations to infer full states. Thus, they divided training and testing data by sensor locations to evaluate transferability over space. By contrast, our method takes in full observations and returns traffic parameters. We desire transferability over time periods and thereby separate training and testing data by time. Second, although the current studies can update traffic parameters, they still require relatively good initial values that are normally obtained from classical calibration approaches <cit.> because poor initial traffic parameters do not guarantee convergence <cit.>. This kind of good initial parameter estimates, however, is not necessarily required by our approach. Our proposed method is inspired by deep autoencoder-based system identification which has emerged in recent years. Depending on identification goals, the latent variables, obtained from the autoencoder, can be recognized as either states or parameters. If one wants to fit a dynamical model approximating to the physical one, the encoder gives the states <cit.>; see the full framework in Figure <ref>a). 
If the target is exact parameters of physical models, the encoder can yield calibration results as well; see Figure <ref>b). In that case the decoder is a physical model rather than a NN. Up to now, this framework has only been applied to identification of linear time-invariant (LTI) systems <cit.>. Though it has the same objective as our method, it cannot be directly applied in our problem setting for the following reasons. First, LTI systems have closed-form solutions, and it is convenient to compute gradients with respect to parameters, which implies easy training. However, when physical models, like traffic flow models, are too complicated to have analytical solutions, gradient calculation becomes very hard by making the training inefficient. Second, it is assumed that step signals are standard inputs for all LTI systems and they are thus ignored in the identification framework. In practice, however, we cannot always manipulate model inputs, i.e., boundary conditions, and should include them in the learning-based calibration. § PROBLEM STATEMENT In this section, we generally state the calibration problem and then explicitly present the considered freeway model. It consists of a dynamics model, the CTM incorporating capacity drop, and an observation model. §.§ Learning-based calibration problem We consider a freeway corridor with K mainline cells, K on-ramp buffers, and K off-ramps, as shown in Figure <ref>. The kth buffer has a state of queue length, denoted by q_k(t), and the kth cell is characterized by traffic density, denoted by ρ_k(t). Note that the first buffer is not an actual on-ramp; instead, it represents the upstream freeway section and stores the upstream mainline traffic. The kth buffer is subject to a time-varying demand α_k(t)∈ℝ_≥0. In addition, we apply mainline ratio η_k(t)∈[0, 1] to model off-ramp flows. This ratio denotes the fraction of traffic from cell k entering cell k+1; the remaining traffic flow leaves the freeway at the kth off-ramp. We also assume that the last cell K discharges outflows at a speed of v_K(t), which can be measured as the downstream boundary condition. Finally, we denote by f_k(t) the flows from cell k to the downstream cell and by r_k(t) the flow from buffer k to cell k. Now we suppose that the following freeway model: ρ(t+1) = F(ρ(t), q(t), u(t);θ), q(t+1) = G(ρ(t), q(t), u(t);θ), y(t) = H(ρ(t),q(t)), where F and G in (<ref>)-(<ref>) denote the dynamics models parameterized by θ, and H in (<ref>) is the observation model with the ideal senor output y(t), i.e., without missing values. We let y^obs(t) represent real observations of y(t), which can contain missing values. Clearly, the traffic flow model assumes constant parameters. Though we admit variation in traffic parameters, it is still acceptable to assume stationary values over a certain period, i.e., morning or evening peaks <cit.>. This is the basis of our method. Our calibration does not address abrupt parameter changes due to incidents and other unexpected disruptions. This kind of real-time parameter identification that may be solved by online machine learning algorithms is a future research task. For convenience of notation, we let θ_p, 𝐌_p:= (y^obs(0), y^obs(1), ⋯, y^obs(T))_p and 𝐁_p:=(ρ(0), u(0), u(1), ⋯, u(T))_p denote the period-specific parameter, real measurements and boundary conditions, respectively. The calibration problem is formulated as follows. Suppose 𝒫^train is a set of periods, each of them with T+1 time steps. 
Given observations {𝐌_p}_p∈𝒫^train and boundary conditions {𝐁_p}_p∈𝒫^train, we aim at training a machine learning model so that it can calibrate the model parameters not only on the training data set but also on the testing ones 𝒫^test, {𝐌_p}_p∈𝒫^test and {𝐁_p}_p∈𝒫^test. Since we do not have true values of {θ_p}_p∈𝒫^train and {θ_p}_p∈𝒫^test, we evaluate the calibrated results {θ̂_p}_p∈𝒫^train and {θ̂_p}_p∈𝒫^test by comparing real measurements with those from the model (<ref>). §.§ Freeway model In the following, we first use the CTM to specify the dynamics functions F and G, and the parameter θ in (<ref>)-(<ref>). The CTM is favored due to its simplicity and wide use, and it is not a necessary requirement. Then we consider induction loops for the observation function H and the sensor output y(t) in (<ref>). Although Lagrangian sensors, such as floating cars, with higher accuracy have been previously studied <cit.>, their observations could be sparse given low market penetration rates. Besides, there is limited public access to them. By contrast, we have rich sources of induction loop data, which supports training and validation of learning-based models. §.§.§ Dynamics model The flow from buffer k to cell k, r_k, is specified by r_k(t) = min{α_k(t)+q_k(t)/δ_t,U_k, w_k(ρ^max_k-ρ_k(t))}, where δ_t denotes time step size, U_k denotes capacity of buffer k, ρ_k^max denotes jam density of cell k, and w_k denotes congestion wave speed of cell k. Then the flows between cells, f_k for k=1,2,⋯,K, are given by f_k(t) =η_k(t)min{v_kρ_k(t), Q_k(t),w_k+1(ρ^max_k+1-ρ_k+1(t))-r_k+1(t)},  1≤ k ≤ K-1 f_K(t) = v_K(t)ρ_K(t) where δ_t denotes time step size, v_k denotes free-flow speed of cell k, Q_k(t) denotes capacity of cell k, ρ_k^max denotes jam density of cell k, and w_k denotes congestion wave speed of cell k. The flow functions (<ref>)-(<ref>) indicate higher merging priority of on-ramp flows and the first-in-first-out rule for off-ramp flows <cit.>. Besides, note that we consider time-varying capacity Q_k(t) which allows to model capacity drop as follows: Q_k(t) = Q^nominal_k, ρ_k(t)≤ρ_k^critical, Q^drop_k, ρ_k(t) > ρ_k^critical, where ρ_k^critical:=Q_k^nominal/v_k denotes critical density of cell k; see more discussions and implementations of capacity drop in <cit.>. Then, by the conservation law of flows, the traffic dynamics is given by q_k(t+1) = q_k(t) + δ_t(α_k(t) - r_k(t)), 1≤ k ≤ K, t=0, 1, ⋯, ρ_1(t+1) = ρ_1(t) + δ_t/ℓ_1(r_1(t)-f_1(t)/1-η_1(t)), t=0, 1, ⋯, ρ_k(t+1) = ρ_k(t) + δ_t/ℓ_k(r_k(t)+f_k-1(t)-f_k(t)/1-η_k(t)), 2≤ k ≤ K, t=0, 1, ⋯, where ℓ_k denotes the length of cell k. Note that the model above assumes infinite-sized buffers. It helps to store boundary inflows given insufficient cell space that is probably caused by the selection of inappropriate parameters during the training process. We also introduce U_k, k=1,2,⋯,K, to prevent unrealistically large inflows when there are long queues at buffers. All of these parameters can be specified in advance. Then the parameters to be calibrated are presented below: θ = ({v_k}_k=1^K-1, {Q_k^nominal}_k=1^K-1, {Q_k^drop}_k=1^K-1{ρ_k^max}_k=1^K, {w_k}_k=1^K). §.§.§ Observation model In practice, induction loops update measurements of flow rates and speed at a certain frequency Δ_t that is larger than the time step size δ_t of the traffic model. We suppose Δ_t=mδ_t with a multiple m∈ℤ_>0. 
Then the sensor outputs are given by r̅_k(t) = ∑_i=t-m+1^t r_k(i)/m, 1≤ k ≤ K, t=m, 2m, ..., f̅_k(t) = ∑_i=t-m+1^t f_k(i)/m, 1≤ k ≤ K, t=m, 2m, ..., v̅_k(t) = ∑_i=t-m+1^t f_k(i)v_k(i)/∑_i=t-m+1^t f_k(i), 1≤ k ≤ K, t=m, 2m, ..., where v_k(t) := f_k(t)/ρ_k(t) denotes traffic speed of cell k at time t. Clearly, (<ref>)-(<ref>) yield y(t)=[r̅_1(t), r̅_2(t),⋯,r̅_K(t),f̅_1(t), f̅_2(t),⋯,f̅_K(t), v̅_1(t), v̅_2(t),⋯,v̅_K(t)]^T, t=m,2m,⋯. § PROPOSED METHOD In this section, we first introduce the autoencoder-based parameter identification that assumes complete traffic measurements and then present its extension for corrupted data. §.§ Autoencoder-based parameter identification At first, the encoder outputs the estimated parameter θ̂_p given the measurement 𝐌_p and the boundary condition 𝐁_p. Then the parameter, along with the boundary condition, is passed into the encoder and the CTM to obtain 𝐌̂_p and 𝐌̃_p, respectively. The computations are presented below: θ̂_p = E(𝐌_p, 𝐁_p; w_E), 𝐌̂_p = D(θ̂_p, 𝐁_p; w_D), 𝐌̃_p = C(θ̂_p,𝐁_p), where E, D, C denote the encoder, the decoder and the CTM. Note that both the encoder and the decoder are neural networks, parameterized by weights w_E and w_D respectively. The estimated parameters θ̂_p should fall within feasible ranges; otherwise it cannot be passed into the CTM. Thus we exploit the sigmoid function for rescaling in the last layer of the encoder as follows: θ̂_p = (θ^max - θ^min)⊙sigmoid(θ̂^unscaled_p)+ θ^min where ⊙ denotes element-wise product, θ̂_p^unscaled is the output from the previous layer, and θ^min (resp. θ^max) is a lower bound (resp. an upper bound) of traffic parameters. As for training, we consider minimizing the following loss function: min_w_E,w_D L = min_w_E,w_D∑_p∈𝒫^train||𝐌̂_p-𝐌̃_p||_2^2 + γ∑_p∈𝒫^train ||𝐌_p-𝐌̂_p||_2^2 where γ is a loss weight. Clearly, the loss function includes two parts. Minimizing the first part ensures that the decoder learns the physical laws of the CTM, and then minimizing the second part induces the encoder to yield appropriate traffic flow parameters. §.§ Denoising autoencoders-based parameter identification Considering the observation 𝐌_p could have missing values, we define a binary matrix 𝐈_p, with the same dimension as 𝐌_p, to indicate whether the corresponding data is missing. We first sample from the set of complete observations and then from {𝐈_p}_p∈𝒫^train to assemble a new observation 𝐌_p^*. Although it may have missing values, their corresponding ground truths are known. We feed the artificial observation 𝐌_p^* to the encoder. That is, instead of (<ref>), we use the following θ̂_p = E(𝐌_p^*, 𝐁_p; w_E) but we still apply (<ref>) to training. § EXPERIMENTAL DESIGN §.§ Data preparation We consider the freeway segment, up to 6.2 kilometers, shown in Figure <ref>. It has 6 on-ramps and 4 off-ramps. We divide it into 9 cells based on locations of on-ramps, off-ramps and sensors. We also collected 5-min traffic flow and speed data from 2017 to 2019 via the PeMS <cit.>. The preliminary data analysis showed recurrent morning and evening peaks. Thus we separately calibrate for these two periods for each day. We select the data of 2017 and 2018 as the training dataset and the data of 2019 as the testing dataset. §.§ Preliminary results We have trained our machine learning model for the calibration given complete sensor measurements. We set the structure of neural networks, as the encoder and the decoder, based on the LeNet <cit.> with customized input and output layer sizes. 
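To make the training objective above concrete, here is a minimal PyTorch-style sketch of the sigmoid rescaling and the two-part loss; the encoder, decoder and CTM simulator are assumed to be provided elsewhere, and all names are ours, so this is only an illustration of the objective rather than the authors' implementation.

```python
import torch

def rescale(theta_unscaled, theta_min, theta_max):
    # Map raw encoder outputs into the feasible parameter box via a sigmoid.
    return (theta_max - theta_min) * torch.sigmoid(theta_unscaled) + theta_min

def training_loss(encoder, decoder, ctm_simulate, M, B, theta_min, theta_max, gamma=0.8):
    """One evaluation of the physics-informed loss
    ||M_hat - M_tilde||^2 + gamma * ||M - M_hat||^2 for a batch of periods;
    gamma defaults to the loss weight quoted in the text."""
    theta_hat = rescale(encoder(M, B), theta_min, theta_max)  # estimated CTM parameters
    M_hat = decoder(theta_hat, B)                             # decoder reconstruction
    M_tilde = ctm_simulate(theta_hat, B)                      # CTM-simulated measurements
    physics_term = torch.sum((M_hat - M_tilde) ** 2)          # decoder learns the CTM physics
    data_term = torch.sum((M - M_hat) ** 2)                   # encoder matches real data
    return physics_term + gamma * data_term
```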
We also selected the loss weight γ=0.8. Figure <ref> presents the training loss. We tested the trained model on March 26, 2019. For that, we first used the encoder to yield the parameters required by the CTM and then applied the traffic flow model to numerical simulation. Figures <ref> and <ref> present the comparison in terms of traffic speed and density, respectively. These results show that our machine learning model yields a reasonable calibration, so that the traffic flow model can reproduce the congestion pattern. § SUMMARY AND FUTURE WORK In this paper, we propose a physics-informed, learning-based calibration approach, inspired by autoencoders comprising one encoder and one decoder, that is expected to achieve performance comparable to that of optimization-based methods while not requiring day-to-day optimization after training. We consider calibrating the CTM, a widely-used traffic flow model. In our approach, the encoder takes as input traffic measurements and boundary conditions, and yields the parameters required by the CTM; the decoder recovers the measurements from the encoder output and the boundary conditions. In particular, we feed the decoder input to the CTM and inform the autoencoder of a novel error between the decoder output and the simulation results, besides the conventional error between the traffic measurements and the decoder output. This encourages the encoder to produce reasonable parameters so that the new error is minimized. We also introduce the denoising autoencoder into our calibration method so that it can handle corrupted data. We expect to demonstrate the effectiveness of the proposed method through a case study of I-210 E in California. We are currently continuing to train our machine learning model to test the calibration with corrupted data. In the full version of the paper, we will also compare our method with several benchmarks. We will first consider the case of complete sensor measurements. We will then apply case-by-case optimization on the testing data, one baseline fitting the fundamental diagrams <cit.> and the other solving non-convex optimization <cit.>. Then, for the case of corrupted data, we will test these two methods for different scenarios, one only with raw data and the other with imputed data.
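As a companion to the formulation above, the following is a minimal sketch of one CTM update step from Section 2.2 under simplifying assumptions: off-ramps are ignored (η_k = 1) and the capacity is held constant (the capacity-drop switch is omitted). The array layout and toy numbers are ours; this is not the authors' simulator.

```python
import numpy as np

def ctm_step(rho, q, alpha, params, dt=1.0):
    """One update of a K-cell CTM with on-ramp buffers (mainline flows only)."""
    v, Q, rho_max, w, U, ell = (params[k] for k in ("v", "Q", "rho_max", "w", "U", "ell"))
    K = len(rho)
    # inflows from the buffers (the first buffer stores the upstream mainline demand)
    r = np.minimum.reduce([alpha + q / dt, U, w * (rho_max - rho)])
    # cell-to-cell flows; the last cell discharges at the downstream boundary speed
    f = np.empty(K)
    f[:-1] = np.minimum.reduce([v[:-1] * rho[:-1], Q[:-1],
                                w[1:] * (rho_max[1:] - rho[1:]) - r[1:]])
    f[-1] = v[-1] * rho[-1]
    inflow = np.concatenate(([0.0], f[:-1]))     # f_{k-1} entering cell k
    q_next = q + dt * (alpha - r)                # buffer queue conservation
    rho_next = rho + dt / ell * (r + inflow - f) # cell density conservation
    return rho_next, q_next

# toy 3-cell example (units arbitrary)
p = dict(v=np.array([30.0, 30.0, 30.0]), Q=np.array([6.0, 6.0, 6.0]),
         rho_max=np.array([0.4, 0.4, 0.4]), w=np.array([6.0, 6.0, 6.0]),
         U=np.array([2.0, 1.0, 1.0]), ell=np.array([500.0, 500.0, 500.0]))
rho, q = np.full(3, 0.05), np.zeros(3)
rho, q = ctm_step(rho, q, alpha=np.array([1.0, 0.2, 0.0]), params=p)
```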
http://arxiv.org/abs/2307.04331v1
20230710035458
On the Jets Induced by a Cavitation Bubble Near a Cylinder
[ "Yuxin Gou", "Junrong Zhang", "Akihito Kiyama", "Zhao Pan" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
On the Jets Induced by a Cavitation Bubble Near a Cylinder Yuxin Gou, Junrong Zhang, Akihito Kiyama, Zhao Pan The dynamics of cavitation bubbles in the vicinity of a solid cylinder or fibre arise in water treatment, demolition and/or cleaning of composite materials, as well as bio-medical scenarios such as ultrasound-induced bubbles near the tubular structures in the body. When the bubble collapses near the surface, violent fluid jets may be generated. Understanding whether these jets occur and predicting their directions—departing or approaching the solid surface—is crucial for assessing their potential impact on the solid phase. However, the criteria for classifying the onset and directions of the jets created by cavitation near the curved surface of a cylinder have not been established. In this research, we present models to predict the occurrence and directions of the jet in such scenarios. The onset criteria and the direction(s) of the jets are dictated by the bubble stand-off distance and the cylinder diameter. Our models are validated by comprehensive experiments. The results not only predict the jetting behaviour but can serve as guidelines for designing and controlling the jets when a cavitation bubble collapses near a cylinder, whether for protective or destructive purposes. § INTRODUCTION Cavitation is a phase transition process from liquid to gas, which is often observed when the pressure of the liquid experiences a significant drop within a short time. The collapse and rebound of the bubble may generate shock waves, extreme heating, and high-velocity jets, resulting in damage to the solid boundaries nearby. This process is detrimental in many scenarios, such as cavitation erosion of hydraulic machinery and destruction of human tissues (e.g., bone or brain, <cit.>). On the other hand, some applications such as biomedical ultrasound and ultrasonic cavitation cleaning <cit.> take advantage of the force acting on the boundary. Hence, the cavitation dynamics near boundaries have been of interest to the community. Studies on bubble dynamics near a wall and associated damaging mechanisms can be traced back to the 1940s <cit.>, focusing on the cavitation phenomena near a flat surface (see, for example, <cit.>, and an illustration in figure <ref>(a)). When a bubble collapses near a flat solid wall, the bubble may migrate to the wall, and a directional liquid jet towards the wall is created. The concentrated momentum impacts a small area on the wall, where the induced pressure and shear are considered to be among the primary mechanisms for cleaning and/or damaging the surfaces <cit.>. Therefore, the onset of the directional jet is the key factor determining the interaction between the bubble and the boundary. The direction of the jet depends on a multitude of factors, especially the geometry of the boundaries. <cit.> experimentally studied the direction of the jet generated upon the rebound of a bubble in the corner of two solid boundaries, where the angle between them was set to 90° or less (figure <ref>(b & c)). <cit.> proposed a generalized formula that predicts the jet direction in a corner with an arbitrary opening angle α and proximity to the walls (figure <ref>(d)). They show that there exist analytic solutions that predict the jet direction for α = π/n, where n is a natural number. Several studies reported that the fluid jet formed upon the bubble collapse near a solid wall with complex geometry does not always point to the wall.
<cit.> reported the dynamics of the bubbles near trapezoidal ridges and valleys (figure. <ref>(e)) and found that the fluid jet can appear in two different directions (i.e., a departing or approaching jet to the wall). The departing jet may appear when a bubble collapses near the ridge, while a bubble near the valley can only form an approaching jet in their experiments. The configuration might share some similarity to the bubble dynamic near a curved surface (e.g., the surface of a cylinder or a sphere, see figure <ref>(f & g)). The morphology of the bubble in the neighbourhood of a curved surface has been studied <cit.>, and the curvature of the solid wall was found to be one of the primary parameters in addition to the stand-off distance <cit.>. A departing jet may appear when the bubble collapses near a convex (positive curvature) surface. However, extensive data or detailed discussions on the direction of the dual fluid jets were not reported. An interesting feature of the bubble near a convex surface is the “mushroom” bubble before collapsing, which is almost always associated with the departing jet. This observation has been reported in earlier studies (e.g., <cit.>) and recent research on cavitation near the tip of a thin cylinder also concurred with similar evidence. <cit.> reported that the mushroom-shaped collapsing bubble could happen when a cavitation bubble was initiated near the tip of a thin cylinder (figure. <ref>(h)). The fluid-gas interface resembling the `stem of the mushroom' (i.e., the interfaces close to the tip of the cylinder) contracts faster than the `mushroom cap', which results in a departing jet when the bubble fully collapses. <cit.> also suggested that an optimal length scale of cylinder thickness exists, compared to a fixed bubble diameter, so that the jet becomes the most powerful. <cit.> numerically approached this problem and revealed that the mushroom-shaped bubble near the tip of the cylinder might be linked to the reduction of the impact load on the surface. It is perhaps because the not-yet-formed departing jet carries momentum away from the solid surface. Beyond the distinct physics, this setup of bubbles near the tip of a thin cylinder can generate a high-speed departing jet (up to O(1000) m/s according to the simulations by <cit.>) and is of interest to applied research. However, the direction of the jets and the criteria of the departing jet onset were not analyzed. In the current work, we are interested in the dynamics of bubbles and jets next to the side surface of cylinders. To the best of our knowledge, this scenario has not been reported except for <cit.> studying the micro-bubbles near a fiber, as well as <cit.> where bubble behaviour near a thick cylinder (inspired by cavitation near the hull of a ship) was investigated. There are no detailed discussions on the direction of the jet(s) when the bubble collapses near a cylinder available in the current literature. In this paper, we report a regime diagram, validated by vast experimental data, that classifies the onset and the direction of the jet(s), which is dictated by two non-dimensional parameters (i.e., bubble stand-off distance and the cylinder thickness relative to the bubble diameter). Particularly, we find that when a large bubble is close to a thin cylinder, a departing jet is likely to form after collapsing and the cylinder is protected. This discovery might be insightful for some applied scenarios. 
For example, fibrous or tubular structures in the vicinity of a cavitation bubble could be free from severe damage and it is possible to design patterned surface <cit.> or fibrous structure to reduce cavitation erosion. § EXPERIMENTAL SETUP The experimental setup is shown in figure <ref>(a). The cavitation bubbles were generated by shorting adjustable direct current voltage carried by two thin wires of 0.14 mm in diameter. The sizes of the bubbles varied from 5.45 to 24.58 mm in diameter by adjusting the voltage (within the range of 60 – 120 V). The cylinders used in the experiments are made from stainless steel with a contact angle of around 60^∘. The wires are at least one order of magnitude thinner compared to the size of the cylinders and the cavitation bubbles and thus the influence of the wires is negligible. The wires and the cylinder were placed in the middle of a tank (20 × 20 × 20 cm^3) filled with degassed tap water. The tank is large enough to ensure the bubble behaviour was not affected by either the free surface or the rigid wall. The dynamics of the cavitation bubbles was filmed by a high-speed camera (FASTCAM SA-Z or NOVA S20, Photron, Tokyo, Japan) at 60,000 frames per second. A schematic of the bubble and the cylinder overlaid on a high-speed image is shown in figure <ref>(b). Two key non-dimensional parameters—the standoff distance γ and the non-dimensional cylinder diameter η—are defined as γ =d_s/D_0  and  η = D/D_0, respectively, where d_s is the distance from the spark location, which can be considered as the nominal center of the bubble, to the closest cylinder surface, D_0 is the maximum bubble diameter (marked by a blue circle), and D is the cylinder diameter (marked by a red circle). The distance between the nominal center of the bubble and the center line of the cylinder is written as d = d_s + D/2, which can be normalized by D_0 as ζ = d/D_0 = γ +η/2. This is an alternative non-dimensional length scale characterizing the distance between the bubble and the cylinder. § RESULTS We carried out comprehensive experiments on spark-induced cavitation bubbles in the vicinity of a cylinder by varying η and γ. The experiments revealed five distinct bubble behaviours for various conditions (demonstrated in figure <ref>). The dimensional and non-dimensional parameters of these typical cases are listed in Table <ref>. When the bubble is initiated far enough from the surface of a cylinder, it is expected that the bubble remains spherical when expanding and collapsing, and no jets are formed after the bubble collapses. We refer to this observation as a “no jet (NJ)” case hereafter. For example, in figure <ref>(a), a bubble is initiated by a spark (indicated by the apex of the green triangle at t=0 ms) at γ=1.44 from a cylinder (marked by the scarlet circle). The bubble grows and reaches its maximum diameter D_0 at t = 0.46 ms, collapses at t = 0.87 ms for the first time, and rebounds to the maximum of the cloud at t=1.03 ms. The direct observation of the jets (onset and directions) during collapse can be difficult, thus we use the displacement (δ_D) from the bubble onset location (marked by the green triangle in figure <ref>) to the centroid of the maximum bubble cloud of the second expansion (marked by the yellow triangle) as an indicator of the net momentum due to the bubble collapse. The positive direction of δ_D points from the centerline of the cylinder to the center of the bubble). A non-zero δ_D infers a liquid jet generated when the bubble collapses. 
The non-dimensional displacement, δ = δ_D/D_0, in figure <ref>(a) was δ = 0.00 (note that NJ is classified for |δ|<δ_0, where δ_0 = 0.03 is a small value used as the measurement threshold in this work). As the center of the bubble moves closer to the cylinder, a jet shooting toward the cylinder is generated when the bubble collapses, and we refer to this case as “approaching jet only (AJO)". As shown in figure <ref>(b) as an example, the bottom of the bubble is deformed when approaching the cylinder from a standoff distance of γ = 0.45 (e.g., see the two frames at t = 0.40 and 0.96 ms). The centroid of the rebound bubble (marked by the yellow triangle at t = 2.24 ms) moves towards the cylinder (δ=-0.12 in this case), compared to the spark location (marked by the green triangle at t = 0 ms). This footprint indicates that a liquid jet approaching the cylinder is generated during the bubble rebound. In addition, no other jet(s) were observed. The bubble cloud formed during the second expansion cycle collapses and largely covers the cylinder (t = 2.72 ms), implying that the approaching jet may carry a large momentum. This process that generates an approaching jet is similar to a bubble collapsing near a flat rigid surface. Figure <ref>(c) presents a typical case where the mushroom bubble forms and a departing jet starts to appear. In this work, we refer to this scenario as “departing jet emerging (DJE)”. The stand-off distance γ = 0.26 and the non-dimensional cylinder size η=0.09 in this case were smaller than those of the case in figure <ref>(b). In figure <ref>(c), when the bubble reaches its maximum volume (at t = 1.09 ms), the bubble partially wraps the narrow cylinder and maintains its spherical shape in general. The stem of the “mushroom" is formed by the fast-retracting liquid jets pinching the bubble near the cylinder (indicated by the orange arrowheads at t = 2.02 ms). While collapsing, the cap of the mushroom remains spherical as the gas-liquid interface (indicated by the purple arrowhead at t = 2.02 ms) is far away from the cylinder and recedes more slowly than the pinching jets. The dynamics are similar to the observations made by <cit.>. It is noteworthy that the bubble cloud in the second expansion cycle moves in two directions. The centroid of the rebound bubble moves toward the cylinder (δ= -0.05, comparing the location of the green and yellow triangles at t = 0 and 2.58 ms, respectively), similar to the case in figure <ref>(b), while there is a minor cloud bubble shooting away from the cylinder (see t = 2.58 ms, marked by the short pink arrowhead in figure <ref>(c)). This observation indicates that two jets exist after the collapse: one jet is approaching and the other one is departing from the cylinder. The departing jet, which is an emerging feature compared to the case in figure <ref>(b), however, does not yet dominate the entire jetting process. When the bubble is close to a relatively thin cylinder, the departing jet may dominate over the approaching jet, and we denote this scenario as “departing jet dominant (DJD)”. A typical case is shown in figure <ref>(d) for γ = 0.06 and η = 0.09. The bubble completely wraps the cylinder when it expands to the maximum diameter (t= 1.15 ms) and then collapses. Similar to the case shown in figure <ref>(c), the elongated rebound bubble cloud covering the cylinder while moving away from it (t=2.53 ms) indicates the existence of both approaching and departing jets.
Noting centroid of the bubble cloud (t= 2.53 ms, marked by the yellow triangle) is further away from the cylinder than the center of bubble onset (green triangle at t= 0 ms) and the corresponding displacement δ = +0.04, we argue that the jet forming at collapse is mainly departing. Figure <ref>(e) shows another “no-jet (NJ)” case. A bubble is initiated right next to a thin cylinder, where the size of the bubble is much larger than that of the cylinder (η = 0.05). The bubble behaviour in this case is similar to a free bubble. The centroid of the bubble (cloud) does not show any apparent movement upon rebound, indicating that no jet was generated. Despite the NJ outcome that is similar to the case shown in figure <ref>(a), we emphasize that the phenomenon shown in figure <ref>(e) is due to vanishing cylinder diameter (η→ 0) whereas the NJ case in figure <ref>(a) is associated with the standoff distance in the limit of γ→∞. § MECHANISMS The observations in figure <ref> imply that when a bubble collapses near a cylinder, depending on the relative position as well as the size of the bubble and the cylinder (γ and η), the cylinder may affect the liquid flow in two ways (i.e., blocking and focusing). First, the cylinder can block the liquid behind it from directly moving to the center of the bubble, while the liquid on the other side of the bubble is free to move to fill the cavity during collapsing. This causes a pressure gradient and, in turn, the collapsing bubble generates a jet approaching the cylinder <cit.>. This often happens when the cylinder is relatively large and/or the bubble is not too close to the cylinder (e.g., see the case in figure <ref>(b)). This mechanism is similar to the well-known jet formation from a bubble collapsing next to a solid flat surface. Second, when the cylinder is relatively small and the bubble is initiated close enough to the cylinder, the bubble can be significantly deformed during its growth. In figure <ref>(c), for example, the bubble partially wraps the cylinder while achieving its maximum volume (at t = 1.09 ms), leaving two regions of the gas-liquid interface having a higher curvature than other parts of the bubble. The higher curvature is corresponding to a smaller equivalent local bubble radius, which is associated with a shorter time for a local collapse. This mechanism has also been argued by <cit.> based on the Rayleigh's collapse time, T ≃ 0.915D̃_0√(ρ /p_∞), where T is the collapse time, ρ is the liquid density, p_∞ is the ambient pressure, and D̃_0 is the equivalent bubble size reflecting the local curvature of the bubble. Over the initial stage of the collapse, the advantage of the high-speed flows driven by the high curvature interface accumulates, which results in two jets pinching the bubble (see the orange arrowheads in figure <ref>(c) for instance). The two pinching jets forms the stem of the mushroom-shaped bubble before collapsing. After pinch-off, the two pinching jets merge and the momentum is focused upward, pointing away from the cylinder, which can dominate the retracting liquid near the cap of the mushroom-shaped bubble (see the purple arrowhead in figure <ref>(c)). This focusing mechanism is similar to the shaped charge effect. The competition between these two mechanisms dictates the onset and direction(s) of the jet(s), and some typical results as shown in figure <ref>. 
§ REGIME DIAGRAMS AND VALIDATION Based on the above experimental observations and analysis on the mechanisms, we hypothesize that the direction(s) of the jet(s) caused by the bubble collapsing near a cylinder are dictated by two parameters. One is the standoff distance γ = d_s/D_0 measuring the distance from the bubble to the cylinder, and the other is the non-dimensional cylinder diameter η = D/D_0. Several critical states regarding γ and η are proposed below and illustrated in figure <ref>. When a bubble wraps about half of the cylinder, the virtual circle enclosing the bubble passes the center of the cylinder (see figure <ref>(a)). We conjecture that this is a state separating the blocking and focusing mechanisms and determines if a departing jet would emerge. The corresponding geometric relationship for the circles representing the bubble and cylinder is d_s=1/2(D_0-D), and the non-dimensional form is γ = 1/2 - 1/2η. If the standoff distance is smaller than this threshold, that is to say γ < 1/2 - 1/2η, high curvature on the sufficiently deformed bubble leads to the evident focusing effect and a departing jet is expected. When the bubble is even closer to the cylinder, especially when the bubble is relatively large, the focusing effect is more pronounced than the blocking and the departing jet starts to dominate. This condition translates to d < κ_1 D_0, where κ_1 is a coefficient that can be determined by experimental data (see figure <ref>(b) for illustration). Invoking d = d_s+1/2D, the non-dimensional form of this criterion is γ < κ_1 - 1/2η. When the bubble is far enough from a sufficiently small cylinder, d_s > 1/2D_0 + κ_2 D, where κ_2 is another constant to be determined (see figure <ref>(c)), the effect of the cylinder (blocking or focusing) is negligible and thus no jet is expected. The corresponding non-dimensional form is γ > 1/2 + κ_2 η. This criterion considers the combined effects of the relative size and position of a bubble and cylinder. The asymptotic behaviours (i.e., small η→ 0 and large γ→∞) of such a setup are also of interest. When the cylinder is significantly smaller than the bubble (see figure <ref>(d) for illustration), for example, D < κ_3 D_0 ≪ D_0 with corresponding non-dimensional form η < κ_3 ≪ 1, the relative placement of the bubble and cylinder is not important anymore. Jets are not expected when the bubble collapses due to the diminishing impact of the cylinder of a small length scale. κ_3 ≪ 1 is a small constant that can be found by experiments. When the bubble is too far away from the cylinder (see figure <ref>(e)), the size of the cylinder does not matter. We expect there exists a critical value κ_4 so that if d_s > κ_4 D_0 ≫ D_0/2, no jet would be generated when the bubble collapse. The non-dimensional form of this criterion is γ > κ_4 ≫1/2. Recall (<ref>) again, the above criteria can also be expressed using ζ instead of γ. We use γ to be consistent with the current literature, however, ζ is practical to investigate some of the critical states regarding the directions of the jets. The directions of the jets after bubble collapsing can be qualitatively observed by the direction of the moving bubble cloud in the high-speed videos. For example, when a departing jet appears, the bubble cloud tends to move away from the cylinder over the collapsing-rebound cycles. This can be quantitatively identified using the value of δ = δ_D/D_0 as a measure, which is a characteristic displacement of the bubble cloud. 
If only an approaching jet appears after the first collapse, the momentum of the jet would carry the bubble cloud towards the cylinder (e.g., see figure <ref>(b)) and we expect δ < -δ_0<0. Similarly, when the departing jet dominates the approaching one, δ > +δ_0>0 (see figure <ref>(d) for instance). However, if neither the departing jet nor the approaching jet dominates the other, the direction of δ_D and the `sign' of δ are not necessarily determined. We present δ as a function of ζ in figure <ref> to show that the argument above is valid. Viewing the δ–ζ phase diagram vertically, we can see that all the AJO cases (orange upside-down triangles in figure <ref>) are located in the region of δ<-δ_0, whereas DJD cases (pink upright triangles) are in δ>+δ_0. NJ cases (black crosses) are distributed along δ = 0 (-δ_0 < δ < +δ_0 to be more specific), whereas the DJE cases (blue diamond symbols) are scattered on both sides of δ = 0. Interrogating the experimental data on the δ – ζ phase diagram (figure <ref>) horizontally is useful for verifying the aforementioned models and identifying the coefficients such as κ_1. It is visible that the jet direction evolves from approaching to departing as ζ decreases. In the region of ζ > 0.5 (yellow-shaded, to the right of the blue chain line in figure <ref>), almost only AJO cases exist. Recalling (<ref>) again, ζ= 0.5 is an alternative expression of γ=1/2-1/2η; thus, (<ref>) is validated. The departing jet emerges when ζ < 0.5 and, as ζ is reduced further, eventually becomes dominant for ζ < 0.25, which is equivalent to (<ref>) for κ_1 = 1/4. This is supported by observing that in the red-shaded region to the left of the magenta line (corresponding to ζ = 0.25), almost only DJD cases exist. The DJE cases (blue diamond symbols) are located in the transient region for 0.25<ζ < 0.5. The black symbols represent the data extracted from <cit.>, where a laser-induced micro-bubble collapsing near a micro-fibre was studied. This work did not focus on the direction of jets, and the bubble dynamics after the first collapse was not reported. Instead, the location of the bubble near collapse was recorded. Comparing the displacement from the location of the bubble onset to the center of the bubble at the first collapse, one could still infer the directions of jets. Despite being a different measure of δ than we used for our data, this qualitative classification is sufficient to tell the AJO, DJE, and DJD cases apart in <cit.>, and we see that the experimental data by <cit.> agree with our model. To validate equations (<ref>) and (<ref>), we plot the non-dimensionalized experimental data on the γ – η plane (figure <ref>). The blue chain line indicates equation (<ref>) separating the AJO and the DJE cases. The magenta line in figure <ref> is based on equation (<ref>) that separates most DJD cases from the DJE cases. Experimental data on the γ – η plane also provides quantitative insights into the NJ cases arising for different reasons. κ_2=0.5 for (<ref>) separates the NJ cases and the AJO cases for 5×10^-2≲η≲ 7 (see the orange dotted line in figure <ref>). For η < 5×10^-2, a sufficiently thin cylinder cannot affect the dynamics of the bubble and almost no jets were observed in our experiments. Thus, κ_3 = 5×10^-2 in (<ref>) allows our model to establish the criterion of a thin cylinder. For the other extreme, κ_4 = 4 for equation (<ref>) was suggested by our experiments, which is the criterion for a large stand-off distance.
We note that κ_4=4 agrees with the established data about a cavitation bubble near a flat surface (<cit.>), which can be considered as a thick cylinder with vanishing curvature (i.e., η→∞). In figure <ref>, criteria based on equations (<ref>) – (<ref>) separate the γ – η phase diagram into four regimes. Regime I (yellow shade) covers most of the AJO cases (orange upside-down triangles). In regime III (pink shade), almost only pink triangles (associated with DJD cases) appear. The transient cases for the directional jet(s) (DJE cases, marked by the blue diamond symbols in blue-shaded regime II) are in between Regimes I and III. Regime IV (different shades of green for three sub-regimes) indicates NJ cases rooted in different mechanisms. In Regime IV-1, NJ happens as a cylinder is too thin (small η). In Regime IV-3, NJ is expected as the bubble is too far away from the solid surface (large γ). Regime IV-2 can be thought of as the transient region between Regime IV-1 and IV-3, where the combined effect of η and γ must be considered and is governed by (<ref>). Again, the data extracted from <cit.> falls in our regime diagram, and provides additional validation based on the interaction of micro-bubbles. § CONCLUDING REMARKS In the current work, we carried out systematic experiments to investigate a cavitation bubble collapsing near a cylinder. We find that the onset and the direction of the jet(s) are dictated by the relative positioning and the size of the bubble and the cylinder (i.e., the standoff distance γ and the normalized cylinder diameter η). When the cylinder is too thin and/or too far away from the bubble, a bubble does not expel any visible jets. Once the bubble starts interacting with the cylinder—when γ and/or η are small enough—a jet approaching the cylinder occurs, as one might expect, which is similar to that for a bubble collapsing in the vicinity of a flat wall. When the cavitation bubble is onset closer to an even smaller cylinder within a particular range, the bubble possesses a mushroom-like collapse followed by a departing jet. Given a certain maximum bubble size, the departing jet carries the energy away from the cylinder, which might result in a reduction of the cavitation-induced damage. In this sense, the cylinder is protected by being thin and staying close to the cavitation. We proposed models to classify these phenomena including transition into four regimes on the γ – η phase diagram, which are validated by experiments. The experimental results and criteria shown in this work may be of interest to applications where cavitation bubbles interact with (thin) cylinders and fibres. For example, a direct implication based on our result is that the demolition of thin fibres and fibrous materials could be challenging, and small bubbles are more effective than bigger ones. When a cylinder near a cavitation bubble needs protection, our regime diagram provides a guideline: one may want to manage the standoff distance and bubble size to avoid the jet onset or staying in the departing jet dominant regime. § ACKNOWLEDGMENTS We thank Drs. S. Peterson and M. Worswick for lending us equipment and J. Beginner and J. Imbert-Boyd for manufacturing and technical support.
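For convenience, the regime criteria above can be collected into a single classification rule. The short Python sketch below encodes them with the experimentally suggested constants κ_1 = 0.25, κ_2 = 0.5, κ_3 = 5×10^-2 and κ_4 = 4; the function name, regime labels and example values are ours and serve only as an illustration of the regime diagram, not as the authors' code.

```python
def jet_regime(gamma, eta, k1=0.25, k2=0.5, k3=5e-2, k4=4.0):
    """Classify the expected jetting behaviour of a bubble collapsing near a
    cylinder from the standoff distance gamma = d_s/D_0 and the normalized
    cylinder diameter eta = D/D_0, following the criteria in the text."""
    zeta = gamma + eta / 2.0  # normalized distance to the cylinder axis, d/D_0
    if eta < k3 or gamma > k4 or gamma > 0.5 + k2 * eta:
        return "NJ"   # cylinder too thin, bubble too far, or combined effect
    if zeta < k1:
        return "DJD"  # departing jet dominant
    if zeta < 0.5:
        return "DJE"  # departing jet emerging
    return "AJO"      # approaching jet only

# toy examples of (gamma, eta) pairs
for g, e in [(1.5, 0.5), (0.45, 0.8), (0.26, 0.09), (0.06, 0.09), (2.0, 0.01)]:
    print(g, e, jet_regime(g, e))
```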
http://arxiv.org/abs/2307.04505v1
20230710115649
Analysis of the possible satellite contamination in LAMOST-MRS spectra
[ "Mikhail Kovalev", "Olivier R. Hainaut", "Xuefei Chen", "Zhanwen Han" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.IM" ]
We present the detection of false-positive double-lined spectroscopic binary (SB2) candidates using medium-resolution survey (MRS) spectra from one time-domain field of LAMOST data release 10 (DR10). The secondary component in all these binaries has a near-zero radial velocity and solar-like spectral lines. Most likely this is light from semi-transparent clouds illuminated by the full Moon. However, we also suspect that part of this contamination may be sunlight reflected from the surface of low-orbit artificial satellites launched at the beginning of 2022. We found several possible contaminant candidates using archival orbital data. We propose measures to reduce the risk of such contamination in future observations, and methods to find it in archived ones. binaries: spectroscopic – techniques: spectroscopic § INTRODUCTION Since the launch of Sputnik-1 in 1957, artificial satellites have been visible in the night sky. Such observations can be very useful for Earth-related science (e.g. determination of the geopotential), although for astrophysics satellites can be an obstacle. The problem has become more serious with the active population of low Earth orbits, which now host many thousands of telecommunication satellites forming huge constellations. In the most pessimistic scenario, such intensive commercialisation of space could mean the end of all ground-based astronomy. In spectroscopic observations, the flyby of an artificial satellite shows up as a fake spectroscopic binary, with the contamination visible as a solar-like spectral component. For low-orbit satellites, the line-of-sight velocity is near zero when the satellite rises close to culmination, but the transverse velocity is very high, so the contamination lasts much less than a second for a typical field of view. Thus, for a typical bright astrophysical target the contamination is usually negligible, and only relatively faint objects are affected <cit.>. <cit.> identified many double-lined spectroscopic binary (SB2) candidates in LAMOST (Large Sky Area Multi-Object fiber Spectroscopic Telescope) MRS <cit.>. However, some of them can be false positives, which can be identified by taking advantage of the multiple observations of the time-domain sub-survey. Here we present results for one particular field, where these false-positive SB2s may be caused by satellite contamination. The paper is organised as follows: in Sections <ref> and <ref> we describe the observations and methods. Section <ref> presents our results, which we discuss in Section <ref>. In Section <ref> we summarise the paper and draw conclusions. § OBSERVATIONS LAMOST is a 4-metre quasi-meridian reflective Schmidt telescope with 4000 fibres installed on its 5° field-of-view focal plane. These configurations allow it to observe spectra of up to 4000 celestial objects simultaneously (<cit.>).
For the analysis in this paper, we downloaded all available time-domain DR10 spectra from <www.lamost.org/dr10/v0/> observed within the field “TD164021N701415T01". We use the spectra taken at a resolving power of R=λ/ Δλ∼ 7 500. Each spectrum is divided on two arms: blue from 4950 Å to 5350 Å and red from 6300 Å to 6800 Å. During the reduction, heliocentric radial velocity corrections in range of _h=-5,-2 were applied to all spectra. We convert the wavelength scale in the observed spectra from vacuum to air using <cit.>. Observations are carried out in MJD=59676.8-59692.8 days, spanning an interval of 16 days. We selected only spectra stacked for whole night[ Each epoch contains seven short 20 min individual exposures, which were stacked to increase ] and apply a cut on the signal-to-noise (>=20). In total we have 5625 spectra from 1323 targets. The number of epochs varies from 2 to 4 per target, as very noisy epochs were not selected for some targets. § METHODS We use the same spectroscopic models and method as <cit.> to analyse individual LAMOST-MRS spectra, see very brief description below. The normalised binary model spectrum is generated as a sum of the two Doppler-shifted normalised single-star spectral models f_λ,i[they are designed as a good representation of the LAMOST-MRS spectra], scaled according to the difference in luminosity, which is a function of the and stellar size. We assume both components to be spherical and use the following equation: f_λ, binary=f_λ,2 + k_λf_λ,1/1+k_λ,  k_λ= B_λ(_,1) R^2_1/B_λ(_,2) R^2_2 where k_λ is the luminosity ratio per wavelength unit, B_λ is the black-body radiation (Plank function), is the effective temperature and R is the stellar radius. Throughout the paper we always assume the primary star to be brighter one. In comparison with <cit.> we directly use the ratio of stellar radii q as a fitting parameter, instead of the mass ratio with difference of the surface gravity . Each spectrum is analysed with the single and binary spectral model, thus we can calculate the difference in reduced χ^2 between two solutions and the improvement factor (), computed using Equation <ref> similar to <cit.>. This improvement factor estimates the absolute value difference between two fits and weights it by the difference between the two solutions. f_ imp=∑[ (|f_λ, single-f_λ|-|f_λ, binary-f_λ|)/σ_λ] /∑[ |f_λ, single-f_λ, binary|/σ_λ] , where f_λ and σ_λ are the observed flux and corresponding uncertainty, f_λ, single and f_λ, binary are the best-fit single-star and binary model spectra, respectively, and the sum is over all wavelength pixels. § RESULTS We carefully checked the quality of the spectral fits through visual inspection of the plots. Several spectra were selected as SB2 candidates using criteria formulated in <cit.>, although this selection was not complete as these criteria prioritise purity. This study is focused on possible satellite contamination, so we introduce a new selection of the fitted parameters, like and improvement factor, see Table <ref>. Out of four epochs, one with MJD=59685.8 d has significantly more selected candidates, so we explored it more carefully. Thus we keep only stars that appear as a regular single star in all epoch except MJD=59685.8 d. In total we left with 37 SB2 candidates, with a secondary component at _2∼ 0. They are marked as open triangles on Figure <ref>. We show the most clear example J162843.74+680439.7 (G=14.55 mag) with very large ∼410 in Fig. <ref>. 
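As an illustration of the model described above, a minimal Python sketch of the two-component spectrum and of the improvement factor is given below. It follows the two equations directly, with q = R_1/R_2 the ratio of stellar radii and the Planck function supplying the wavelength-dependent luminosity ratio; the placeholder flat spectra in the demo stand in for the dedicated LAMOST-MRS single-star models used in the actual pipeline.

import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(wave_aa, teff):
    """Planck function B_lambda for wavelengths given in Angstroms."""
    lam = wave_aa * 1e-10
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * teff))

def binary_model(wave, f1_norm, f2_norm, teff1, teff2, q):
    """Normalised binary spectrum from two normalised single-star models.

    q = R1 / R2 is the ratio of stellar radii (the fitting parameter in the
    text); k_lambda = B(teff1) R1^2 / (B(teff2) R2^2) is the luminosity ratio.
    """
    k = planck(wave, teff1) * q**2 / planck(wave, teff2)
    return (f2_norm + k * f1_norm) / (1.0 + k)

def improvement_factor(f_obs, sigma, f_single, f_binary):
    """f_imp: how much the binary fit improves on the single-star fit."""
    num = np.sum((np.abs(f_single - f_obs) - np.abs(f_binary - f_obs)) / sigma)
    den = np.sum(np.abs(f_single - f_binary) / sigma)
    return num / den

if __name__ == "__main__":
    wave = np.linspace(4950.0, 5350.0, 3000)
    f1 = np.ones_like(wave)   # featureless placeholders for the normalised
    f2 = np.ones_like(wave)   # single-star model spectra
    fb = binary_model(wave, f1, f2, teff1=6000.0, teff2=5000.0, q=1.0)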
In the top panel we show fits of the co-added spectrum by the single-star and binary model. The single-star model obviously failed to fit double-lined spectrum, while binary models fit the primary component (67 per cent) at _1=-418.72 and catch another additional spectrum component (33 per cent) at _2=-7.62. In the middle panel we show fitting results for a mock spectrum of J162843.74+680439.7 contaminated by solar spectrum of V=16 mag, where we applied Gaussian noise according to . As you can see both panels are very similar. In the bottom panel we show all seven short 20 min exposures spectra before coaddition. It is clear that contamination happened at UTC times t=19:59 and t=20:21 as these two exposures have an additional spectral component, which has a brightness comparable with the main target. When all exposures were co-added we got the double-lined spectrum with significantly smaller noise. In the other candidates contamination is not that clearly visible as they have smaller . The majority of the candidates have G∼14.5 and _ red<50 in the co-added spectrum, thus probably for brighter targets contamination was negligible and comparable to the noise level. § POSSIBLE SOURCE OF CONTAMINATION Highly-likely such contamination was caused by the clouds, illuminated by the full Moon, which significantly increased sky background in the spectra. Unfortunately, sky subtraction failed to completely remove it during the spectral reduction, see the last two individual 20-minute exposures in the bottom panel of Fig. <ref>. This can explain such solar-like spectral component very well as sky becomes brighter as sun is rising. At the moment of the end of observation it's height was around -12. This also supported by the fact that contamination is visible only in relatively faint targets. Nevertheless we decided to test other possible sources of contamination. We checked if this contamination can be due to a solar system object. We used Minor Planet Center checker[<https://minorplanetcenter.net/cgi-bin/mpcheck.cgi>] to check 1284083 known objects and found none of them brighter than 18 mag in our field. With slightly larger search radii we found comet C/2019 K7 (Smith) with coordinates α=16:20:45.9, δ=+67^∘ 56' 19", although it is unlikely to be our contaminant, because otherwise it will be visible in all exposures, as it moves very slowly. In order to investigate whether this contamination could have been caused by a satellite passing through the field of view, we verified that, at the time of the observations, low-Earth orbit (LEO) satellites were illuminated by the Sun. This was tested using the formalism described in <cit.> for generic LEOs as well as for Starlink and OneWeb satellites. To evaluate the number of fibres typically affected by a satellite trail, a million trails, randomly positioned, were shot through a realistic LAMOST field of view. For the considered field, 1324 fibres had object of interest (with suitable S/N>20) over the 4000 fibres of the instrument, so 1324 fibres were considered in this experiment. A trail is considered to affect a fibre if the impact distance is less than 3”, which accounts for the radius of the fibre (whose diametre is 3.3”) and the width of the trail, which is set to 2” accounting for the seeing and the marginally resolved satellite. For each trail, the number of fibres affected was counted. Figure <ref> illustrates this for 100 trails on the left panel, and displays a histogram of the number of fibres affected on the left panel. 
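The mock test described above can be reproduced schematically as follows. The toy absorption lines and the random seed are placeholders (the real mock mixes the best-fitting model of J162843.74+680439.7 with a solar spectral template), but the magnitude-to-flux-ratio scaling and the Gaussian noise drawn from the quoted S/N are the essential ingredients; the weighted mixing is a common approximation for continuum-normalised spectra.

import numpy as np

rng = np.random.default_rng(42)

def mock_contaminated_spectrum(f_target, g_target, f_solar, g_contam, snr):
    """Mix a solar-like spectrum into a target spectrum and add noise.

    f_target, f_solar : normalised spectra on a common wavelength grid
    g_target, g_contam: apparent magnitudes of the target and the contaminant
    snr               : signal-to-noise ratio used to draw Gaussian noise
    """
    ratio = 10.0 ** (-0.4 * (g_contam - g_target))   # contaminant/target flux
    f_mix = (f_target + ratio * f_solar) / (1.0 + ratio)
    return f_mix + rng.normal(0.0, 1.0 / snr, size=f_mix.shape)

# Example with the numbers quoted in the text: a G = 14.55 target
# contaminated by a V ~ 16 solar-like source, observed at S/N ~ 20.
wave = np.linspace(4950.0, 5350.0, 2000)
f_target = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 5167.0) / 0.5) ** 2)  # toy line
f_solar = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 5172.7) / 0.5) ** 2)   # toy line
mock = mock_contaminated_spectrum(f_target, 14.55, f_solar, 16.0, snr=20.0)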
This method is the same that was used to evaluate (Michevat priv.comm.) the impact on 4MOST, a similar spectrograph built at ESO <cit.>. About 64% of the trails hit no fibre and while 0.01% of the satellites hit 7 fibres, a trail will hit 0.44 fibres on average. As 37 fibres were contaminated, this suggests up to ∼80 satellites crossed the 5 field of view during the exposure. These numbers should be taken with a fairly large uncertainty, as the seeing and the width of the trail will cause the number of fibres affected to be larger, but the contamination for a larger impact distance will be smaller. To estimate the visual magnitude of the satellite causing the contamination, one must estimate the level of contamination of the spectra, and take into account the effect of motion of the satellite. With typical angular velocities of the order of 1 s^-1 at zenith, a LEO satellite spends only a few milliseconds t_ eff crossing the fibre during the total exposure time t_ exp = 1200 s. The apparent magnitude m of the object can be estimated from its effective magnitude m_ eff measured on the spectrum, m = m_ eff + 2.5 log_10t_ eff/t_ exp = m_ eff + 2.5 log_10r_ fibre/ω_ sat t_ exp , where r_ fibre = 3.3” is the angular diameter of a fibre on the sky. Using the method in <cit.>, the angular velocity of the satellite in the direction of observations was estimated for Starlink (0.66 s^-1) and OneWeb (0.30 s^-1) satellites. The effective magnitude can be estimated from the contamination. The S/N of the G ∼ 14.5 was up to 50 in the co-added spectrum, corresponding to ∼ 20 in the individual 1200s exposures. To be noticeable, the contamination must have S/N > 5 (which corresponds to G∼16), and to be detectable at all, S/N>2 (G∼ 17). Combining these pieces of information, Eq. <ref> gives visual magnitudes ∼1–2. Fainter satellites will not be detected. As of the time of the observations, about 4500 satellites were present on LEOs (roughly 2000 pre-existing, and 2002 Starlink[ Jonathan McDowell’s Starlink web page <https://planet4589.org/space/con/star/stats.html> ] and 426 OneWeb[ Jonathan McDowell’s OneWeb web page <https://planet4589.org/space/con/ow/stats.html> ] from recently launched mega-constellations). Using the method of <cit.>, this results in ∼ 15 satellite trails per exposure during long twilight, as illustrated in Fig. <ref>. This number is much too low to explain the observed contamination. Furthermore, the magnitudes of the satellites differs widely (some of them, such as HST or ISS can be as bright as V -5 to 2), but the bulk of the Starlink satellites are in the 5.6–7.2 range <cit.> and OneWeb in the 7–9 range <cit.>, ie well below the reach of the spectrograph. We also checked Satellite Track Predictor (STP)[<http://www.astro.amu.edu.pl/STP>] for time interval UTC=19:30, 20:30 and found that 12 bright satellites with V≤6 mag crossed our field. We show their tracks in Fig. <ref>. STP reports that errors can be up to 0.1-0.5 for sky-positions and σ_V=2 mag for brightness, so some of these satellites (like Starlink and Cosmos with reported V=4 mag) can be bright enough to cause contamination. In the week after their launch, the satellites appear as a train, or like a string of pearls while they slowly disperse in elongation along their very low orbit. During that phase, they appear much brighter than when on their operational orbit, because of the shorter distance to the observer, and because the configuration and attitude of the satellites are different than when in operations. 
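Equation (<ref>) relating the effective magnitude measured in the spectrum to the apparent magnitude of the crossing satellite is simple enough to evaluate directly. The sketch below uses the fibre angular size (3.3 arcsec), the exposure time (1200 s) and the Starlink- and OneWeb-like angular velocities quoted above; it brackets the ∼1–2 mag estimate given in the text.

import numpy as np

def apparent_from_effective(m_eff, omega_deg_s, t_exp_s=1200.0,
                            r_fibre_arcsec=3.3):
    """m = m_eff + 2.5 log10( r_fibre / (omega_sat * t_exp) ).

    r_fibre is converted to degrees so it is consistent with omega in deg/s;
    the satellite spends only r_fibre/omega seconds inside the fibre during
    the t_exp exposure, which makes the apparent magnitude much brighter
    than the effective magnitude recorded in the spectrum.
    """
    r_fibre_deg = r_fibre_arcsec / 3600.0
    return m_eff + 2.5 * np.log10(r_fibre_deg / (omega_deg_s * t_exp_s))

# Effective magnitudes of 16-17 with Starlink-like (0.66 deg/s) and
# OneWeb-like (0.30 deg/s) angular velocities map to apparent magnitudes
# of roughly mag 1-3.
for m_eff in (16.0, 17.0):
    for omega in (0.66, 0.30):
        print(m_eff, omega, round(apparent_from_effective(m_eff, omega), 1))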
In the days of the earliest Starlink launches, they could be as bright as mag ∼ 0. Since then, the operator has modified the attitude of the satellites so that they are much dimmer, in the 1–3 range most of the time[Although very bright (up to V∼ 0 mag) and short (∼1 sec) flashes are possible. First author saw them several times.]. A batch of satellites launched with one rocket consist typically of 60 satellites. In order to test whether such a train of recently launched satellites could have crossed our field of view, Two Line Elements (TLEs), the orbital elements of the satellites, were retrieved for the date of the observations using CelesTrack [<https://celestrak.org/NORAD/archives/request.php>]. Using the skyfield[<https://rhodesmill.org/skyfield/>] package, the visibility of the satellite was verified, from LAMOST for the time of the observations. It appears that a series of Starlink satellites from the 2022 Feb. 21 launch^<ref> crossed the sky during the exposure. While their tracks, as computed by us, are in the general vicinity of our observation, they does not cross the field of view. However, the TLEs are notoriously not very accurate –especially at a phase when the operator frequently adjust the orbit, and our method to compute the satellite position is not verified. At that time, the satellites were at an altitude of 350km, with a magnitude in the 1–2 range. The apparent angular velocity of these satellites was ω∼ 1.0 s^-1, which leads to effective magnitudes m_ eff∼ 16–17, i.e. in the range of the contamination. Therefore, we suggest that the observations can be theoretically, "photobombed" by a train of Starlink satellites on their low, parking orbit, although contamination by clouds is more likely. In the future, the number of satellites in mega-constellations is likely to grow significantly. Assuming 65 000 satellites (as in <cit.>), this would result in a typical 1200s exposure being crossed by about 200 satellite trails, potentially resulting in ∼ 260 fibres contaminated per exposure taken during long twilight (3% of the fibres). However, the limiting magnitude of the LAMOST-MRS instrument for 1200s exposure is V∼ 15 (5σ). Converting the apparent magnitudes of the satellites (using the crude photometric model described in <cit.>) into effective magnitudes, these will be in the 18 to 23 range (depending on the satellite's orbit and altitude and azimuth), well below the limit of LAMOST-MRS, even accounting for a possible 1 mag error on the photometric model. As usual, it is important to note that once the sun dips far enough under the horizon, most of the satellites fall in the shadow of the Earth. This problem is therefore only critical during the first and last hours of the night. While the satellites on operational orbits will not be a major concern for LAMOST, the compact trains of very low satellites can affect the observations. The probability of such a train crossing a telescope field of view is low, but considering that constellations will need to be regularly replenished, new satellites will need to be continuously launched. Considering 100 000 satellites with a life-time of 5 years, this would result in about one launch per day (each with 60 satellites). If the satellites stay one month in low orbit, this would result in about 60 trains in orbit, at various stage of dispersion. It is therefore important that the satellite operators also keep the brightness of the satellites to the absolute minimum possible during their stay on transit orbit. 
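A sketch of this check with the skyfield package is shown below. The TLE file name, the 10-degree altitude cut, the time sampling and the observatory coordinates (approximate values for the Xinglong site) are illustrative assumptions on our part; only the use of archival TLEs, the observation epoch near MJD 59685.8 and the sunlit test follow the procedure described in the text.

from skyfield.api import load, wgs84

ts = load.timescale()
eph = load('de421.bsp')                     # solar-system ephemeris for the sunlit test
observer = wgs84.latlon(40.396, 117.577, elevation_m=960)  # approx. Xinglong site

# MJD 59685.8 corresponds roughly to 2022-04-16; the two contaminated
# exposures started around 19:59 and 20:21 UTC, so sample a 20-min window.
t = ts.utc(2022, 4, 16, 19, 59, range(0, 1200, 60))

satellites = load.tle_file('starlink_2022-04-16.tle')  # archival TLEs (assumed file name)
for sat in satellites:
    topocentric = (sat - observer).at(t)
    alt, az, distance = topocentric.altaz()
    sunlit = sat.at(t).is_sunlit(eph)
    visible = (alt.degrees > 10.0) & sunlit
    if visible.any():
        print(sat.name, 'potentially visible; max altitude',
              round(alt.degrees.max(), 1), 'deg')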
The changes of satellite attitude implemented by Starlink illustrate the improvements than can be made. § CONCLUSIONS We successfully detected false-positive SB2 candidates in the LAMOST-MRS spectra. The secondary component in all these binaries have near zero radial velocity and solar-like spectral lines. Highly likely this is light from the semi-transparent clouds illuminated by the full Moon. However we also suspect that partially this contamination can be a solar light reflected from the surface of low-orbital artificial satellites launched in the beginning of 2022. We found several possible contaminant candidates using archival orbital data from CelesTrack and STP web service. Unfortunately results presented in this paper cannot definitely confirm satellites as contaminant, as other sources like clouds and problem with sky subtraction will have similar effect on the spectral observations. To identify and remove such contamination we recommend analysis of all spectra taken during twilight, assuming a binary spectrum model, where one component has solar-like spectrum with radial velocity in the range =-10,+10. Also the short exposures should be carefully checked prior the co-addition to avoid the production of false double-lined spectra with contaminated exposures. During the scheduling of the observation one should consider possibility of the contamination by the bright "train" of newly launched satellites and avoid observations near the twilight if possible. Also we recommend to take additional image of the observed field, to reliably identify possible satellite tracks. § ACKNOWLEDGEMENTS MK is grateful to his parents, Yuri Kovalev and Yulia Kovaleva, for their full support in making this research possible. We thank Hans Bähr for his careful proof-reading of the manuscript. We thank Zhang Haotong and Luo A-Li for useful discussions. We thank Dr. Nikolay Emelyanov for providing the link to Minor Planet Center Checker. We thank Monika Kamińska for providing sky positions for satellites from STP. We are grateful to Dr. T.S. Kelso for development and maintaining of the CelesTrack. This work is supported by National Key R&D Program of China (Grant No. 2021YFA1600401/3), and by the Natural Science Foundation of China (Nos. 12090040/3, 12125303, 11733008). Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. The authors gratefully acknowledge the “PHOENIX Supercomputing Platform” jointly operated by the Binary Population Synthesis Group and the Stellar Astrophysics Group at Yunnan Observatories, Chinese Academy of Sciences. This research has made use of NASA’s Astrophysics Data System. It also made use of TOPCAT, an interactive graphical viewer and editor for tabular data <cit.>. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. LAMOST-MRS spectra are downloaded from <www.lamost.org>. 
§ REFERENCES
Bassa C. G., Hainaut O. R., Galadí-Enríquez D., 2022, A&A, 657, A75, doi:10.1051/0004-6361/202142101
Cui X.-Q., et al., 2012, Research in Astronomy and Astrophysics, 12, 1197, doi:10.1088/1674-4527/12/9/003
Czesla S., Schröter S., Schneider C. P., Huber K. F., Pfeifer F., Andreasen D. T., Zechmeister M., 2019, PyA: Python astronomy-related packages, ascl:1906.010
El-Badry K., et al., 2018, MNRAS, 476, 528, doi:10.1093/mnras/sty240
Kovalev M., Li Z., Zhang X., Li J., Chen X., Han Z., 2022a, MNRAS, 513, 4295, doi:10.1093/mnras/stac1177
Kovalev M., Chen X., Han Z., 2022b, MNRAS, 517, 356, doi:10.1093/mnras/stac2513
Liu C., et al., 2020, arXiv e-prints, arXiv:2005.07210
Mallama A., 2020, arXiv e-prints, arXiv:2012.05100
Mallama A., 2021a, arXiv e-prints, arXiv:2101.00374
Mallama A., 2021b, arXiv e-prints, arXiv:2111.09735
Taylor M. B., 2005, in Shopbell P., Britton M., Ebert R., eds, ASP Conf. Ser. Vol. 347, Astronomical Data Analysis Software and Systems XIV, p. 29
Zhao G., Zhao Y.-H., Chu Y.-Q., Jing Y.-P., Deng L.-C., 2012, Research in Astronomy and Astrophysics, 12, 723, doi:10.1088/1674-4527/12/7/002
de Jong R. S., et al., 2019, The Messenger, 175, 3, doi:10.18727/0722-6691/5117
http://arxiv.org/abs/2307.05038v1
20230711064027
Disentangled Contrastive Image Translation for Nighttime Surveillance
[ "Guanzhou Lan", "Bin Zhao", "Xuelong Li" ]
cs.CV
[ "cs.CV" ]
IEEEexample:BSTcontrol IEEE TRANSACTIONS ON Image Processing Roberg et al.: Disentangled Contrastive Image Translation for Nighttime Surveillance Disentangled Contrastive Image Translation for Nighttime Surveillance Guanzhou Lan, Bin Zhao, Xuelong Li, Fellow, IEEE August 12, 2023 ================================================================================= Nighttime surveillance suffers from degradation due to poor illumination and arduous human annotations. It is challengable and remains a security risk at night. Existing methods rely on multi-spectral images to perceive objects in the dark, which are troubled by low resolution and color absence. We argue that the ultimate solution for nighttime surveillance is night-to-day translation, or Night2Day, which aims to translate a surveillance scene from nighttime to the daytime while maintaining semantic consistency. To achieve this, this paper presents a Disentangled Contrastive (DiCo) learning method. Specifically, to address the poor and complex illumination in the nighttime scenes, we propose a learnable physical prior, i.e., the color invariant, which provides a stable perception of a highly dynamic night environment and can be incorporated into the learning pipeline of neural networks. Targeting the surveillance scenes, we develop a disentangled representation, which is an auxiliary pretext task that separates surveillance scenes into the foreground and background with contrastive learning. Such a strategy can extract the semantics without supervision and boost our model to achieve instance-aware translation. Finally, we incorporate all the modules above into generative adversarial networks and achieve high-fidelity translation. This paper also contributes a new surveillance dataset called NightSuR. It includes six scenes to support the study on nighttime surveillance. This dataset collects nighttime images with different properties of nighttime environments, such as flare and extreme darkness. Extensive experiments demonstrate that our method outperforms existing works significantly. The dataset and source code will be released on GitHub soon. nighttime vision, image translation, disentanged representation, contrastive learning § INTRODUCTION Low-light environment prompts human to use rod photoreceptors rather than cone photoreceptors operating in daytime <cit.>. It results in poor spatial resolution, diminished contrast sensitivity, and absent color vision at night. Similar problems appear in imaging systems, shown as low lightness, contrast, and resolution with high ISO noise <cit.>. Those issues challenge the visual perception and cognition of surveillance in the dark, which results in security threats in the night environment. To overcome such difficulty, plenty of efforts have been made. One option is multi-spectral image analysis, such as the fusion of visible and infrared images<cit.>. Nevertheless, those techniques depend on adequate multisource information retrieval and result in resolution and color destruction frequently. Another option is to improve the clarity and details of images using low-light image enhancement <cit.>. However, it cannot deal with pixel-wise information loss and domain invariance from nighttime to daytime images. We argue that the ultimate solution for nighttime surveillance is night-to-day translation, or Night2Day, which aims to improve night scene perceptual quality by transforming an image from night to day while maintaining semantic consistency of image content in a generative way. 
It consequently restores the pixel-wise information loss and eliminates the domain gap between nighttime and daytime images, which not only comforts human eyes but also benefits the subsequent tasks, such as object detection and instance segmentation. It is challenging for the following significant properties: (1) Nighttime images are more difficult to represent for the widespread distribution shift in nighttime images. e.g., the extreme dimming light and flare both exist in the night domain and even in a single image. (2) The supervision is not accessible because it is impossible to obtain the paired nighttime and daytime images in the real world. (3) Weak supervision from human annotations is also not accessible. It is usually out of human cognition at night, especially in surveillance cases. These issues influence learning target distribution or keep the original semantics less or more. In this paper, we present an unsupervised method for real-world nighttime surveillance, dubbed as Disentangled Contrastive learning (DiCo).<ref> shows the overview of DiCo. DiCo tries to deal with the issues discussed above with two intuitions: (1) Employing the physical prior to reduce the distribution shift in the night; (2) Fully utilizing the fixed scene prior in surveillance scenes as the supervision. Firstly, derived from the Kubelka-Munk theory, a learnable color invariant is designed to access stable perception in the highly dynamic nighttime environment. Secondly, a disentanglement module is developed to separate images into foreground and background. This module offers a disentangled representation of nighttime and daytime images with only the reference to the images in the daytime. Thirdly, we design a disentangled contrastive learning strategy for the foreground and background to keep the semantic consistency. Finally, all the modules above are incorporated into the generative adversarial network to learn the target distribution. Moreover, we also contribute a dataset with various surveillance scenes for Night2Day. Our main contributions lie in the following three folds: 1. To present the highly dynamic nighttime environment, we propose a stable perception strategy with the proposed learnable color invariant. It reduces the distribution shift of the model caused by the extreme light conditions in nighttime surveillance. 2. Targeting the surveillance scenes, we propose a novel disentangled representation for foreground and background features referring to the daytime domain. In this case, better visual effects and instance-aware image translation are achieved in unsupervised conditions. 3. We also publish a new dataset for nighttime surveillance scenes called NightSuR. This dataset includes six scenes and 6574 images with extreme dimming and flare cases. § RELATED WORK As aforementioned, nighttime surveillance can be achieved by low-light image enhancement, image-to-image translation, and Night2Day. In the following, the three tasks are reviewed and discussed in detail. Low-light image enhancement is a traditional low-level task in the night scene. Guo et al. propose a traditional method to estimate the illumination map by exposing the structure prior <cit.>. Li et al. propose a robust Retinex model and develop an optimization approach <cit.>. Recently, as deep learning develops, more low-light image enhancement solutions are designed based on deep neural networks. Ren et al. propose a novel spatially variant recurrent neural network to compose an encoder-decoder model <cit.>. 
Furthermore, Xu et al. apply the attention module to enhance the low-light image in decomposition <cit.>. Guo et al. expand the generalization of the deep learning model on low light image enhancement <cit.>. Besides, EnlightenGAN employs a pretrained VGG and adversarial training which achieves excellent results <cit.>. By employing the Retinex model, URetinex-Net achieves new state-of-the-art enhancement<cit.>. Another trend is: researchers prefer to distinguish between nighttime enhancement and low-light enhancement due to the complex light condition of nighttime environments <cit.>. The image-to-image translation is proposed to synthesize new filtered target images <cit.>. Since generative adversarial networks show competitive performance in image generation <cit.>, lots of image-to-image translation models have been proposed and show impressive results <cit.>. As the diffusion model developed, a new research paradigm for image-to-image translation is also proposed. One of the representative works is Palette <cit.>. Particularly, for unpaired image translation, cycle-consistency training <cit.> has become a strong baseline and has been applied in extensive later works <cit.>. Recently, one-side translation without cycle-consistency training raises the attention of researchers. Park et al. <cit.> first introduce contrastive learning to image-to-image translation and get excellent performance. <cit.> apply contrastive learning into a conditional GAN. <cit.> perform patch-wise contrastive learning on self-similarity maps and present a strong ability on controlling the semantic consistency of generated images. <cit.> employ a generative strategy to mine hard negative examples during contrastive learning. Improving the contrastive learning strategy, a batch of methods reach new state-of-the-art in unpaired image translation<cit.>. Night2Day towards driving scenes has received much attention. <cit.> first propose ToDayGAN for the Night2Day task. <cit.> propose ForkGAN to improve visual perceptual quality on a rainy night. Some works also discuss Night2Day briefly. Inspired by InstaGAN <cit.>, some following works change Night2Day into supervised settings to improve the results in driving scenes. <cit.> contribute a dataset in 4 domains with sufficient annotations, including daytime and nighttime images, and develop a supervised method to translate images across domains. Following INIT <cit.>, many instance-aware translation works have been proposed for driving scenes in supervised settings <cit.>. Given nighttime image x_𝒩∈𝒩 and daytime image x_𝒟∈𝒟, Night2Day aims to translate images from nighttime to daytime while maintaining content semantic consistency. It needs to construct a map ℱ with parameters 𝒲, which can be modeled as: ℱ_𝒲: 𝒩→𝒟. However, Night2Day is a natural unsupervised task, which brings more difficulties in obtaining the parameters 𝒲. Employing an unpaired daytime image x_𝒟∈𝒟 as the supervision inevitably leads to semantic flipped. Employing the input nighttime image x_𝒩 to keep the content semantic is also a popular solution. However, it will fail in converting the domain information. Referring to the assumption of previous works, two kinds of features are widely discussed in image-to-image translation: the domain-independent and domain-specific features. The domain-independent features are preserved during translation while the domain-specific features are converted. Intuiationallty, the different domain provide a natural cluster from a high dimensonal space. 
We present the domain-independent feature as the center domian cluster, which is modeled as: S_𝒟 = ∫_x_𝒟∈𝒟ζ(x_𝒟)p(x_𝒟) dx_𝒟 = 𝔼_x_D ∼ p(𝒟) ζ(x_𝒟) ≈1/|𝒟|∑_x_𝒟∈𝒟ζ(x_𝒟) , Disentangling these two features can provide specific optimization targets to obtain the parameters 𝒲. The problems are: how to model the domain-specific features and merge the two features in generating the natural daytime image. For the first problem, taking the daytime images domain as an example, we model the domain-specific feature as follows. Domain-specific features are bound with the domain information, which presents invariance properties of a specific domain. Thus, assuming feature extractor ζ can represent the tangling image features, the domain-specific features of the daytime domain can be modeled as: S_𝒟 = ∫_x_𝒟∈𝒟ζ(x_𝒟)p(x_𝒟) dx_𝒟 = 𝔼_x_D ∼ p(𝒟) ζ(x_𝒟) ≈1/|𝒟|∑_x_𝒟∈𝒟ζ(x_𝒟) , let S^' be: S_𝒟^' = ζ(1/|𝒟|∑_x_𝒟∈𝒟 x_𝒟), S can be estimate through: S_𝒟 = S_𝒟^' + c, since ∇_x S_𝒟 = ∇_x S_𝒟^' , where c is constant. Without loss of generality, we represent the domain specific features as S_𝒟: S_𝒟≈ζ(1/|𝒟|∑_x_𝒟∈𝒟 x_𝒟), S_𝒟 represents the overall features in the daytime domain, for any daytime images, the domain specific features are defined by the features closer to S_𝒟 and the domain independent features For a surveillance camera with a fixed field of view, such representation has explicit semantic meaning: the background of a specific scene in the daytime. It inspires us to disentangle the domain-specific features spatially. Specifically, for any x_𝒟∈𝒟, disentangling on spatial aims to estimate a mask M in pixel space: ζ^s(x_𝒟) = M ⊙ζ(x_𝒟), ζ^i(x_𝒟) = (1-M) ⊙ζ(x_𝒟), where ζ^s denotes the domain-specific features and ζ^i denotes the domain-independent features. For the second problem, we provide implicit modeling. With the discussion above, any generated daytime image x_𝒩→ 𝒟 can be disentangled into the ζ^s(x_𝒩→ 𝒟) and ζ^i(x_𝒩→ 𝒟). Then we can model the merging operator in the F_w with the following optimization target: min d_s ( M ⊙ S_𝒟, ζ^s(x_𝒩→ 𝒟)) + d_i (ζ^i(x_𝒩), ζ^i(x_𝒩→ 𝒟)), where d_s and d_i are distance measurements. d_s can employ the L1/L2 norm to get direct supervision from S_𝒟. d_i still needs to carefully design to prevent the domination of x_𝒩. In <ref>, we will explain how to disentangle the daytime image spatially and design d_i to model domain-independent features. In <ref>, the experiments on driving datasets with a moving field of view will demonstrate the generality of our design. § DISENTANGLED CONTRASTIVE NETWORKS Given nighttime image x_𝒩∈𝒩 and daytime image x_𝒟∈𝒟, Night2Day aims to translate images from nighttime to daytime while maintaining content semantic consistency. It needs to construct a map ℱ with parameters 𝒲, which can be modeled as: ℱ_𝒲: 𝒩→𝒟. As the issues discussed in the introduction, it is difficult to set the optimization target to obtain the parameters 𝒲. However, in surveillance scenes, the fixed scenes provide supervision in the background regions of the image. If we can obtain the disentanglement of the background and foreground, the optimization target can be modeled as: min d_back (ζ^b(x^ref_𝒟), ζ^b(x_𝒩→ 𝒟)) + d_fore (ζ^f(x_𝒩), ζ^f(x_𝒩→ 𝒟)), where d_back and d_fore are distance measurements. ζ^f and ζ^b denote the features of foreground and background. d_back can employ the L1/L2 norm to get direct supervision from x^ref_𝒟. Despite d_fore still needs to carefully design to prevent the domination of x_𝒩, this modeling has greatly reduce the supervision dilemma discussed before. 
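To make the notation concrete, the scene-level reference S_𝒟 and the mask-based split into domain-specific and domain-independent features can be written as a short PyTorch sketch. The variable names are ours, the feature extractor ζ (a pretrained VGG-16 in this paper) is assumed to be provided elsewhere, and the L1 choice for d_s follows the text, while d_i is left as a callable because its design is the subject of the next section.

import torch

def domain_specific_reference(extractor, daytime_images):
    """S_D ~ zeta(mean of daytime images); daytime_images is (N, 3, H, W)."""
    return extractor(daytime_images.mean(dim=0, keepdim=True))

def disentangle(features, mask):
    """zeta^s(x) = M * zeta(x); zeta^i(x) = (1 - M) * zeta(x)."""
    specific = mask * features
    independent = (1.0 - mask) * features
    return specific, independent

def optimisation_target(feats_gen, s_d_ref, feats_night, mask, d_i):
    """min d_s(M * S_D, zeta^s(x_gen)) + d_i(zeta^i(x_night), zeta^i(x_gen))."""
    spec_gen, indep_gen = disentangle(feats_gen, mask)
    _, indep_night = disentangle(feats_night, mask)
    d_s = (mask * s_d_ref - spec_gen).abs().mean()   # L1 norm chosen for d_s
    return d_s + d_i(indep_night, indep_gen)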
In this section, We will introduce our Disentangled Contrastive Networks in detail. First, we introduce the proposed learnable color invariant, which provides robust perception in flare and darkness. Then we present our disentangled representation of the background and foreground based on the element-wise Pearson correlation coefficient. Finally, we incorporate all the modules into the disentangled contrastive learning framework to achieve semantic preserving Night2Day in surveillance scenes. §.§ Learnable Color Invariant The flare and darkness outside of human recognition will also result in an unstable perception of the Convolutional Neural Network (CNN), which manifests as a distribution shift of feature map activations of CNN's layers and ultimately leads to the collapse of the flare and darkness regions of synthetic images, as depicted in <ref>. Fortunately, color invariants can represent object properties regardless of the recording conditions, especially the low illumination. It motivates us to search for a color invariant that is resistant to flare and darkness and has stable perception <cit.>. Searching for color invariants typically relies on a photometric model. Our learnable color invariant employs Geusebroek's invariant edge detectors <cit.>, which is derived from Kubelka-Munk theory <cit.>. It describes the spectrum of light E reflected from an object in the view direction as: E(λ, x) = e(λ, x)((1-ρ_f(x))^2R_∞(λ, x) + ρ_f(x)), where x is a vector that denotes the spatial location on the image plane, and λ is the wavelength of light. e(λ, x) is the spectrum. R_∞ represents the material reflectivity and ρ_f is the Fresnel reflectance coefficient. Simplifying assumptions in <ref>, the derived invariants E, W, C, N, and H present edge detectors invariant to various combinations of illumination changes, including scene geometry, Fresnel reflections, and the intensity and color of the illumination. These invariant properties are the base of our learnable color invariant due to their significance for nighttime surveillance. We introduce the invariant E in the following while leaving the other invariants to the <ref>. The visualizations of each invariant in different illumination conditions are shown in <ref>. Firstly, the Gaussian color model is employed to estimate E and the partial derivatives E_λ, E_λλ. [ E(i,j); E_λ(i,j); E_λλ(i,j) ] = [ 0.06, 0.63, 0.27; 0.3, 0.04, -0.35; 0.34, -0.6, 0.17 ][ R(i,j); G(i,j); B(i,j); ], where i,j are pixel locations of the image. The spatial derivatives E_i and E_j are calculated by convolving E with Gaussian derivative kernel g and standard deviation σ: E_i(i, j, σ) = ∑_t ∈𝐙E(t,j)∂ g(i-t, σ)/∂ i. And the spatial derivatives for E_λ i and E_λλ i are the same to operate <ref> on the estimated E_λ(i,j) andE_λλ(i,j). Following <cit.>, the preliminary color invariant E is computed as: E = √(E_i^2 + E_λ i^2 + E_λλ i^2 + E_j^2 + E_λ j^2 + E_λλ j^2). The invariants are derived under different assumptions, which means each invariant can only provide robustness toward some properties. It is difficult to represent such a complex scene using just one invariant. Such observation is also confirmed by the results in <ref>. This inspires us to design the invariant adaptively. We set a learnable parameters Λ under ensemble learning strategy: ξ = ΛΦ, where Φ denotes the concatenation of color invariants E, W, C, N, and H. ξ is the final learnable color invariant. 
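The construction of the invariant E and of the learnable combination ξ = ΛΦ can be illustrated with the following NumPy/SciPy sketch. The 3×3 Gaussian colour model matrix is the one quoted above; the Gaussian-derivative filtering and the choice of σ = 1 are standard but illustrative, and in the actual model Λ is a trainable parameter optimised jointly with the generator rather than a fixed array.

import numpy as np
from scipy.ndimage import gaussian_filter

# Gaussian colour model matrix quoted in the text (RGB -> E, E_l, E_ll).
GCM = np.array([[0.06, 0.63, 0.27],
                [0.30, 0.04, -0.35],
                [0.34, -0.60, 0.17]])

def colour_invariant_E(rgb, sigma=1.0):
    """Edge-based colour invariant E from an (H, W, 3) RGB image in [0, 1].

    Project RGB onto the Gaussian colour model, take Gaussian-derivative
    responses in the i and j directions for each spectral channel, and
    combine the six responses in quadrature, as in the equation above.
    """
    e = np.tensordot(rgb, GCM.T, axes=([2], [0]))   # (H, W, 3): E, E_l, E_ll
    grads = []
    for c in range(3):
        grads.append(gaussian_filter(e[..., c], sigma, order=(1, 0)))  # d/di
        grads.append(gaussian_filter(e[..., c], sigma, order=(0, 1)))  # d/dj
    return np.sqrt(np.sum(np.stack(grads, axis=0) ** 2, axis=0))

def learnable_colour_invariant(invariants, weights):
    """xi = Lambda * Phi: weighted combination of the stacked invariants
    (E, W, C, N, H); in the model `weights` is learned, here it is fixed."""
    return np.tensordot(weights, invariants, axes=([0], [0]))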
Such design brings two benefits for night scene perception: (1) It can adaptively adjust the weight towards the different illumination conditions, which provides more robust and flexible color invariant representations. (2) With more observation from different invariants, it can provide more information in the input and avoid solving the ill-pose problem with one channel observation of the input and three channel variables of the generated image in Night2Day. Later, the learnable color invariant (LCI) ξ will be put into the ResNet-based generator to estimate the daytime images with the implicit model and described in the following sections. §.§ Element-wise Similarity The disentangled representation for surveillance scenes aims to estimate a mask M, so as to obtain the foreground features ζ^f(x) and background features ζ^b(x) of any image x as follows: ζ^b(x) = M ⊙ζ(x), ζ^f(x) = (1-M) ⊙ζ(x), Referring to the background features ζ(x^ref), computing the similarity between ζ(x^ref) and daytime image ζ(x_𝒟) can provide natural clusters into background and foreground. Based on this intuition, we introduce our Element-wise Similarity (EleSim). As a common practice, we employ a pretrained VGG-16 Network as the feature extractor ζ. Employing the Pearson correlation coefficient as the similarity measurement, we compute the feature-level element-wise similarity between ζ(x_𝒟) and the referring background ζ(x^ref). Denoting features extracted from the k-th layer of VGG-16 as ζ_k, the size of ζ_k is (Bs, C_k, H_k, W_k). The elements are selected from the ζ_k with the size of (Bs, C_k, 1, 1). Computing the similarity between elements from ζ(x_𝒟) and ζ_k(x^ref) in the corresponding location, we can obtain similarity scores for each element with the size of (Bs, 1, H_k, W_k). The formulation of EleSim is derived from the Pearson correlation coefficient: EleSim(ζ_k(i,j), ζ_k^ref(i,j)) = ∑ (ζ_k(i, j)-μ_ζ_k(i,j))(ζ_k^ref(i, j)-μ_ζ_k(i,j))√(∑ (ζ_k(i, j)-μ_ζ_k(i,j))^2 ∑(ζ^ref_k(i, j)-μ_ζ_k^ref(i,j))^2) , where i,j denote the locations in height and width of the input features. μ_ζ_k(i,j) is the mean of the feature on the second dimension. The element with a higher score means higher similar to the background. With a sigmoid function and a threshold value, EleSim can be turned into the mask M_k with the size of (Bs, 1, H_k, W_k). Following the <ref>, the disentangled representation is obtained. We visualize some disentangled results in different layers of VGG-16 in <ref>. The disentangled results are growing better in deeper layers. It ensures that such disentangled representation can provide good patterns for following the processing step. §.§ Disentangled Representation The EleSim operator can provide a good disentangled representation on a daytime domain 𝒟, which benefits from the adequate training of VGG-16 and the distinguishing features of daytime images. However, these advantages are both absent in the nighttime. Few discriminative features in nighttime images lead to disastrous consequences in computing the similarity, especially in some scenes with extreme lightness conditions. For a generated daytime image x_𝒩 → 𝒟, the ultimate objective is becoming a realistic natural daytime image. In view of the good effectiveness on daytime images and the non-optimization nature of EleSim, the disentangling effectiveness of EleSim is also a measurement for the generated quality. We perform the EleSim on the generated daytime to produce the mask M_𝒩 → 𝒟 and let it jointly optimize with the generated daytime image. 
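The EleSim computation and the conversion of the similarity scores into a background mask can be sketched in a few lines of PyTorch. The sigmoid temperature and the 0.5 threshold are illustrative assumptions, since the text specifies only that a sigmoid function and a threshold value are applied to the scores.

import torch

def ele_sim(feat, feat_ref, eps=1e-8):
    """Element-wise similarity between two VGG feature maps (B, C, H, W).

    For each spatial location, compute the Pearson correlation between the
    C-dimensional feature vectors of `feat` and of the reference background
    `feat_ref`, giving a (B, 1, H, W) similarity map.
    """
    f = feat - feat.mean(dim=1, keepdim=True)
    r = feat_ref - feat_ref.mean(dim=1, keepdim=True)
    num = (f * r).sum(dim=1, keepdim=True)
    den = torch.sqrt((f * f).sum(dim=1, keepdim=True) *
                     (r * r).sum(dim=1, keepdim=True) + eps)
    return num / den

def background_mask(feat, feat_ref, threshold=0.5, temperature=10.0):
    """Background mask: sigmoid of the similarity, then a hard threshold."""
    soft = torch.sigmoid(temperature * ele_sim(feat, feat_ref))
    return (soft > threshold).float()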
Reference to the ζ_k(x^ref_𝒟), the optimization target is modeled as: ℒ_back = ∑_k^N M_𝒩→ 𝒟, k⊙|| ζ_k(x_𝒩→ 𝒟) - ζ_k(x^ref_𝒟)||_1, = ∑_k^N||ζ_k^b(x_𝒩→ 𝒟) - ζ_k^b(x^ref_𝒟)||_1 where k denotes the results in k-th layer of VGG-16 ζ. In <ref>, the M_𝒩→ 𝒟, k and ζ_k(x_𝒩→ 𝒟) are updated together in each learning step. M_𝒩→ 𝒟 getting closer to the real daytime image, the mask M_𝒩→ 𝒟, k will be convergent to the real mask of the original nighttime image x_𝒩. Such strategy will force the generated image to be consistent with the domain-specific features of daytime, which will also push the mask of generated image M_𝒩→ 𝒟, k produce a correct disentangled representation in the background regions. Similar strategy is also applied in modeling the foreground. Chosen the generated daytime image to produce the mask, the second item loss function can be modeled as follow ζ^f_k(x_𝒩) = (1 - M_𝒩→ 𝒟, k) ⊙ζ_k(x_𝒩→ 𝒟) ζ^f_k(x_𝒩 → 𝒟) = (1 - M_𝒩→ 𝒟, k) ⊙ζ_k(x_𝒩 → 𝒟), ℒ_fore = d_f (ζ^f(x_𝒩), ζ^f(x_𝒩→ 𝒟)). The challenge is the modeling of distance measurement d_i. Different from the <ref>, the foreground features do not have the direct supervision from the daytime image x_𝒟∈𝒟. We have to employ the original nighttime image x_𝒩 → 𝒟 to provide the supervision. However, introducing the massive night domain information is adverse to our optimization objective. A distance measurement that model the abstract semantic rather the specific features, i.e.: the texture and color, is urgent needed. Thus, following the CUT <cit.> and <ref>, we introduce our disentangled contrastive learning. The contrastive learning aims to provide a graph-like structure for the image by computing the similarity between any two elements. To construct a successful contrastive learning, two points are significant: the hard negative examples digging strategy and the sample quantity. Too few samples in computing the contrastive loss will have serious consequences in performance, which is common in the foreground modeling. Based on our disentangled representation, we construct a contrastive learning strategy elaborated as follows. For each disentangled representation M, we will obtain a element-wise similarity scores before obtain the mask, which is denoted as P. P presents the possibility of which each element is the foreground. By selecting the nearest neighbors around the 1 matrix, we can obtain the elements that is most similar to the foreground of generated image, which is also the hard negative examples for each other. Thus, we construct the hard negative examples and make sure the samples quantity will not be too small. Our contrastive loss is formulated as: ℒ_fore = -∑_k^Nlog exp(ζ_k^f(x_𝒩 → 𝒟)^Tζ_k^f(x_𝒩)/ τ)/exp(ζ_k^f(x_𝒩 → 𝒟)^Tζ_k^f(x_𝒩)/ τ)) + exp(ζ_k^b(x_𝒩 → 𝒟)^Tζ_k^b(x_𝒩)/ τ), where τ is the hyper-parameter temperature. k denotes the number of layer in VGG-16. In practice, the mask of ζ_k^f is obtained from the generated image x_𝒩 → 𝒟. Such strategy makes sure that the disentangled representation can also focus on the graph structure. Finally, we construct our merging operator with adversarial training so that we can model the merging operator into the parameters w in ℱ. Following LSGAN<cit.>, the adversarial loss is formulated as: ℒ_adv(ℱ) = ||D(x_𝒩 → 𝒟)-1||_2^2, ℒ_adv(D) = ||D(x_𝒟)-1||_2^2 + ||D(x_𝒩 → 𝒟)||_2^2, where D denotes the discirminator network. The final loss function is formatted as : ℒ_total(ℱ) = ℒ_adv(ℱ) + ℒ_back + ℒ_fore, ℒ_total(D) = ℒ_adv(D) . 
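A simplified PyTorch sketch of the three loss terms is given below. It keeps the structure of the equations (a masked L1 background term summed over the selected VGG layers, an InfoNCE-style foreground term whose negative pair is built from the background features, and the LSGAN objectives), but collapses the per-element sampling and hard-negative mining described above into single pooled foreground/background vectors, so it should be read as a schematic of the objective rather than as the training code; the feature normalisation and the mean-instead-of-sum reduction are our stabilising choices.

import torch
import torch.nn.functional as F

def background_loss(feats_gen, feats_ref, masks):
    """L_back: masked L1 between per-layer VGG features of the generated
    daytime image and the daytime reference background."""
    return sum((m * (g - r)).abs().mean()
               for g, r, m in zip(feats_gen, feats_ref, masks))

def foreground_contrastive_loss(feats_gen, feats_night, masks, tau=0.07):
    """L_fore: pull the generated foreground towards the nighttime foreground
    while pushing it away from the background pair, per layer."""
    loss = 0.0
    for g, n, m in zip(feats_gen, feats_night, masks):
        fg_g = F.normalize(((1 - m) * g).flatten(1), dim=1)
        fg_n = F.normalize(((1 - m) * n).flatten(1), dim=1)
        bg_g = F.normalize((m * g).flatten(1), dim=1)
        bg_n = F.normalize((m * n).flatten(1), dim=1)
        pos = (fg_g * fg_n).sum(dim=1) / tau
        neg = (bg_g * bg_n).sum(dim=1) / tau
        loss = loss - torch.log(pos.exp() / (pos.exp() + neg.exp())).mean()
    return loss

def lsgan_losses(d_fake, d_real=None):
    """LSGAN objectives: the generator pushes D(fake) to 1; the discriminator
    pushes D(real) to 1 and D(fake) to 0."""
    g_loss = ((d_fake - 1.0) ** 2).mean()
    if d_real is None:
        return g_loss
    d_loss = ((d_real - 1.0) ** 2).mean() + (d_fake ** 2).mean()
    return g_loss, d_loss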
§ EXPERIMENTS §.§ Datasets To support our research on Night2Day for surveillance, a new dataset, NightSuR, is constructed in this paper. It contains 6 fixed scenes with 6574 images in total. Firstly, we build an indoor dark scene (Darkness) to simulate the night and day scenes in surveillance views. The light sources are controlled manually to simulate night and day. In Darkness, 167 pairs of images are collected in bright and dark environments. Secondly, two outdoor scenes (Traffic and Pedestrian-b) are captured with fixed views of cameras. Traffic aims to simulate the data from the traffic monitor. Pedestrian-b presents the surveillance with tiny targets. Thirdly, three scenes from real surveillance cameras are selected (Pedestrian-g, Crowds, and Vehicle). As discussed above, the background reference is the individual supervision for Night2Day in surveillance. Thus, an empty background image is captured from the time when the foreground is sparse. For the scenes that are impractical to capture a background image (Traffic), we synthesize a background image by computing the average value across all the images in the scene. Overall, our dataset covers widespread scenes and light conditions for surveillance, especially for some extreme environments (e.g., Vehicles with flare, and Darkness with extremely low lightness). Additionally, it contains three scenes for pedestrians, which is common in surveillance and raises more challenges in detecting less orthogonal edges. The comparison of NightSuR and former datasets is demonstrated in <ref> to conclude the necessity of the NightSuR in some significant characteristics. §.§ Experimental Settings Experiments are conducted on the proposed dataset for Night2Day in surveillance scenes. Moreover, we also evaluate our method for driving scenes that are collected from the FLIR dataset to explore the extensibility of methods. Evaluation Metric. Following the common practice, we employ fréchet inception distance (FID) scores <cit.> to evaluate the quality of translated images. The FID scores results are shown in <ref>.We compare DiCo with several effective image translation methods for qualitative comparison. Please refer to <ref> and <ref> for the reconstruction result. Implementation Details. We implement our framework with a ResNet-based generator and PatchGAN-based discriminator. Our disentanglement module is based on the ImageNet-pretrained VGG-16, where the employed layers of VGG are relu3-1 and relu4-1. It should be noted that, in Darkness dataset, the paired daytime data are not utilize to keep the unsupervised setting. Additionally, the results of compared methods are reproduced from their released source code. §.§ Results on NightSuR In this paper, several unsupervised image-to-image translation methods are compared: CycleGAN <cit.>, CUT <cit.>, NEGCUT <cit.> and F-LSESim <cit.>. Some Night2Day methods for driving scenes, ForkGAN <cit.> and InstaFormer <cit.>, are also in comparison. Note that InstaFormer <cit.> here is trained in unsupervised settings without a detection box as prior. We also compare two low-light enhancement methods to display the difference between Night2Day and low-light enhancement. In <ref>, our method outperforms state-of-the-art among all the related works. With the benefit of regression loss applied to the background, DiCo can easily outperform in the Pedestrian-b in which the background is the major component. It could also explain the poor performance in Flir and suboptimal results in Traffic. 
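For completeness, the FID evaluation reported above could be set up as in the hedged sketch below, here using the torchmetrics implementation; the Inception feature layer, the random placeholder tensors and the tiny batch sizes are illustrative only, and in practice the score is computed over the full set of translated test images.

import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# `real_batch` holds daytime test images and `fake_batch` the translated
# nighttime images, both as uint8 tensors of shape (N, 3, H, W); many more
# samples than shown here are needed for a stable FID estimate.
real_batch = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)
fake_batch = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)

fid.update(real_batch, real=True)
fid.update(fake_batch, real=False)
print(float(fid.compute()))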
Crowds is crowded with little supervision of the background, which contributes to poor FID scores. Notably, despite the poor FID scores in Crowds and Flir, DiCo still achieves SOTA with the help of a learnable color invariant and disentangled contrastive learning strategy. This demonstrates the efficacy of our modules and reveals common issues of unsupervised image translation in complex scenes. In addition, the DCLGAN <cit.> is a strong baseline that reaches the suboptimal FID scores. It combines the strengths of cycle consistency training and contrastive learning, however, has its boundary under the design of traditional task formulation. The qualitative results are displayed in <ref> and <ref>. <ref> shows the results of some extremely dark and flaring scenes. The <ref> are some common application scenes. Significantly, in the Vehicle, DiCo can effectively manage flare using color invariant prior and disentangled contrastive learning, whereas others cannot. Moreover, due to the direct supervision from the background, DiCo can reconstruct the background with more details than others. Although F-SeSim <cit.> has a diminished capacity to reconstruct color information, it excels in semantic consistency protection for a variety of scenes. It motivates us to apply our method to additional scenes in future projects. In supervised settings, InstaFormer <cit.> is a strong baseline because it is the only method with a vision transformer backbone. However, the vision transformer's potential remains untapped. Due to the data dependence of the vision transformer, more training data and steps may be helpful. This paper is, as far as we are aware, the first to successfully perform Night2Day on the human body, which lacks sufficient orthogonal edges like street scenes. Although the performance is not yet excellent, it demonstrates the possibility of achieving a higher standard of non-orthogonal Night2Day. §.§ Semantic Segmentation and Detection The semantic segmentation and detection experiments are also conducted to prove the ability in keeping the semantic consistency of DiCo. Specifically, we employ Faster-RCNN <cit.> as the detection model and Deeplabv3 <cit.> as the semantic segmentation model. The ResNet101 is employed as the backbone of Faster-RCNN and is trained on the COCO. The Deeplabv3 is trained on the Cityscapes dataset. We conduct inference with the two models on the generation results of DiCo and the nighttime images. The visualization of results is in <ref> and <ref>. DiCo mitigates the degradation of nighttime images in the following aspects. (1) It enhances some insignificant local patterns in nighttime images. For example, in the second line of <ref>, DiCo enhances the insignificant features of far-small objects in the nighttime (The buses and the children). (2) It corrects some confused local patterns at night. For example, in the second line of <ref>, DiCo mitigates the light on the ground that is similar to the headlight of vehicles, and finally corrects the detection results. (3) It adjusts the global features of nighttime images. For example, in <ref>, DiCo adjusts the global feature and corrects the pixel-wise classification, and achieves instance-aware results at night. §.§ Expand to the Auto-Driving Scene Experiments in the driving scene datasets are also conducted to explore the expansion of DiCo. The FLIR and BDD100K are popular auto-driving datasets with sufficient daytime and nighttime images. 
For the FLIR dataset, we randomly chose 1000 pairs of nighttime and daytime images from the FLIR dataset to achieve a balanced distribution of nighttime and daytime images marked as FLIR1k. For the BDD100k, we choose clear daytime images and nighttime images in various kinds of weather to construct the Night2Day dataset. We conduct DiCo in comparison with the former methods. The background in the driving scene is constructed with the average of a batch of daytime images. The quantitative results of FID scores are in the <ref>. The qualitative results are in the <ref> and <ref>. We can obtain two significant observations from the experiments. First, DiCo still shows expressive results in comparison with former work. It inspires us that disentangled representation is a general framework. It essentially provides the direct supervision for explicit invariance in target domain. In this work, we represent such explicit invariance in a simple way: the expectation of the target domain. It is within our expectation that improving such representation can provide even more surprising results. Second, it is obvious that the performance of DiCo is better on the BDD100k than the FLIR1k. It indicates that the perception of complex scenes still relies on the large-scale dataset. However, these experiments prove the expansion of DiCo and inspire us to develop disentangled representations with more generalized prior knowledge and solve problems in more general scenes. §.§ Ablation Study To figure out how these modules influence DiCo, our ablation study is conducted under these settings: DiCo w\o LCI, DiCO w\o L_fore, and DiCo w\o L_back. Quantitative results are summarized in <ref>. The ablation of the three modules emphasizes different aspects of DiCo. Under the DiCo w\o LCI, scenes with gentle light conditions (Traffic and Flir) are less impacted while the dark scenes (Pedestrian-b, Crowds, and Darkness) degrade the most. It confirms that the LCI module is important for accessing stable perception in various light conditions during the nighttime. Surprisingly, the results in the Vehicle with flare are less degraded. It shows that contrastive learning also contributes to the perception in flare scenes. Under the DiCo w\o L_fore settings, the model collapses in the driving scenes as shown in the column of Flir in <ref> and <ref>. In surveillance scenes, our model still performs well with the help of the disentangled representation. DiCo w\o L_fore performs suboptimal scores in Pedestrian-b, which proves that our disentangled representation can control the semantic consistency effectively in surveillance, despite being limited by the reference background. The unexpected degradation in Darkness indicates that the disentangled contrastive learning also contributes to the perception of the extreme environment. Such observation corresponds to the results of CUT <cit.> and the ablation results of LCI in Vehicles. Under the DiCo w\o L_back setting, as shown in <ref>, the performance of DiCo w\o L_back is closer to the CUT <cit.>, but a bit better with our hard negative examples mining strategy and LCI module. As expected, DiCo degrades the most in the Pedestrian-b and Darkness, in which the background predominates the whole image. Moreover, the FID scores during the training process are displayed in <ref>. It shows the effectiveness of each module in DiCo, and presents that the whole DiCo model is more stable in training and faster in convergence, compared to other variants . 
§.§ Limitations Although DiCo performs well in many surveillance scenes and extends reasonably to driving scenes, it still has some limitations. First, it still produces some distortion on small targets; high-resolution methods may help and will be considered in our future work. Second, in driving scenes DiCo does not show a clear advantage over previous methods, even though it was designed for surveillance scenes. The driving-scene experiments suggest that disentangled representation is a general framework for describing the real world, and considering disentanglement from a more general perspective is left for future work. Third, Night2Day on complex scenes still relies on large-scale datasets: performance on BDD100K is distinctly better than on FLIR1k. In future work we aim to construct prior knowledge of complex scenes and reduce this dependence on large-scale data. § CONCLUSION This paper presents a novel solution to nighttime surveillance that translates images from night to day while maintaining semantic consistency. To achieve this goal, we propose the Disentangled Contrastive learning (DiCo) method, which primarily consists of a learnable color invariant, a disentangled representation, and a contrastive learning strategy. DiCo outperforms strong image translation baselines, which shows the effectiveness of our method and the soundness of the task formulation for nighttime surveillance. The strong visual performance of DiCo also confirms that Night2Day is an effective solution to nighttime perception. In addition, a new Night2Day dataset for surveillance is created to support this research, containing various surveillance scenes and light conditions. In the future, we will continue to explore perception in extreme conditions and expand the boundary of human cognition.
http://arxiv.org/abs/2307.03998v1
20230708154349
Lightweight Improved Residual Network for Efficient Inverse Tone Mapping
[ "Liqi Xue", "Tianyi Xu", "Yongbao Song", "Yan Liu", "Lei Zhang", "Xiantong Zhen", "Jun Xu" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Lightweight Improved Residual Network for Efficient Inverse Tone Mapping Liqi Xue, Tianyi Xu, Yongbao Song, Yan Liu, Lei Zhang, Xiantong Zhen, and Jun Xu This work was sponsored by the National Natural Science Foundation of China (No. 62002176, 62176068, and 12101334), CAAI-Huawei MindSpore Open Fund, the Natural Science Foundation of Tianjin (No. 21JCQNJC00030), and the Fundamental Research Funds for the Central Universities. Corresponding author: Xiantong Zhen ([email protected]) and Jun Xu ([email protected]). Liqi Xue, Tianyi Xu, Yan Liu, and Jun Xu are with the School of Statistics and Data Science, Nankai University, Tianjin 300071, China. Yongbao Song is with the School of Mathematical Science, Nankai University, Tianijn 300071, China. Lei Zhang and Xiantong Zhen are with the Computer Science College, Guangdong University of Petrochemical Technology, Maoming 525000, China. August 12, 2023 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The display devices like HDR10 televisions are increasingly prevalent in our daily life for visualizing high dynamic range (HDR) images. But the majority of media images on the internet remain in 8-bit standard dynamic range (SDR) format. Therefore, converting SDR images to HDR ones by inverse tone mapping (ITM) is crucial to unlock the full potential of abundant media images. However, existing ITM methods are usually developed with complex network architectures requiring huge computational costs. In this paper, we propose a lightweight Improved Residual Network (IRNet) by enhancing the power of popular residual block for efficient ITM. Specifically, we propose a new Improved Residual Block (IRB) to extract and fuse multi-layer features for fine-grained HDR image reconstruction. Experiments on three benchmark datasets demonstrate that our IRNet achieves state-of-the-art performance on both the ITM and joint SR-ITM tasks. The code, models and data will be publicly available at <https://github.com/ThisisVikki/ITM-baseline>. Inverse tone mapping, improved residual block, lightweight network, inference efficiency. § INTRODUCTION High dynamic range (HDR) images defined in Rec.2020 <cit.> exhibit clearer details in highlights and shadows, as well as smoother transitions on brightness and color, than the standard dynamic range (SDR) images with 8-bit color depth defined in Rec.709 <cit.>. Owing to these benefits, the manufacturers of television and mobile devices make a push to bring HDR contents to demanding consumers. Though HDR display devices allow more visually-pleasing contents in HDR images by Dolby Vision, HDR10, and HDR10+ technologies <cit.>, the SDR images in 8 bit-depth would be featureless when being directly broadcast on HDR display devices <cit.>. 
To present the SDR images closer to human perception on HDR display devices, it is essential to convert SDR images into comfortable HDR ones without color or information loss. This challenging problem is known as inverse tone mapping (ITM) <cit.>, which has been studied in a more general sense rather than expanding the luminance range of camera raw image files in linear color space <cit.>. Early image ITM methods mainly resort to global or local image processing operators for promising performance. Global ITM operators <cit.> usually utilize reverse tone mapping functions to extend the dynamic range of image pixels. But this would bring distorted details and uneven transitions between neighborhood pixels in different levels of brightness. Local ITM operators <cit.> expand the image bit depth in a spatially-varying manner. Unfortunately, these methods would fail to preserve the global consistency of luminance ranges across an image. Recently, deep neural networks have been employed to tackle the ITM task from a data-driven perspective <cit.>. These networks usually contain strong backbones with complex architectures, which may require huge computational costs for promising ITM performance. Besides, the methods of <cit.> simultaneously tackle the joint image super-resolution (SR) and ITM (joint SR-ITM) tasks by separating the base and detail components from the input image with extra image decomposition <cit.>. However, this would further increase the model complexity and computational costs of current joint SR-ITM methods over previous ITM ones. Despite their promising performance, the above-mentioned ITM methods suffer from two main limitations. Firstly, the complex model architectures obscure the core of the ITM problem, that is, “expanding the luminance range of a low dynamic range image to produce a higher dynamic range image” <cit.>. This problem of extending luminance range or color bit depth is similar to the tasks of image super-resolution <cit.> and video frame prediction <cit.>, all of which aim to increase the highly-correlated information of the input image at different aspects. Therefore, it is possible to tackle the ITM problem by simple and lightweight neural networks, as inspired by the concurrent works in image super-resolution <cit.> and video frame prediction <cit.>. Secondly, the huge computational costs also limits the prevalence of ITM methods from being deployed into edge devices. For example, to perform ITM on a 4K-resolution (3840×2160) image, Deep SR-ITM <cit.> needs 2.50M parameters, ∼1.05×10^4G FLOPs, and a speed of 777.95ms, while HDRTVNet <cit.> needs 37.20M parameters, ∼1.41×10^4G FLOPs, and a speed of 1513.43ms. In this paper, we leverage the popular residual learning recipe <cit.> to develop a simple and lightweight Improved Residual Network (IRNet) for efficient ITM performance. Specifically, we propose an Improved Residual Block (IRB) with simple modifications of the residual block <cit.> for fine-grained feature extraction and fusion. On network design, we also adopt the plain residual learning framework to avoid complex multi-branch architecture <cit.>. Experiments on three benchmark datasets, including our newly collected one, show that our IRNet is very efficient and outperforms ITM methods. As shown in Figure <ref>, our IRNet only needs ∼0.13M parameters and ∼0.22×10^4G FLOPs at a speed of 398.33ms to process a 4K-resolution image, which outperforms state-of-the-art methods on the ITM task. 
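The parameter counts and running times quoted above are the kind of numbers typically gathered with a small profiling routine; the sketch below is a generic illustration of such a measurement in PyTorch, not the authors' benchmarking script (FLOPs/MACs are usually obtained with external counters such as fvcore or thop). The 4K input resolution matches the setting discussed here; the GPU device is an assumption.

```python
import time
import torch

def profile_model(model: torch.nn.Module, input_shape=(1, 3, 2160, 3840),
                  device: str = "cuda") -> tuple[int, float]:
    """Return (number of parameters, wall-clock time in ms) for one forward pass."""
    model = model.to(device).eval()
    n_params = sum(p.numel() for p in model.parameters())
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        model(x)                      # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_params, (time.perf_counter() - t0) * 1e3
```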
On the HDRTV1K dataset <cit.>, our IRNet surpasses AGCM+LE <cit.> in visual quality and by 0.59dB in PSNR, with only one tenth of its parameters (∼0.13M vs. ∼1.41M). Besides, our IRNet also achieves superior performance to Deep SR-ITM <cit.> and JSI-GAN <cit.> on joint SR-ITM. In summary, our main contributions are three-fold: * We develop a lightweight Improved Residual Network (IRNet) for efficient image inverse tone mapping (ITM). Our IRNet is built upon a new Improved Residual Block (IRB) customized from the popular residual block for fine-grained feature extraction and fusion. * We collect a new test set for ITM, i.e., ITM-4K, which contains 160 4K-resolution images of versatile scenes with ground-truth HDR images. It serves as a good supplement to HDRTV1K <cit.>, which has 117 test images. * Experiments on the HDRTV1K dataset <cit.>, our new ITM-4K test set, and the test set in <cit.> show that our lightweight IRNet is efficient and achieves impressive quantitative and qualitative results on the ITM and joint SR-ITM tasks. Comprehensive ablation studies also validate the effectiveness of our model design. The rest of this paper is organized as follows. In <ref>, we summarize the related work. In <ref>, we present the proposed Improved Residual Network (IRNet). In <ref>, we perform experiments to validate the efficiency of our IRNet on ITM and joint SR-ITM. In <ref>, we conclude this paper. § RELATED WORK §.§ Inverse Tone Mapping The inverse tone mapping (ITM) task aims to transform a standard dynamic range (SDR, usually 8-bit) image into a high dynamic range (HDR, usually 16-bit) image. This problem is ill-posed due to the information loss in the luminance ranges of SDR images. Early explorations of the ITM task can be divided into global and local ITM operators. While global ITM operators uniformly apply linear expansion <cit.>, cross-bilateral filtering <cit.>, or a gamma-based expansion <cit.> to all pixels or patches of an input SDR image, local ITM operators <cit.> reconstruct highlight regions or expand the luminance range of each pixel or patch according to the local information around it. Previous works show that global ITM operators <cit.> can avoid undesired artifacts, but produce rough details and unnatural transitions because they ignore local detail reconstruction. On the contrary, local ITM operators <cit.>, applied adaptively to small areas, fail to capture the global consistency of luminance ranges. To deal with the issues of locally undesired artifacts and global luminance consistency raised by the early methods mentioned above, many recent ITM methods <cit.> turn to deep convolutional neural networks (CNNs). Early CNN-based methods <cit.> merge low dynamic range (LDR) images captured under multiple exposure settings to produce an HDR image. Meanwhile, the work of <cit.> presents a multi-branch CNN that performs ITM from both global and local perspectives. Then, the method of <cit.> introduces a feature masking strategy to address the undesired artifacts that emerge during image reconstruction. Recently, the physical principles of HDR image formation have also been incorporated into the design of ITM CNNs <cit.>. For example, HDRTVNet <cit.> consists of an adaptive global color mapping network, a local enhancement network, and a highlight generation network.
Despite their promising performance, most of these methods require huge parameter amounts and computational costs, which hinders them from being deployed into resource-constrained edge devices. In this paper, we aim to develop a lightweight yet efficient ITM network. §.§ Joint Super-Resolution and Inverse Tone Mapping Joint Super-Resolution and Inverse Tone Mapping (joint SR-ITM) aims to simultaneously increase the spatial resolution and dynamic range of an input low-resolution and standard dynamic range (LR-SDR) image. Deep convolutional neural networks have also been applied to tackle the joint SR-ITM task <cit.>. Considering that the luminance ranges of different image areas should be expanded adaptively, the method of <cit.> firstly decomposes an SDR image into a low-frequency structure component and a high-frequency detail component, and then processes the two components by two different but correlated network branches. The separation is implemented by guided-filtering <cit.>, which is widely used in image smoothing <cit.>. This framework is also employed in the subsequent work of JSI-GAN <cit.>. To tackle multi-frame SDR inputs, Lecouat <cit.> reformulated the joint SR-ITM task as an optimization problem to fuse multiple LR-SDR raw image bursts in different exposures into an HR-HDR image. Tan <cit.> developed a two-branch network to fuse a series of LR-LDR dynamic images into an HR-HDR one by estimating the motion cues by a deformable module <cit.>. Though with appealing performance, the image decomposition based methods usually require multi-branch network architectures for the joint SR-ITM task, which, however, usually implies a considerable growth of parameter amounts and computational burden to tackle parallel feature extraction and elaborate interaction. In this paper, we propose a lightweight ITM network for inference efficiency, inspired by the merits of lightweight image super-resolution networks <cit.>. §.§ Efficient Image Restoration For the goal of inference efficiency, network compression and acceleration techniques are exploited to reduce the computational burden and memory consumption of image restoration methods <cit.>. One popular solution is employing Laplacian pyramid <cit.> to decompose the input image into a low-resolution base layer consuming the majority of computations and several high-resolution detail layers requiring a few computations <cit.>. Bilateral grid learning <cit.> is also utilized to learn approximate operators on the downsampled images and execute the learned operators to the original images. Other inference strategies like recursive learning <cit.> and look-up table <cit.> are also exploited to accelerate image restoration networks. Instead of developing new methods, some recent works accelerate existing restoration networks by model slimming <cit.> or input-adaptive inference <cit.>. In this paper, we develop an efficient ITM network that can well process a 4K-resolution SDR image with ∼134K parameters and 0.4 seconds. § PROPOSED METHOD §.§ Motivation In the scene-referred workflow <cit.>, an HDR raw image in camera color space (usually in 16-bit color depth) will be tone mapped to an SDR RGB image in display-referred color space (usually in 8-bit color depth). This process is usually implemented in a camera imaging pipeline containing multiple image processing operations, during which different pixels usually undergo different compression strengths on dynamic ranges to produce visually pleasing image contrasts <cit.>. 
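In its crudest global form, inverting such a dynamic-range compression amounts to applying a fixed per-pixel expansion curve to the SDR image. The sketch below illustrates this classical gamma-style expansion only to make the contrast with learned, spatially adaptive ITM concrete; the exponent and peak-luminance values are arbitrary placeholders rather than parameters taken from any cited operator.

```python
import numpy as np

def gamma_expand(sdr_uint8: np.ndarray, gamma: float = 2.4,
                 peak_luminance: float = 1000.0) -> np.ndarray:
    """Naive global inverse tone mapping: normalize an 8-bit SDR image,
    expand it with a power law, and rescale to a target peak luminance."""
    x = sdr_uint8.astype(np.float64) / 255.0   # code values in [0, 1]
    expanded = np.power(x, gamma)              # non-linear expansion of mid-tones vs. highlights
    return expanded * peak_luminance           # illustrative absolute scale (e.g., nits)
```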
The task of inverse tone mapping (ITM) aims to increase the dynamic range of light intensity (or luminance) in an SDR image. An SDR image in 8-bit depth can display a maximum of around 16.7 million shades of color, while an HDR image in 10-bit depth can display a maximum of around 1.07 billion shades of color, allowing it to exhibit more colors with better visual quality <cit.>. To better understand the luminance difference between SDR and HDR images, in Figure <ref> (a), we visualize the maximum and minimum luminance values of 117 HDR images from the test set of <cit.>, as well as the luminance values at the corresponding positions of the paired SDR images. We observe that there are obvious gaps between the maximum values of HDR images and the values at the corresponding positions of the paired SDR images, whilst slight differences between the minimum values of the HDR images and the values at the corresponding positions of the paired SDR images. This indicates that high-luminance values change more greatly than low-luminance ones. Besides, the luminance values of different HDR images also show distinct gaps when compared to those values at the corresponding positions in the paired SDR images. For promising ITM performance, the ITM methods <cit.> suffer from complex network backbones with huge parameter amounts and computational costs. To implement efficient ITM, in this paper, we propose to develop a simple, lightweight, and efficient Improved Residual Network (IRNet) by slightly modifying the residual block <cit.>. As shown in Figure <ref> (b), we concatenate the intermediate feature map F_1 after the LeakyReLU in the residual block with the fused feature of F_in and F_2 (more details will be presented in <ref>). Our IRNet shows clear improvements, especially in the bright area near the sun, over that without using the feature map F_1 on the image “028” from the HDRTV1K test set <cit.>, as shown in Figure <ref> (b). Along the highlighted lines, the green line of our IRNet enjoys closer approximation to the blue line of the “Ground Truth” HDR image than the red line of our IRNet without using the intermediate feature F_1 (denoted as “IRNet w/o F_1”). In Figure <ref> (c), we plot the ratios of luminance values of the highlighted lines by our IRNet and “IRNet w/o F_1”, which also validates that our IRNet achieves better approximation to the “Ground Truth” than the IRNet without F_1. This validates the effectiveness of our IRB over the residual block for ITM. Adaptive luminance extension is also important for the ITM task. For this goal, many joint SR-ITM methods <cit.> performed image or feature decomposition to extract and fuse multi-scale feature maps. However, these ITM networks with decomposition techniques often suffer from complex network structures with heavy computational costs (Table <ref>). For efficiency consideration, we design our IRNet as a simple and lightweight network by employing the popular residual block <cit.> as a proper backbone for our IRNet. The promising results in Figure <ref> (b) by our IRNet without using F_1 motivates us to further improve our IRNet for better ITM performance. §.§ Proposed Improved Residual Network Our IRNet first extracts the initial feature map using a 1×1 convolution layer (instead of 3×3 one to reduce the parameter amounts). Then we cascade n Improved Residual Blocks (IRBs) proposed for fine-grained feature extraction and fusion. The details of our IRB block will be introduced later. 
To boost the ITM performance, each IRB is followed by a Contrast-aware Channel Attention (CCA) layer <cit.>. We also use a skip connection to sum the feature maps before the IRB block and after the CCA layer. Improved Residual Block (IRB). The proposed IRB block is built upon the residual block <cit.>, which achieves great success in many computer vision tasks <cit.>. As shown in Figure <ref> (a), the residual block <cit.> contains two 3×3 convolution layers with an activation function (here we replace ReLU by LeakyReLU) between them; the output feature is added to the input feature F_in and activated by another LeakyReLU function. Built upon the residual block, our IRB block is designed to keep our IRNet as simple as possible while achieving better ITM performance. This is made feasible by fully exploiting the multi-layer feature maps within the IRB block. To this end, given the input feature F_in∈ℝ^H× W× C, our IRB first refines it by a 3×3 convolution layer and a LeakyReLU activation function. The extracted feature F_1∈ℝ^H× W× C/2 is further refined in our IRB by a second 3×3 convolution layer to output the feature F_2∈ℝ^H× W× C: F_1 = LeakyReLU(Conv_3×3(F_in)), F_2 = Conv_3×3(F_1). Then our IRB uses a skip connection and a Conv_1×1 to fuse F_in and F_2 and obtain the fusion feature F_fuse: F_fuse = Conv_1×1(F_in+F_2). Finally, different from the residual block, our IRB explicitly concatenates the intermediate feature F_1 with the fusion feature F_fuse to produce the output feature F_out as follows: F_out=Conv_1×1(Concat(F_fuse,F_1)). We visualize the structure of our IRB block in Figure <ref> (b). Compared with the original residual block, our IRB better extracts and utilizes the multi-layer features, which correspond to spatially adaptive luminance areas for ITM. As shown in Figure <ref> (a), compared with the IRNet w/o F_1, our IRNet restores the luminance of the HDR image closer to the ground truth, especially in the highlight regions. Even though popular encoder-decoder frameworks like U-net <cit.> or Uformer <cit.> could be utilized here to extract strong multi-scale features, this would bring significant growth in parameter amounts and computational costs <cit.>. Through a simple modification to the residual block, the proposed IRB serves as a lightweight building block in our IRNet for efficient ITM performance. The mean feature map along the channel dimension can reflect the luminance information of that feature <cit.>. In Figure <ref> (b), we visualize the mean feature maps of F_in, F_1, F_2, F_fuse, and F_out extracted by our IRNet and "IRNet w/o F_1". One can see that the mean feature map of F_1 extracted by our IRNet exhibits higher luminance in the sky area around the sun than that of "IRNet w/o F_1". Due to the lack of the luminance information carried by the intermediate feature F_1, "IRNet w/o F_1" produces stronger contrasts in the input feature F_in of the IRB blocks and darker luminance around the sun in the output feature F_out than our IRNet, which uses F_1 in the IRB block. Contrast-aware Channel Attention (CCA). To preserve image details, we utilize a CCA layer <cit.> after each IRB block. As shown in Figure <ref> (c), the CCA layer consists of contrast computation, two 1×1 convolution layers interleaved with a ReLU function, a sigmoid function, and a skip connection between the input and output features to help gradient propagation.
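A compact PyTorch sketch of the IRB defined by the equations above, together with the CCA layer just described (whose contrast statistic is formalized in the equation that follows), may make the data flow easier to follow. Everything not stated explicitly in the text (the LeakyReLU negative slope, the channel-reduction ratio in CCA, and the multiplicative form of the CCA gating) is an assumption on our part, so this should be read as an illustration rather than the authors' reference implementation.

```python
import torch
import torch.nn as nn

class IRB(nn.Module):
    """Improved Residual Block following the F_1, F_2, F_fuse, F_out equations above."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels // 2, 3, padding=1)  # F_in -> F_1 (C/2 channels)
        self.act = nn.LeakyReLU(0.1, inplace=True)                     # negative slope assumed
        self.conv2 = nn.Conv2d(channels // 2, channels, 3, padding=1)  # F_1 -> F_2 (C channels)
        self.fuse = nn.Conv2d(channels, channels, 1)                   # 1x1 over F_in + F_2
        self.out = nn.Conv2d(channels + channels // 2, channels, 1)    # 1x1 over concat(F_fuse, F_1)

    def forward(self, f_in: torch.Tensor) -> torch.Tensor:
        f1 = self.act(self.conv1(f_in))
        f2 = self.conv2(f1)
        f_fuse = self.fuse(f_in + f2)
        return self.out(torch.cat([f_fuse, f1], dim=1))

class CCA(nn.Module):
    """Contrast-aware Channel Attention: per-channel (std + mean) statistic, then gating."""
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.var(dim=(2, 3), keepdim=True, unbiased=False).sqrt()
        return x * self.gate(std + mean)   # contrast statistic re-weights each channel

# Block-level recursion used in the network body: F_{i+1} = F_i + CCA(IRB(F_i)).
```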
Given the input X=[x_1,...,x_C]∈ℝ^H× W× C, the contrast is computed as follows: z_c = H_GC(x_c) = √(1/HW∑_(i,j)∈ x_c (x_c^i,j - 1/HW∑_(i,j)∈ x_c x_c^i,j)^2) + 1/HW∑_(i,j)∈ x_c x_c^i,j, c=1,...,C. After the i-th (i=1,...,n-1) IRB block and CCA layer, the output feature is added to the input feature F_in^i by a skip connection, and F_in^n+1 is the final feature that is fed into the subsequent convolution layers as follows: F_in^i+1=F_in^i+CCA(IRB(F_in^i)). After extracting n scales of fine-grained feature maps, we concatenate them for multi-scale feature fusion, which is implemented by a sequence of a 1×1 convolution layer, a LeakyReLU activation function, and a 3×3 convolution layer. Finally, we reconstruct the output HDR image using a 3×3 convolution layer. The overall architecture of the proposed IRNet is shown in Figure <ref> (d). To apply the proposed IRNet to the joint SR-ITM task, we further add a Pixel Shuffle operation <cit.> after the final 3×3 convolution layer of our IRNet to make it feasible for super-resolution. The Pixel-Shuffle module contains two 3×3 convolution layers interleaved with a ReLU function. The first convolution layer reduces the channel dimension of the feature map from C to 3s^2, where s is the upsampling factor, while the second convolution layer reconstructs the 3-channel HR-HDR image by upsampling the feature map by a factor of s. §.§ Implementation Details Here, we set the channel dimension of the feature map F_in as C=64. The number of IRB blocks n is set as n=2 for the ITM task and n=5 for the joint SR-ITM task. We use Kaiming initialization <cit.> to initialize the parameters of our IRNet. To optimize these parameters, we adopt the Adam optimizer <cit.> with β_1 = 0.9 and β_2 = 0.999 to minimize an ℓ_1 loss function. The learning rate η is initialized as 5×10^-4 and decays to 1×10^-11 following a cosine annealing schedule with warm restarts <cit.> every 60 epochs. The batch size is set as 16. We train the models of our IRNet for 200 epochs on an NVIDIA V100 GPU with 32GB memory. § EXPERIMENTS In this section, we evaluate the performance of the comparison methods and our IRNet on the ITM and joint SR-ITM tasks. We first introduce the datasets and metrics used. Then we present the comparison results on ITM and joint SR-ITM, respectively. Finally, we conduct a series of ablation experiments to study the components of our IRNet. §.§ Dataset and Metrics Training set. In our experiments, we use the recently published HDRTV1K dataset <cit.> to evaluate the comparison methods. This dataset contains 1,235 pairs of 8-bit SDR and 10-bit HDR images for training and 117 pairs of images for testing. We crop each image in the training set into 30 256×256 image patches. For data augmentation, we randomly flip the cropped patches horizontally or vertically and rotate them by 90°, 180°, or 270°. To perform joint SR-ITM on the HDRTV1K dataset, which was originally developed only for ITM, we downsample the SDR images by a factor of s=4 to obtain the low-resolution (LR) SDR images, similar to <cit.>. The high-resolution (HR) HDR images from the HDRTV1K dataset can still be used as the training targets. Test sets. On the ITM task, we evaluate the comparison methods on three datasets: the test set of HDRTV1K <cit.>, our newly collected ITM-4K dataset (for high-resolution images), and the test set in <cit.>. On the joint SR-ITM task, we evaluate the comparison methods on the test set of HDRTV1K <cit.>.
The details of these test sets are summarized as follows: * HDRTV1K <cit.> contains 117 test SDR images of size 3840×2160×3, with paired HDR images. For joint SR-ITM, we downsample the SDR images by a factor of 4 to generate the LR-SDR test images. * ITM-4K contains 160 pairs of SDR and HDR images of size 3840×2160×3. These images are extracted from 9 HDR10 videos collected from https://4kmedia.org4kmedia.org. The corresponding SDR videos are generated through YouTube similar to <cit.>. We display 12 typical scenes from the 160 test images in Figure <ref>. In Figure <ref>, we also visualize the distribution of the 160 SDR images in our ITM-4K dataset and the 117 SDR test images in HDRTV1K <cit.> using t-SNE <cit.>. One can see that our ITM-4K dataset contains diverse scenes similar yet supplementary to the test set of HDRTV1K <cit.>. * The test set in <cit.>. This dataset contains 28 test images, 12 of which are overlapped with the training set of HDRTV1K <cit.> and the test set of our ITM-4K. Thus, we use the remaining 16 images to evaluate the ITM methods. Note that although this dataset is used for joint SR-ITM task, the test set provides the SDR images of the same sizes with the corresponding HDR images, which can be used to evaluate ITM methods. We do not use this test set for the joint SR-ITM task due to its overlap with the training set of HDRTV1K <cit.>. Metrics. We evaluate the performance of different methods on ITM and joint SR-ITM in terms of PSNR, SSIM <cit.>, LPIPS <cit.>, and HDR-VDP3 <cit.>. PSNR is used to evaluate the closeness of the output image to the corresponding ground truth image. SSIM <cit.> and LPIPS <cit.> evaluate the structural and perceptual similarity, respectively, of the output image to the corresponding ground truth image. HDR-VDP3 <cit.> is a widely used metric to evaluate the quality of HDR images <cit.>, and we use its prediction of “quality” (Q) here. §.§ Results on Inverse Tone Mapping Comparison methods. For our IRNet, we set n=2 and C=64, and denote it as “IRNet-2 (64c)”. We compare it with four ITM methods of HDRNet <cit.>, CSRNet <cit.>, Ada-3DLUT <cit.>, and HDRTVNet <cit.>. The methods of Pixel2Pixel <cit.> and CycleGAN <cit.> are also evaluated as two generative baselines for ITM. As suggested in <cit.>, we also modify the joint SR-ITM methods of Deep SR-ITM <cit.> and JSI-GAN <cit.> for the ITM task, by setting the stride of the first convolution layer as 2 to make them feasible for the ITM task. This manner reduces their computational costs while not degrading the ITM performance. Objective results. The comparison results on the test set of HDRTV1K <cit.> are summarized in Table <ref>. One can see that our IRNet-2 (64c) outperforms the second best method, , AGCM+LE, by 0.59dB, 0.0011, and 0.3 in terms of PSNR, SSIM, and LPIPS, respectively. Note that our IRNet-2 (64c) has 134.73K parameters, fewer than all the other comparison methods except CSRNet (36.49K) and AGCM (35.25K). But these two methods suffer from clear performance gap to our IRNet-2 (64c) in terms of all evaluation metrics. On HDR-VDP3, our method is slightly (0.03) lower than the best method AGCM+LE. But AGCM+LE requires 1410K parameters, 6228.31G FLOPs, and 3114.09G MACs to process a 4K-resolution SDR image at a speed of 691.30ms, much larger than those of our IRNet-2 (64c). Besides, our IRNet-1 (48c), , the IRNet with a single IRB block and C=48, only needs 49.3K parameters to achieve competitive results with the second best method of AGCM+LE. 
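For reference, the PSNR numbers reported in the result tables follow the standard definition; a minimal sketch is given below. It assumes both images are expressed as 10-bit code values (the normalization convention is our assumption; only the peak value changes between conventions).

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, peak: float = 1023.0) -> float:
    """Peak signal-to-noise ratio between a predicted and a ground-truth HDR image."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))
```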
We further evaluate our IRNet-2 and other methods on our ITM-4K dataset and the 16 SDR images in the test set of <cit.>. As shown in Table <ref>, our IRNet-2 (64c) still achieves better results than the other comparison methods on PSNR and HDR-VDP3. In summary, our IRNet achieves efficient ITM performance with a lightweight backbone. Visual quality is an important criterion for evaluating ITM methods, since humans are the final judges of image quality. For the purpose of visualization, the HDR images are generated from HDR10 videos and stored in the 16-bit PNG format. The visual comparisons of different methods on the three test sets are shown in Figure <ref>. We observe that most comparison methods suffer from a certain degree of color bias, especially near the light source. Our IRNet produces results closer to the ground-truth images than the other methods, with more accurate colors and color contrasts. In addition, our IRNet achieves better PSNR and SSIM results than the other comparison methods. All these results demonstrate that our IRNet is very effective on ITM. Running speed, i.e., the actual wall-clock inference time on SDR images, is another measure of model efficiency. We calculate the running time of the comparison methods on 4K-resolution (3840×2160×3) images. As shown in Table <ref>, our IRNet-2 (64c) is faster than the second and third best methods, i.e., AGCM+LE and HDRTVNet, by 292.97ms and 1115.10ms, respectively. Meanwhile, IRNet-1 (48c) reduces the running time of IRNet-2 (64c) from 398.33ms to 166.91ms while maintaining competitive performance. Although faster than our IRNet-2, the methods of HDRNet, CSRNet, Ada-3DLUT, and AGCM suffer from obvious performance degradation on the quantitative metrics. §.§ Results on Joint SR-ITM Comparison methods. Here, we set n=5 and C=64 in our IRNet, and denote it as "IRNet-5 (64c)". We compare it with two SR methods, i.e., EDSR <cit.> and RFDN <cit.>, two cascaded two-stage SR-ITM methods, i.e., "HDRTVNet+RFDN" (sequentially performing ITM by HDRTVNet and SR by RFDN) and "RFDN+HDRTVNet" (vice versa), and two joint SR-ITM methods, i.e., Deep SR-ITM <cit.> and JSI-GAN <cit.>. For the cascaded SR-ITM methods, we choose RFDN <cit.> and HDRTVNet <cit.> since they are representative methods for SR and ITM, respectively. Objective results. The numerical comparison results are summarized in Table <ref>. It can be seen that the two SR methods still achieve reasonable performance in terms of the objective metrics. By first performing SR and then ITM, the cascaded method achieves better results on the image quality metrics, but requires heavy computational costs, i.e., 14783.55G FLOPs and 7391.58G MACs to process an LR-SDR image of size 960×540. Of course, first performing ITM and then SR significantly reduces the computational costs, but the performance on the evaluation metrics degrades considerably as well. Besides, compared with Deep SR-ITM and JSI-GAN, our IRNet-5 (64c) achieves the best PSNR results (0.38dB higher than the second best method "RFDN+HDRTVNet") and comparable results on the other metrics, but with the smallest requirements on parameter amounts, computational costs, and inference time. These observations demonstrate that our IRNet is a lightweight and efficient backbone that achieves competitive performance on the joint SR-ITM task. Visual quality. In Figure <ref>, we qualitatively compare the visual results of different methods on the HDRTV1K test set <cit.> modified for joint SR-ITM (please refer to <ref> A).
One can see that all these methods obtain promising visual results on the presented scenes. The method of “HDRTVNet+RFDN” produces blurry edges around the lighting area. Besides, the images output by “HDRTVNet+RFDN”, “RFDN+HDRTVNet”, Deep SR-ITM <cit.> and JSI-GAN <cit.> suffer from the color shift problem to some extent. By fully exploiting multi-layer features for fine-grained image reconstruction, our IRNet-5 (64c) not only accurately restores the image colors, but also well increases the image details during the SR process. These results validate that, though being lightweight with the fewest parameter amounts and computational costs, the proposed IRNet is very efficient on the joint SR-ITM task. Running speed. The comparison results of running speed on the downsampled images (960×540×3) are summarized in Table <ref>. It can be seen that our IRNet is faster than other comparison methods. Note that when comparing with “RFDN+HDRTVNet”, our IRNet-5 achieves comparable performance with only 4.08% of its running time. These results validate the efficiency of our IRNet on joint SR-ITM. §.§ Ablation Study To study in detail the working mechanism of our IRNet, we present comprehensive ablation experiments of our IRNet on ITM. Specifically, we assess: 1) how to extract the intermediate feature F_1 in our IRB? 2) how does the number of IRB blocks affect our IRNet? 3) how does the channel dimension C in IRB influence our IRNet? 4) how does the CCA layer boost our IRNet? All variants of our IRNet are trained and evaluated on the training set and test set of HDRTV1K <cit.>, respectively. 1) How to extract the intermediate feature F_1 in our IRB? The IRB in our IRNet is modified from the residual block (RB). To validate the effectiveness of our IRB, we first evaluate our IRNet by replacing the IRB blocks by the RB blocks (using LeakyReLU instead of ReLU for fair comparison). The results listed in the first two rows of Table <ref> show that our IRNet with the IRB block achieves much better performance than our IRNet with the original RB block. Besides, we design several variants of our IRB block (“IRB”) and study how they influence our IRNet on ITM. We first remove the intermediate feature F_1 to verify its importance in our IRB, which is denoted as “IRB w/o F_1”. Then we study where to extract the intermediate feature F_1, which can be put before the first convolution layer (take F_1 as F_in), after the activation layer (our IRB), before the addition operation (take F_1 as F_2). The results are summarized in Table <ref>. One can see that our IRNet with the original IRB achieves the best PSNR and SSIM results. By removing the feature F_1, the variant of our IRNet achieves clear drop on PSNR and SSIM, but similar LPIPS and HDR-VDP3 results. If we use the input feature F_in of IRB or the feature after the second convolution layer F_2 as the intermediate feature F_in, the variants of our IRNet suffer from clear drop on PSNR, but with a little difference on SSIM and LPIPS. All these results validate the effectiveness of utilizing the feature after the activation function as the intermediate feature for our IRB to achieve promising ITM performance. 2) How does the number of IRB blocks affect our IRNet? In our IRNet, we use two IRB blocks for ITM and five IRB blocks for joint SR-ITM. Here, we vary the number of IRB blocks to study how it influences our IRNet. The results are listed in Tables <ref> and <ref>, respectively. 
It can be seen that our IRNet achieves promising SSIM, LPIPS, and HDR-VDP3 results with 1∼4 IRB blocks. Our IRNet with two IRB blocks achieves the best PSNR results among all choices. Similarly, our IRNet with five IRB blocks achieves the best PSNR and SSIM results on joint SR-ITM, while that with six IRB blocks achieves the best LPIPS and HDR-VDP3 results. To reduce the parameter amounts, we use two and five IRB blocks in our IRNet for ITM and joint SR-ITM, respectively. 3) How does the channel dimension C in IRB influence our IRNet? To answer this question, we perform experiments on our IRNet with different numbers of channels in the IRB block. The results of our IRNet-1 and IRNet-2 on ITM and those of our IRNet-5 on joint SR-ITM are shown in Table <ref>, Table <ref>, and Table <ref>, respectively. For ITM, our IRNet-1 using one IRB achieves the best PSNR and SSIM results when C=48, with 49.30K parameters, while our IRNet-2 using two IRBs achieves the best PSNR and SSIM results when C=64, with 134.73K parameters. For joint SR-ITM, our IRNet-5 using five IRBs achieves the best PSNR results when C=64, with 468.19K parameters. Our IRNet-5 with C=96 achieves better SSIM, LPIPS, and HDR-VDP3 results, but suffers from a huge growth in parameter amounts. Thus, we set C=48 and C=64 in our IRNet-1 and IRNet-2, respectively, for ITM, and C=64 in our IRNet-5 for joint SR-ITM. 4) How does the CCA layer boost our IRNet? Our IRNet uses one CCA layer after each IRB block to refine the feature maps. We remove the first CCA layer between the two IRB blocks in our IRNet-2. The results on ITM are shown in Table <ref>. One can see that our IRNet-2 without the first CCA layer suffers from a clear performance drop in PSNR. This demonstrates that the CCA layer is important to our IRNet-2 on ITM. § CONCLUSION In this paper, we developed a lightweight and efficient inverse tone mapping (ITM) network. The proposed Improved Residual Network (IRNet) mainly consists of Improved Residual Blocks (IRBs) modified from the popular residual block and Contrast-aware Channel Attention (CCA) layers. The proposed IRB block is able to fuse multi-layer features extracted by different convolution layers for fine-grained ITM. We also collected a new ITM-4K test set containing 160 versatile 4K-resolution SDR images. Experiments on three benchmark datasets demonstrated that our IRNet outperforms the state-of-the-art methods on the ITM task with only ∼0.13M parameters and ∼0.22×10^4G FLOPs per 4K image. Further experiments on the joint SR-ITM task also showed the advantages of our IRNet over the comparison methods in terms of the objective metrics, computational efficiency, and, most importantly, image quality such as color depth restoration.
http://arxiv.org/abs/2307.04812v1
20230710180255
Probing single electrons across 300 mm spin qubit wafers
[ "Samuel Neyens", "Otto Zietz", "Thomas Watson", "Florian Luthi", "Aditi Nethwewala", "Hubert George", "Eric Henry", "Andrew Wagner", "Mohammad Islam", "Ravi Pillarisetty", "Roza Kotlyar", "Kent Millard", "Stefano Pellerano", "Nathan Bishop", "Stephanie Bojarski", "Jeanette Roberts", "James S. Clarke" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall" ]
Intel Corp., 2501 NE Century Blvd, Hillsboro, OR 97124, USA *These authors contributed equally to this work †Corresponding authors: [email protected]; [email protected] Building a fault-tolerant quantum computer will require vast numbers of physical qubits. For qubit technologies based on solid state electronic devices <cit.>, integrating millions of qubits in a single processor will require device fabrication to reach a scale comparable to that of the modern CMOS industry. Equally importantly, the scale of cryogenic device testing must keep pace to enable efficient device screening and to improve statistical metrics like qubit yield and process variation. Spin qubits <cit.> have shown impressive control fidelities <cit.> but have historically been challenged by yield and process variation. In this work, we present a testing process using a cryogenic 300 mm wafer prober <cit.> to collect high-volume data on the performance of industry-manufactured spin qubit devices at 1.6 K. This testing method provides fast feedback to enable optimization of the CMOS-compatible fabrication process, leading to high yield and low process variation. Using this system, we automate measurements of the operating point of spin qubits and probe the transitions of single electrons across full wafers. We analyze the random variation in single-electron operating voltages and find that this fabrication process leads to low levels of disorder at the 300 mm scale. Together these results demonstrate the advances that can be achieved through the application of CMOS industry techniques to the fabrication and measurement of spin qubits. Probing single electrons across 300 mm spin qubit wafers James S. Clarke† August 12, 2023 ======================================================== Silicon quantum dot spin qubits <cit.> have recently demonstrated single- and two-qubit fidelities well above 99% <cit.>, satisfying thresholds for error correction <cit.>. Today, integrated spin qubit arrays have reached sizes of six quantum dots <cit.> with larger quantum dot platforms in 1D <cit.> and 2D <cit.> configurations also being demonstrated. To realize practical applications with spin qubit technology, physical qubit count will need to be increased dramatically <cit.>. This will require fabricating spin qubit devices with a density, volume, and uniformity comparable to those of classical computing chips, which today contain billions of transistors. The spin qubit technology has inherent advantages for scaling due to the qubit size (∼100 nm), as well as, in the case of Si-based devices, a native compatibility with complementary metal-oxide-semiconductor (CMOS) manufacturing infrastructure. It has therefore been posited that manufacturing spin qubit devices with the same infrastructure as classical computing chips can unlock spin qubits' potential for scaling and provide a path to building fault-tolerant quantum computers with the technology. The scaling of classical chips according to Moore's Law has depended on significant advancements in process variation <cit.> as well as density and speed. For spin qubits today, process variation and yield are significant challenges. It has not yet been clearly shown that CMOS manufacturing infrastructure can bring the same improvements to variation and yield of quantum devices as have been made for classical devices. 
Spin qubits have been made with hybrid fabrication flows, where industry-standard techniques are interleaved with research techniques such as e-beam lithography and/or liftoff <cit.>. More fully industry-compatible devices in Si-MOS have also been demonstrated <cit.> but are currently limited by high levels of disorder due to the qubits being formed directly at the Si/SiO_2 interface. Spin qubits hosted in epitaxial group-IV heterostructures offer reduced disorder <cit.> but are less straightforward to integrate in an industry process, due to the 300 mm SiGe epitaxy and reduced thermal budget compared to CMOS. In addition to fabrication challenges, the bottleneck of cryogenic electrical testing presents a barrier to scaling any solid state quantum technology, from spin qubits to superconducting <cit.> and topological <cit.> qubits. To improve process variation and yield in quantum devices, process changes must be combined with statistical measurements. This requires wafer-scale datasets of device performance measured at low temperature. Traditional test systems that cool down one device at a time introduce significant overhead through dicing, die attaching, bonding, and thermal cycling devices. This overhead limits the number of devices per wafer that can be tested to sample wafer-scale trends. One solution is device multiplexing, using either on-chip <cit.> or off-chip <cit.> circuitry to increase the sample capacity of a cryostat. Both approaches come with limitations. With off-chip multiplexing, the packaging time is still linear in the number of devices; with on-chip multiplexing, the area of the wafer being sampled is limited to a single die. By contrast, the standard technique in the semiconductor test industry is full wafer probing. This approach provides maximal flexibility, as all devices on the wafer are simultaneously accessible for electrical measurement. For quantum devices, wafer-scale probing requires additional cooling hardware to reach the required temperatures. For spin qubits based on Si/SiGe quantum dots, accessing the single electron operating regime typically requires temperatures ≲4 K. Only recently has wafer probing at such low temperatures become possible. In this work we present two advancements. First, we develop a 300 mm cryogenic probing process to collect high volume data on spin qubit devices across full wafers. Second, we optimize an industry-compatible process to fabricate spin qubit devices on Si/SiGe heterostructures, combining low process variation with a low disorder host material. These two advancements are mutually reinforcing: the development of full-wafer cryogenic test capabilities enables the optimization of the complex 300 mm fabrication process, and the optimization of the fabrication process improves device reliability to enable significantly deeper automated measurements across wafers. As we will show, together these culminate in the automated probing of single electrons in spin qubit arrays across 300 mm wafers. The spin qubit devices studied here are fabricated in Intel's D1 factory where the company's CMOS logic processes are developed. The host material is a Si/Si_0.7Ge_0.3 heterostructure <cit.> grown on 300 mm Si wafers. Fig. <ref>a shows an optical image of a completed spin qubit wafer. The quantum dots are defined by a planar architecture with two gate layers, one passive layer for screening/depletion and one active layer for controlled accumulation <cit.>. All patterning is done with optical lithography. 
The quantum dot gate patterning is done in a single pass with extreme ultraviolet (EUV) lithography, allowing us to explore gate pitches from 50-100 nm. The fabrication of all device sub-components is based on fundamental industry techniques of deposition, etch, and chemical-mechanical polish <cit.>. As we will demonstrate, this approach leads to high yield and low process variation across the 300 mm wafer. The cryogenic wafer prober (cryo-prober) we use <cit.> was manufactured by Bluefors and AEM Afore and was developed in collaboration with Intel. The cryo-prober can cool 300 mm wafers to a base temperature of 1.0 K at the chuck and an electron temperature of 1.6 ± 0.2 K (see Extended Data Fig. <ref>) in ∼2 hrs. Fig. <ref> shows an overview of the wafer measurement process. After cooldown, thousands of spin qubit arrays and test structures on the wafer are available for measurement. An individual device is aligned to the probe pins using the wafer stage control and a machine vision algorithm. The wafer is brought into contact with the probe pins to electrically connect device pads to voltage sources and current and voltage detectors at room temperature. Measurements are taken with these instruments to extract a variety of metrics. These measurements are repeated on many devices across a wafer to generate wafer-scale statistics. The entire process, from alignment to device measurement, is fully automated and programmable, speeding up device data collection by several orders of magnitude compared to the measurement of singular devices in a cryostat. The mask set used here produces many different device types on each wafer, including fully integrated spin qubit arrays and test structures. These test structures are designed to emulate sub-components of the complete devices and aid in both troubleshooting and targeting specific processes within the fabrication flow. All structures have the same pad design to match the probe pin array, allowing many different structures to be measured in situ. Switching among device types simply requires changes in software or minor changes at the electronics rack. Fig. <ref>a-c shows examples from the range of devices we test with the cryo-prober. These include gate line resistance test structures, Hall bar structures, and spin qubit arrays containing 3 to 12 quantum dots. For each case, the active device pads are highlighted and schematics of the measurement configuration are shown in Fig. <ref>a-c. The performance of all these structures is improved through process optimization, guided by feedback from the cryo-prober. Improvements in gate line resistance across multiple wafers are shown in Fig. <ref>d. The DC gate line resistance, including both gate and interconnect layer, is an important factor in RF signal delivery during qubit control. Here gate line resistance is reduced through optimization of the gate fabrication process with normal-conducting materials and through the introduction of superconducting materials to the stack. Validating the superconducting process in particular is made possible by the 1.6 K base temperature of the cryo-prober. Carrier mobility is another important metric for spin qubits. In the case of Si/SiGe devices, mobility is a direct measure of the quality of the Si quantum well where qubits are defined and provides a target for optimizing the heterostructure growth recipe. 
While a magnetic field is needed to measure mobility most accurately, we can generate a reasonable estimate to compare the quantum well quality of different wafers (see Methods for details). Estimated carrier mobility across multiple wafers is shown in Fig. <ref>e. These measurements show a significant increase in the median mobility with a change in the epitaxial growth process designed to reduce defect density. We also observe a similar mobility distribution before and after isotopic purification of the quantum well to ^28Si, confirming epitaxial quality is maintained with the purified growth precursor. For quantum dot spin qubit arrays, process optimization involves many factors, including gross yield, quantum dot confinement, device stability, and voltage variation. To optimize these factors, we iterate through a wide variety of changes to the fabrication flow, including but not limited to fixed charge in the gate stack, thermal budget, etch impacts, and the integration of a screening gate layer. Through all these changes, a simple but useful metric for wafer quality is the cross-wafer spread in threshold voltage (V_T), the voltage required to turn on and off current with a particular gate. Fig. <ref>f shows V_T distributions for 15 wafers, highlighting three versions of the device stack: two intermediate stacks and the optimized stack. For each stack, ∼4,000 data points are shown. Before the process is optimized, V_T distributions show large spread both within and between wafers. By comparison, the optimized stack shows tight V_T distributions that are consistent from wafer to wafer. Additionally, quantum dot confinement can be characterized qualitatively through collection of "barrier-barrier scans," a 2D sweep of the barrier gate voltages that define each quantum dot. These scans reveal the point of low tunnel coupling to source and drain where Coulomb blockade can occur <cit.>. Fig. <ref>g shows examples of these measurements from each of the three stacks featured in Fig. <ref>f. The intermediate stacks show significant disorder and/or instability in these measurements. By comparison, the optimized stack shows clean confinement with the barrier gates and stable current throughout the length of the scan. After process optimization, we characterize the optimized process flow with measurements on 12-quantum-dot (12QD) devices. Measurements are again fully automated to maximize the speed and consistency of data collection (see Methods). The 12QD design comprises a linear array of twelve quantum dots with four opposing sensor dots isolated by a center screening gate. An in-line SEM image of this device with a schematic of the measurement configuration is shown in Fig. <ref>b. Quantum dots on both the qubit side and the sensor side are defined by three gates each: one plunger gate to control the electron number on the dot, and one barrier gate on each side to tune the tunnel coupling to the neighboring dot or charge reservoir. The array of twelve quantum dots can be operated as qubits in a variety of spin encodings, including single spin qubits <cit.> (in a 12-qubit array) or exchange-only qubits <cit.> (in a 4-qubit array). Depending on the spin qubit encoding, an optional micromagnet layer can be added to the device and the center screening gate can supply microwave electric fields to control the qubits with electron dipole spin resonance.
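As an illustration of how a threshold voltage of the kind plotted in Fig. <ref>f can be reduced to a single number per gate, the sketch below applies a constant-current criterion to a measured turn-on trace. The criterion value and the linear interpolation are our assumptions; the extraction procedure actually used for these data may differ.

```python
import numpy as np

def threshold_voltage(v_gate: np.ndarray, current: np.ndarray,
                      i_crit: float = 1e-10) -> float:
    """Estimate V_T as the gate voltage where the current first crosses a fixed criterion."""
    above = np.nonzero(current >= i_crit)[0]
    if above.size == 0:
        return np.nan              # device did not turn on within this sweep
    k = above[0]
    if k == 0:
        return float(v_gate[0])    # already above the criterion at the start of the sweep
    # linear interpolation between the last point below and the first point above the criterion
    v0, v1 = v_gate[k - 1], v_gate[k]
    i0, i1 = current[k - 1], current[k]
    return float(v0 + (i_crit - i0) * (v1 - v0) / (i1 - i0))
```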
As in a CMOS logic process, improving qubit yield is a necessary part of scaling up quantum processors, as larger systems will depend on an increasing number of qubit components to function. To analyze the yield of this fabrication flow, we test 232 12QD devices on a wafer. These tests cover a map of 58 die across the wafer and include four nominally identical devices per die. We exclude the outer-most ring of die at the edge of the wafer as these are not targeted in all steps of fabrication. We calculate component yield for ohmic contacts, gates, quantum dots, and full 12QD devices. These yield metrics are summarized in Table <ref>. Both ohmic contact and gate yield are 100%. The large number of gates tested and working on this wafer (>10,000) highlights the consistency of the gate fabrication process. Quantum dot yield is 99.8%, which further emphasizes the reliability of electrostatic gate control. Lastly, the full device yield, including the linear array of 12 quantum dots and the 4 charge sensors, is 96%. (See Methods for more details.) Fig. <ref>c shows a summary of gate V_T values collected on 12QD devices across a wafer. The distributions are highly consistent across the 25-gate array. We also observe a systematic shift in median V_T for the two outer-most gates in the array. The symmetry of this effect suggests it is electrostatic in nature, due to the proximity of the reservoir gates. While trends like this might be difficult to confirm through one-off device testing, they are readily observable with full-wafer statistics. The gate V_T distributions also contain information on process variation. The standard deviation of V_T for the 25 plunger and barrier gates ranges from 63 to 89 mV across the wafer. Standard deviation incorporates all causes of cross-wafer variation, including both random effects and systematic cross-wafer phenomena arising from processes like deposition and etch. To estimate the random variation in V_T across gates and devices, we follow a standard CMOS industry method of analyzing matched pair V_T differences <cit.>, calculated between mirror-symmetric pairs of gates. We subtract the mean from each gate-pair distribution to center them at zero and merge them into one distribution. The resulting distribution represents the random variation due to local contributions, factoring out systematic effects such as local geometry or cross-wafer processing phenomena. The resulting matched pair ΔV_T distribution is plotted in Fig. <ref>d. The standard deviation of this distribution, reduced by a factor of √(2), is 58 mV, and represents the random component of V_T variation between gates due to local contributions. The measurements presented so far are all taken in the transport regime, where devices are operated as 1D transistors or many-electron quantum dots. Operating a device as a spin qubit processor requires tuning the electron occupancy to one electron (typically) per quantum dot. Accessing this regime can be challenging even for devices that perform well in the transport regime, since atomistic disorder that may be screened at many-electron occupation is laid bare at single-electron occupation. Confirming that devices can reliably reach this spin qubit operating point is therefore a crucial test of a spin qubit fabrication process. To characterize the single electron regime of these devices, we perform automated charge sensing measurements with each of the twelve quantum dots in the linear array.
In each measurement, one quantum dot is tuned up on the qubit side and one on the sensor side. Changes in electron number are detected by modulating the voltage on an exterior screening gate and using lock-in detection of the charge sensor current at that frequency. A typical measurement is shown in Fig. <ref>a. In this 2D sweep, the horizontal axis is plunger voltage, and the vertical axis is the voltage of both barrier gates <cit.>. The sweep range is chosen to take each quantum dot from zero-electron to several-electron occupation along the plunger axis and from low tunnel rate (≪1 kHz) to high tunnel rate (≫1 GHz) along the barrier axis. Transition lines disappear at the bottom of the scan window where tunnel rate falls below the lock-in frequency (∼1 kHz) and at the top of the scan window where the lines become broadened by tunnel coupling energy. Charge sensing scans are taken for all 12 quantum dot sites in the linear array, across 58 die on the wafer, for a total of 696 quantum dot sites. The “success” of each charge sensing scan depends on multiple factors: the relevant sensor dot must yield, the sensing signal must be high relative to noise, and the charge sensor must remain stable throughout the length of the scan. Over the 696 scans taken on a wafer with 50 nm SiGe barrier, we find a 91% success rate in observing clear transitions (as gauged by eye). This success rate represents highly consistent device performance and is primarily limited by the measurement algorithm. We expect the charge sensing success rate can be improved by reducing electron temperature to reduce the low-frequency charge noise <cit.> that gives rise to charge sensor shifts. Improvements could also come from incorporating active feedback into the measurement loop to analyze data quality <cit.> and re-take measurements after charge sensor shifts occur. For further analysis on the 91% of successful scans on this wafer, we apply a numerical algorithm to detect transition curves in the 2D data and extract the coordinates for the first electron (1e) transition (see Methods). We define the “1e voltage” as the plunger voltage position of the 1e transition at the midpoint of the barrier voltage axis, indicated by the red star in Fig. <ref>a. We use the distance between the transition voltage and the left edge of the scan window to gain high confidence that these transitions represent the first electron in the quantum dot (see Methods). A summary of plunger and barrier voltages at the 1e transition is shown in Fig. <ref>b. These data represent the voltages needed to set the 1e charge state in individual sites of 12QD arrays, sampled across a 300 mm wafer. They therefore can reveal how process variation translates to variation in the spin qubit operating point. Improving variation in spin qubit operating voltage has multiple benefits. Lower 1e voltage variation makes for easier automation, as operating voltages are more predictable. Also, many proposals for large-scale spin qubit processors rely on sharing voltages among spin qubit lines to alleviate the interconnect bottleneck <cit.>. Such voltage-sharing schemes will require extremely low levels of variation in 1e voltages across large arrays. In the same way threshold voltage variation must be reduced in a transistor process, variation at the single electron regime must be improved to enable the grandest visions for spin qubit scaling. 
To analyze the variation in 1e transition voltage data, we repeat the same matched pair voltage difference analysis as above, taking differences between 1e voltages for mirrored pairs of plunger gates. The resulting distributions of voltage differences are shown in Fig. <ref>c-d for two wafers. The random variation in 1e voltage extracted from wafers with a 30 nm and 50 nm SiGe barrier are 59 mV and 60 mV, respectively. Both of these values closely agree with the random variation in gate V_T, meaning the random variation of a transistor-like metric (gate V_T) is matched by the random variation of a quantum metric (1e voltage). This implies that these devices are not subject to significantly increased disorder at the single electron regime compared with the many electron regime. Also, while the 1e voltage variation is nearly the same between the two wafers, the variation in chemical potential is better reflected by the ratios between 1e voltage variation and 1e-2e addition voltage (Fig. <ref>e-f). These ratios are 1.0 ± 0.1 and 0.76 ± 0.08 for the 30 nm and 50 nm barrier wafer, respectively. The observation that the wafer with a deeper quantum well has a reduced ratio of this kind suggests that the 1e voltage variation is dominated by sources in the gate stack above the heterostructure. These sources could include charge defects (e.g., interface traps or fixed charge in the oxide), gate line edge roughness, gate work function variation, oxide thickness variation, or some combination. These possible sources of variation all have analogies in the transistor field and could be improved by borrowing similar strategies; for example, the impact of oxide charge defects could be reduced by decreasing the oxide thickness between the heterostructure and the gate <cit.>.

The charge sensing data can also be used to benchmark the compatibility of these devices with voltage-sharing protocols <cit.>. One basic requirement for such schemes could be that all quantum dots in an array be tuned to the same electron number using the same voltage. From the 1e and 2e voltages obtained here, we estimate that a median of 63% of quantum dots per 12QD device could be set to n=1e with a common voltage. (See Methods for more detail and Extended Data Fig. <ref>.) While this result is still far from the level of uniformity needed to tune an ensemble of spin qubits to their operating point with shared voltages, the 1e voltage variation results in Fig. <ref> highlight the device metrics that must be further improved in order for voltage sharing protocols to be feasible in large spin qubit processors.

To further assess variation at the single electron regime, we calculate the standard deviation of the difference between plunger and barrier voltages at the cutoff point of the 1e transition line <cit.>. Fig. <ref>g-h shows the distribution of this voltage difference across all gates and all devices tested on the wafer. We again compare datasets from two wafers, with a 30 nm and 50 nm SiGe barrier, respectively. The distributions for the two wafers have different means due to their different geometry, but their standard deviations are in close agreement at 0.12 (0.13) V for the 30 (50) nm barrier wafer. This standard deviation agrees with the values reported in Ref. <cit.> for six-dot devices with high exchange qubit fidelity <cit.>, confirming that the devices studied here can achieve low levels of disorder at the single electron regime while being fabricated in high volume with a 300 mm process.
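For concreteness, the ratio quoted above can be formed from the same charge-sensing summaries; the sketch below reuses the matched-pair estimator and adds a simple bootstrap over devices for the error bar. The estimator choice, array layout, and bootstrap are our assumptions, not the authors' statistical procedure.

import numpy as np

def sigma_1e(v1e, pairs):
    """Matched-pair random variation of the 1e voltage (same estimator as for gate V_T)."""
    deltas = [v1e[:, i] - v1e[:, j] for i, j in pairs]
    merged = np.concatenate([d - np.nanmean(d) for d in deltas])
    return np.nanstd(merged) / np.sqrt(2)

def variation_ratio(v1e, v2e, pairs, n_boot=500, seed=0):
    """Ratio of 1e-voltage variation to the typical 1e-2e addition voltage,
    with a bootstrap-over-devices uncertainty estimate."""
    rng = np.random.default_rng(seed)
    ratio = sigma_1e(v1e, pairs) / np.nanmedian(v2e - v1e)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(v1e), len(v1e))   # resample devices with replacement
        boot.append(sigma_1e(v1e[idx], pairs) / np.nanmedian(v2e[idx] - v1e[idx]))
    return ratio, np.std(boot)

pairs = [(k, 11 - k) for k in range(6)]             # mirrored plunger pairs in a 12-dot array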
In conclusion, these results demonstrate a spin qubit fabrication process based on a low disorder host material (Si/SiGe) and all CMOS industry-compatible techniques to achieve low process variation. We present a novel measurement system, a 300 mm cryo-prober at 1.6 K, as a solution to the bottleneck of low-temperature quantum device testing. We use this system to characterize a variety of device types including test structures and fully integrated spin qubit arrays. We demonstrate charge sensing at the single electron regime across full wafers, directly probing the operating point of spin qubits and extracting statistics to characterize process variation at this regime. We observe that, for these devices, variation at the single-electron regime closely agrees with standard transistor variation metrics, pointing the way to strategies that could reduce variation in spin qubit control parameters. While these measurements do not yet directly characterize qubit performance, they test the basic electrostatics framework on which any spin qubit encoding relies. This baseline level of performance is a necessary but not sufficient condition for successful spin qubit implementation, yet achieving it in any device has been historically challenging across the spin qubit field. By leveraging the tools and techniques of the CMOS industry, we achieve a major leap forward in the yield and variation of spin qubit electrostatics, produced at industry scale and characterized with a high-volume measurement system to match. These results set a new standard for what can be achieved with spin qubit devices today and pave the way for significantly larger and more complex spin qubit arrays of the future.

§ METHODS

§.§ Electron temperature measurement

Electron temperature in the cryo-prober is measured from a charge stability diagram, using a transition line that is tuned to avoid tunnel rate broadening. This stability diagram is shown in Extended Data Fig. <ref>a. A 1D measurement of the transition line is then taken to extract the width of the transition line. The lock-in data is integrated with respect to swept voltage, and a linear background is subtracted. The resulting data is then fit to the model for a temperature-broadened charge sensor transition <cit.> to extract an electron temperature of 1.6 ± 0.2 K. The processed data and theoretical fit are shown in Fig. <ref>b. The uncertainty is estimated from the uncertainty of the lever arm (0.08 ± 0.01), which is measured from bias triangles.

§.§ Carrier mobility estimation

Carrier mobility is estimated from measurements of channel resistance in 4-probe Hall bar devices at zero magnetic field. The mobility calculation depends on knowing the carrier density, so we approximate a fixed carrier density (4×10^11 cm^-2) by measuring the device V_T and setting the gate voltage to V_T+Δ V where Δ V = eΔ n/c_g, e is the electron charge, Δ n is the approximated carrier density, and c_g is the estimated gate capacitance per area based on the gate stack. While the gate capacitance estimate is inexact, all the mobility estimations shown in Fig. <ref> are from wafers with nominally the same gate stack. Additional uncertainty comes from the unknown percolation density (n_p) at which the device first shows current. This leads to a systematic over-estimate of mobility by a factor of (1+n_p/Δ n), which we estimate to be at most ∼30%.
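The zero-field estimate just described amounts to a one-line calculation once the sheet resistance is known. The sketch below is a minimal illustration under the stated assumptions (the density is taken equal to the target Δn, and the Hall-bar segment geometry, variable names, and example numbers are ours, not values from the paper).

import numpy as np

E = 1.602e-19          # electron charge, C

def gate_voltage_for_density(v_t, delta_n_cm2, c_g_F_per_cm2):
    """Gate voltage that approximately sets a carrier density delta_n above threshold."""
    return v_t + E * delta_n_cm2 / c_g_F_per_cm2

def estimate_mobility(r_4probe_ohm, length_um, width_um, delta_n_cm2):
    """Zero-field mobility estimate (cm^2/Vs) from 4-probe channel resistance,
    assuming the density equals delta_n (ignoring the percolation density n_p,
    which biases the estimate high by roughly (1 + n_p/delta_n))."""
    r_sheet = r_4probe_ohm * width_um / length_um   # ohms per square
    return 1.0 / (r_sheet * E * delta_n_cm2)

# hypothetical example: 2 kOhm across a 100 um x 20 um segment at 4e11 cm^-2
mu = estimate_mobility(2e3, 100.0, 20.0, 4e11)      # ~4e4 cm^2/Vs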
While this estimate is significantly less accurate than measurements made with magnetic field control, it is nevertheless a useful method for observing wafer-scale trends and comparing wafers with different heterostructure details and gate stack parameters held fixed. We note that all wafers contain a fraction of devices (10-20%) with significantly reduced mobility, as can be seen in Fig. <ref>e. This statistical phenomenon is confirmed with conventional Hall measurements and is not an artifact of the measurement method. Since a similar phenomenon is not observed in the quantum dot devices (manifesting in, e.g., anomalously high channel resistance), we attribute this to a discrete defect mode of the larger-area Hall bar devices.

§.§ Yield analysis

The component yield analysis presented in Table <ref> uses the following definitions. Ohmic contact yield is defined as the fraction of contacts through which current in the Si quantum well can be linearly controlled. Gate yield is defined as the fraction of gates that can be used to turn on and pinch off their respective current channel. Quantum dot yield is defined as the fraction of quantum dot sites where a viable quantum dot tune-up point can be identified from barrier-barrier scans. Lastly, full device yield is defined as the fraction of devices where all sub-components (all ohmic contacts, gates, and quantum dots) yield. Out of 3,712 quantum dot sites tested and summarized in Table <ref>, the nine that fail to tune up are also observed to have anomalously low pinch-off voltage (<0.2 V) on at least one of the three gates defining that quantum dot. These nine sites are also confined to the charge sensor side, where gate geometry is most complex. This indicates that this small number of non-yielding quantum dots is due to the processing of the 0.3% most marginal gates as opposed to, e.g., quantum well defects. We attribute these edge cases on the charge sensor side to a known failure mode in the gate lithography process. We note that the paths to improving the robustness of this process to fix these extreme outlier cases are well understood.

§.§ Automated device measurements

After a device is contacted with the probes, each current channel in the device (including the qubit channel and the four charge sensor channels) is turned on with all gates over that channel at the same voltage. Once each channel's V_T is recorded, the gates of each channel are set to a fixed voltage relative to the channel V_T. The qubit channel is then isolated from the sensor channels by reducing the center screening gate voltage until the cross-conductance between channels drops to zero (within the noise floor). The voltage of individual gates is then fine-tuned to set a roughly uniform carrier density across the channel. This is done through an iterative process where the transconductance of each gate is sampled and the voltage on that gate is increased (decreased) if the transconductance is above (below) a threshold value. This effectively sets the voltages of all gates so they are at roughly the same point on their pinch-off curves relative to their V_T. The V_T data for all gates are extracted from pinch-off curves taken with a source-drain bias of 1 mV. V_T is identified as the voltage where current crosses 1 nA. The voltages needed to tune up a quantum dot at each site are identified by setting each plunger gate to a fixed voltage relative to its V_T and varying the barrier gate voltages about their individual V_T values in a 2D sweep (a barrier-barrier scan).
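The V_T extraction described above (the 1 nA crossing of a pinch-off curve) can be sketched as follows; the linear-interpolation choice and the assumption of a low-to-high voltage sweep are ours.

import numpy as np

def extract_vt(gate_v, current_a, threshold_a=1e-9):
    """Return the gate voltage where current first crosses the threshold (default 1 nA).
    gate_v and current_a are 1D arrays from a pinch-off sweep, swept low to high."""
    above = current_a >= threshold_a
    if not above.any() or above.all():
        return np.nan                       # channel never turns on, or never pinches off
    if above[0]:
        return gate_v[0]                    # already above threshold at the sweep start
    k = np.argmax(above)                    # first index at or above threshold
    v0, v1 = gate_v[k - 1], gate_v[k]       # interpolate between the bracketing points
    i0, i1 = current_a[k - 1], current_a[k]
    return v0 + (threshold_a - i0) * (v1 - v0) / (i1 - i0)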
A phenomenological 2D function is fitted to the barrier-barrier scan data to extract the corner point, which combined with the plunger voltage is used to define the “tune-up” parameters for the quantum dot site.

The charge sensing measurements shown in Fig. <ref> are taken with one quantum dot tuned up on the qubit side. The closest charge sensor to that quantum dot is also tuned up, and neighboring charge sensor dots are pinched off with their respective plunger gates. To generate the charge sensing measurement, the plunger voltage is swept at a fixed range relative to its V_T, and the two barrier gate voltages are stepped simultaneously. The barrier gates are stepped over the same voltage interval but with separate voltage values. The step values of each barrier gate are defined relative to that gate's individual “tune-up” voltage extracted from the barrier-barrier scan. In the example shown in Fig. <ref>a, the barrier voltage range displayed on the vertical axis is the voltage of the left barrier gate. Charge sensing measurements can also be taken on double quantum dots. The three barrier gates that define each double quantum dot are first set to a fixed voltage relative to their individual V_T values. The plunger gate voltages for each dot are then swept to generate a 2D charge stability diagram. While these scans are not analyzed quantitatively in this work, a demonstration of this type of measurement can be seen in Extended Data Fig. <ref>. We note that the overall device measurement rate is predominantly set by the speed of measurement hardware. Significant gains can therefore be made by implementing faster hardware (e.g., arbitrary waveform generators) and higher-bandwidth amplification (e.g., cryogenic amplifiers <cit.>) without any further changes to the tune-up procedure.

§.§ Charge sensing transition curve analysis

Transition line coordinates are extracted from charge sensing measurements using the following procedure. The raw lock-in amplifier data is first filtered with a first-order Gaussian filter to remove slowly-varying features. A maximum filter is then used to identify features of high signal in the pre-filtered data. An algorithm is then used to convert the set of “maximum points” into a set of “curve segments.” Curve segments are found by searching for groupings of maximum points that satisfy the following criteria: each point in the curve segment must be the closest maximum point to its nearest neighbor; the slope between each pair of neighboring points must be within a target window; and the set of points must span a minimum specified “length” in the vertical direction. Overlapping curve segments are then merged into transition curves. Transition curves are then further filtered to remove outlier curves and ordered by their coordinate means. The first and second transition curve generated from this algorithm are identified with the 1-electron and 2-electron transition, respectively. An example of the entire sequence is shown in Extended Data Fig. <ref>. The “1e (2e) voltage” is defined as the plunger voltage at which the 1e (2e) transition line crosses the midpoint of the barrier voltage axis. The 1e-2e addition voltage is calculated as the difference between these voltages. We note that in some cases (15%), the 1e (2e) transition in the scan window does not cross the midpoint of the barrier voltage axis, in which case no 1e (2e) transition voltage is extracted from that scan.
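A heavily simplified sketch of the first stages of this transition-detection procedure is given below, using SciPy image filters. The filter sizes, signal threshold, and omission of the curve-segment grouping step are our simplifications, not the authors' parameters.

import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_transition_points(lockin_image, sigma=5, peak_size=5):
    """Return (row, col) coordinates of candidate transition points in a 2D
    charge-sensing scan (rows = barrier voltage steps, cols = plunger voltage)."""
    # first-order Gaussian filter along the plunger axis suppresses slowly varying background
    filtered = np.abs(gaussian_filter(lockin_image, sigma=sigma, order=(0, 1)))
    # local maxima of the filtered signal above a noise threshold are candidate points;
    # grouping these points into curve segments (slope/length criteria) is omitted here
    local_max = maximum_filter(filtered, size=peak_size) == filtered
    strong = filtered > 3 * np.std(filtered)
    return np.argwhere(local_max & strong)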
§.§ 1e transition validation

To validate that the 1e voltages we report are actually the first electron in the quantum dot, we extract the margin between the 1e transition voltage and the left edge of the scan window and compare it to the distribution of addition voltages between the 1e and 2e transitions. To have high confidence that the first transition represents the first electron, we require this “scan margin” be >2 times the typical addition voltage. For the 50 nm SiGe barrier wafer characterized in Fig. <ref>b, 98% of 1e voltage data points have a scan margin value above this threshold, giving us high confidence that the 1e transition data summarized in Fig. <ref>b is actually single-electron data. See Extended Data Fig. <ref> for histograms of the 1e-2e addition voltage and 1e scan margin data from this wafer.

§.§ Voltage sharing analysis

To estimate the proportion of quantum dots in each 12QD device that could be set to single-electron occupation with shared voltages, we analyze the 1e voltage and 2e voltage data from the 50 nm SiGe barrier wafer and search for a common voltage that best divides the 1e and 2e voltage distributions for each 12QD device. In this scheme, any 1e voltage value above the common voltage corresponds to n=0e, and any 2e voltage value below the common voltage corresponds to n≥2e. The remaining instances correspond to quantum dots tuned to n=1e. For each device, the optimal common voltage is found by minimizing the number of instances where n=0e or n≥2e. Extended Data Fig. <ref> shows a histogram of 1e and 2e voltage data points shifted relative to their assigned device-level common voltage. A scatter plot also shows the proportion of quantum dots in each category of electron number for all 12QD devices. We note that the data used in this analysis comes from measurements of quantum dots tuned one at a time and that this method does not take into account the individualized setpoints of other gates in the array during measurements. Nevertheless, we believe it gives a reasonable estimate of the success rate of using shared voltages across a device to set a common charge state.
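The device-level search for a common voltage reduces to a small 1D optimization; a minimal sketch is given below (the candidate grid, variable names, and handling of missing values are ours).

import numpy as np

def best_common_voltage(v1e, v2e):
    """v1e, v2e: 1e and 2e plunger voltages for the 12 dots of one device (NaN if missing).
    Returns (optimal common voltage, fraction of dots left at n=1e)."""
    v1e, v2e = np.asarray(v1e, float), np.asarray(v2e, float)
    ok = ~np.isnan(v1e) & ~np.isnan(v2e)
    if ok.sum() == 0:
        return np.nan, np.nan
    candidates = np.linspace(np.nanmin(v1e), np.nanmax(v2e), 1000)
    # a dot is at n=0 if its 1e voltage is above Vc, and at n>=2 if its 2e voltage is below Vc
    violations = [np.sum(v1e[ok] > vc) + np.sum(v2e[ok] < vc) for vc in candidates]
    vc = candidates[int(np.argmin(violations))]
    return vc, 1.0 - min(violations) / ok.sum()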
§ DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

§ REFERENCES

[1] F. A. Zwanenburg et al., Silicon quantum electronics, Rev. Mod. Phys. 85, 961 (2013).
[2] M. H. Devoret and J. M. Martinis, Implementing qubits with superconducting integrated circuits, Quantum Information Processing 3, 163 (2004).
[3] S. Das Sarma, M. Freedman, and C. Nayak, Majorana zero modes and topological quantum computation, npj Quantum Inf. 1, 15001 (2015).
[4] X. Zhang, H.-O. Li, G. Cao, M. Xiao, G.-C. Guo, and G.-P. Guo, Semiconductor quantum computation, National Science Review 6, 32 (2018).
[5] G. Burkard, T. D. Ladd, A. Pan, J. M. Nichol, and J. R. Petta, Semiconductor spin qubits, Rev. Mod. Phys. 95, 025003 (2023).
[6] X. Xue et al., Quantum logic with spin qubits crossing the surface code threshold, Nature 601, 343 (2022).
[7] A. Noiri et al., Fast universal quantum gate above the fault-tolerance threshold in silicon, Nature 601, 338 (2022).
[8] A. R. Mills et al., Two-qubit silicon quantum processor with operation fidelity exceeding 99%, Sci. Adv. 8 (2022).
[9] A. J. Weinstein et al., Universal logic with encoded spin qubits in silicon, Nature 615, 817 (2023).
[10] R. Pillarisetty et al., High volume electrical characterization of semiconductor qubits, in 2019 IEEE International Electron Devices Meeting (IEDM) (2019), pp. 31.5.1–31.5.4.
[11] B. M. Terhal, Quantum error correction for quantum memories, Rev. Mod. Phys. 87, 307 (2015).
[12] S. G. J. Philips et al., Universal control of a six-qubit quantum processor in silicon, Nature 609, 919 (2022).
[13] A. R. Mills, D. M. Zajac, M. J. Gullans, F. J. Schupp, T. M. Hazard, and J. R. Petta, Shuttling a single charge across a one-dimensional array of silicon quantum dots, Nat. Commun. 10, 1063 (2019).
[14] C. Volk et al., Loading a quantum-dot based “qubyte” register, npj Quantum Inf. 5, 29 (2019).
[15] P.-A. Mortemousque et al., Coherent control of individual electron spins in a two-dimensional quantum dot array, Nat. Nanotechnol. 16, 296 (2021).
[16] F. Borsoi et al., Shared control of a 16 semiconductor quantum dot crossbar array, arXiv:2209.06609 (2022).
[17] D. Wecker, B. Bauer, B. K. Clark, M. B. Hastings, and M. Troyer, Gate-count estimates for performing quantum chemistry on small quantum computers, Phys. Rev. A 90, 022305 (2014).
[18] C. Gidney and M. Ekerå, How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits, Quantum 5, 433 (2021).
[19] K. J. Kuhn et al., Process technology variation, IEEE Transactions on Electron Devices 58, 2197 (2011).
[20] R. Li et al., A flexible 300 mm integrated Si MOS platform for electron- and hole-spin qubits exploration, in 2020 IEEE International Electron Devices Meeting (IEDM) (2020), pp. 38.3.1–38.3.4.
[21] W. Ha et al., A flexible design platform for Si/SiGe exchange-only qubits with low disorder, Nano Lett. 22, 1443 (2022).
[22] F. Ansaloni et al., Single-electron operations in a foundry-fabricated array of quantum dots, Nat. Commun. 11, 6399 (2020).
[23] A. M. J. Zwerver et al., Qubits made by advanced semiconductor manufacturing, Nat. Electron. 5, 184 (2022).
[24] P. W. Deelman, L. F. Edge, and C. A. Jackson, Metamorphic materials for quantum computing, MRS Bulletin 41, 224 (2016).
[25] G. Scappucci, P. J. Taylor, J. R. Williams, T. Ginley, and S. Law, Crystalline materials for quantum computing: Semiconductor heterostructures and topological insulators exemplars, MRS Bulletin 46, 596 (2021).
[26] R. Kotlyar et al., Mitigating impact of defects on performance with classical device engineering of scaled Si/SiGe qubit arrays, in 2022 International Electron Devices Meeting (IEDM) (2022), pp. 8.4.1–8.4.4.
[27] D. R. Ward, D. E. Savage, M. G. Lagally, S. N. Coppersmith, and M. A. Eriksson, Integration of on-chip field-effect transistor switches with dopantless Si/SiGe quantum dots for high-throughput testing, Appl. Phys. Lett. 102, 213107 (2013).
[28] P. L. Bavdaz et al., A quantum dot crossbar with sublinear scaling of interconnects at cryogenic temperature, npj Quantum Inf. 8, 86 (2022).
[29] B. Paquelet Wuetz et al., Multiplexed quantum transport using commercial off-the-shelf CMOS at sub-kelvin temperatures, npj Quantum Inf. 6, 43 (2020).
[30] F. Schäffler, High-mobility Si and Ge structures, Semicond. Sci. Tech. 12, 1515 (1997).
[31] D. M. Zajac, T. M. Hazard, X. Mi, K. Wang, and J. R. Petta, A reconfigurable gate architecture for Si/SiGe quantum dots, Appl. Phys. Lett. 106, 223507 (2015).
[32] C. Hu, Modern Semiconductor Devices for Integrated Circuits (Pearson, 2009).
[33] L. P. Kouwenhoven, C. M. Marcus, P. L. McEuen, S. Tarucha, R. M. Westervelt, and N. S. Wingreen, Electron transport in quantum dots, in Mesoscopic Electron Transport, NATO Science Series E, Vol. 345, edited by L. L. Sohn, L. Kouwenhoven, and G. Schön (Springer Dordrecht, 1997), pp. 105–214.
[34] D. Loss and D. P. DiVincenzo, Quantum computation with quantum dots, Phys. Rev. A 57, 120 (1998).
[35] D. P. DiVincenzo, D. Bacon, J. Kempe, G. Burkard, and K. B. Whaley, Universal quantum computation with the exchange interaction, Nature 408, 339 (2000).
[36] M. G. Borselli et al., Undoped accumulation-mode Si/SiGe quantum dots, Nanotechnology 26, 375202 (2015).
[37] E. J. Connors, J. J. Nelson, H. Qiao, L. F. Edge, and J. M. Nichol, Low-frequency charge noise in Si/SiGe quantum dots, Phys. Rev. B 100, 165305 (2020).
[38] J. Ziegler et al., Toward robust autotuning of noisy quantum dot devices, Phys. Rev. Applied 17, 024069 (2022).
[39] M. Veldhorst, H. G. J. Eenink, C. H. Yang, and A. S. Dzurak, Silicon CMOS architecture for a spin-based quantum computer, Nat. Commun. 8, 1766 (2017).
[40] R. Li et al., A crossbar network for silicon quantum dot qubits, Sci. Adv. 4 (2018).
[41] J. M. Boter et al., Spiderweb array: A sparse spin-qubit array, Phys. Rev. Applied 18, 024053 (2022).
[42] L. DiCarlo et al., Differential charge sensing and charge delocalization in a tunable double quantum dot, Phys. Rev. Lett. 92, 226801 (2004).
[43] I. T. Vink, T. Nooitgedagt, R. N. Schouten, L. M. K. Vandersypen, and W. Wegscheider, Cryogenic amplifier for fast real-time detection of single-electron tunneling, Appl. Phys. Lett. 91, 123512 (2007).

§ AUTHOR CONTRIBUTIONS

S. N., O. Z., and T. W. designed the automated measurements. S. N., O. Z., and A. N. performed the measurements. F. L. contributed to the measurement software. H. G., E. H., A. W., and M. I. fabricated the devices. S. N. and O. Z. analyzed the data. R. P., R. K., and S. P. contributed to the data analysis. O. Z., R. P., and K. M. enabled the cryo-prober installation. N. B., S. B., J. R., and J. S. C. supervised the project. S. N. and O. Z. wrote the manuscript with input from all authors.

§ EXTENDED DATA
http://arxiv.org/abs/2307.04619v1
20230710150729
Learning Fine Pinch-Grasp Skills using Tactile Sensing from Real Demonstration Data
[ "Xiaofeng Mao", "Yucheng Xu", "Ruoshi Wen", "Mohammadreza Kasaei", "Wanming Yu", "Efi Psomopoulou", "Nathan F. Lepora", "Zhibin Li" ]
cs.RO
[ "cs.RO" ]
This work develops a data-efficient learning from demonstration framework which exploits the use of rich tactile sensing and achieves fine dexterous bimanual manipulation. Specifically, we formulated a convolutional autoencoder network that can effectively extract and encode high-dimensional tactile information. Further, we developed a behaviour cloning network that can learn human-like sensorimotor skills demonstrated directly on the robot hardware in the task space by fusing both proprioceptive and tactile feedback. Our comparison study with the baseline method revealed the effectiveness of the contact information, which enabled successful extraction and replication of the demonstrated motor skills. Extensive experiments on real dual-arm robots demonstrated the robustness and effectiveness of the fine pinch grasp policy directly learned from one-shot demonstration, including grasping of the same object with different initial poses, generalizing to ten unseen new objects, robust and firm grasping against external pushes, as well as contact-aware and reactive re-grasping in case of dropping objects under very large perturbations. Moreover, the saliency map method is employed to describe the weight distribution across various modalities during pinch grasping. The video is available online at: https://youtu.be/4Pg29bUBKqs.

§ INTRODUCTION

Dexterous robot manipulation has the capability to work across a range of tasks and environments. However, enabling dexterous manipulation in robots, particularly in a manner that is comparable to human capabilities, remains an unsolved challenge. Currently, numerous studies utilize visual feedback to enable robots to perform dexterous manipulation tasks such as box flipping <cit.>, object rotating <cit.>, and door opening <cit.>. However, these visual-based methods have limitations, as the visual data could be influenced by occlusion and lighting variations. Consequently, it is very important to investigate how to incorporate tactile information for the enhancement of dexterous manipulation in robotic systems. Tactile sensing plays a vital role in capturing detailed information about contact surfaces, including the distribution of contact forces and their variations during force-sensitive tasks, which is indispensable for achieving dexterous handling of lightweight objects with irregular surfaces, shapes, and deformable properties. Especially during close-range interaction between hands and objects, visual occlusion restricts the ability to perceive detailed information of the contact surfaces, and tactile sensors become valuable for providing essential information about these occluded surfaces. Integrating tactile sensing into motor learning of dexterous grasping can enhance the rich and precise sensing of surface contacts and interaction dynamics, provide irreplaceable and direct feedback when manipulating objects <cit.>, and enable more robust and precise manipulation tasks. It is crucial to explore how robots can leverage this information to achieve human-comparable dexterous manipulation abilities. The canonical hardware for robot manipulation incorporates Force/Torque sensors that can only measure the 6-degree-of-freedom (DoF) wrench at each end-effector.
Soft optical-based tactile sensors can provide abundant and discriminative contact information by quantifying the deformation of the soft materials using a camera system <cit.>. Currently, several soft tactile sensors have been developed, including TacTip <cit.>, DigiTac <cit.>, Gelsight <cit.>, and DIGIT <cit.>. However, how to use high-dimensional data from tactile sensors for robot dexterous grasping remains an open research question. The complex and non-trivial deformation of soft tactile sensors during dexterous grasping tasks presents a considerable challenge. Humans can deal with soft contacts, quickly adapt to new tasks, and produce skills of dual-arm coordination for manipulating objects. Learning from Demonstration (LfD) offers an intuitive, efficient method for acquiring human skills through synchronized tactile information, encoding rich state-action mapping and enabling robots to learn human sensorimotor skills while responding to tactile and proprioceptive feedback. In addition, the common issue of accumulating compounding errors during dexterous manipulation task execution in LfD can be mitigated by utilizing rich tactile information as feedback. The challenge involves effectively extracting features from sensory data and integrating them with proprioceptive states for sample-efficient learning of human dexterous manipulation behaviors. This work is motivated to develop an effective LfD framework that leverages rich tactile sensing to learn dexterous sensorimotor skills. Our approach focuses on achieving one-shot LfD of fine pinch grasp, using high-dimensional contact information from tactile sensors and a limited amount of real data. The contributions are summarized as follows:

* A novel feature extraction approach to encapsulate essential features from tactile sensing data, which are then fused with robot proprioceptive states and tactile image difference, thus resulting in a low-dimensional latent space representation that significantly enhances the learning process of fine grasping skills.

* An effective LfD framework that integrates tactile sensory input and robot proprioceptive state, which enables the robot to efficiently acquire feedback-driven dexterous grasping skills through a single demonstration.

The proposed framework is validated by pinch grasp tasks on a dual-arm setup equipped with TacTip sensors <cit.> and has achieved the successful retrieval of a small, cylindrical object on a table using one-shot demonstration. Our experimental results show that the policy, learned from one-shot human demonstration data, can achieve stable grasping of unseen objects with different diameters, masses, and materials. Furthermore, the robustness of the framework against external disturbances has been validated, with the learned policy demonstrating stable grasping under external disturbance, as well as the capacity to autonomously execute successful re-grasping in case of a large external force that pushes off the object. We applied saliency map analysis <cit.> and revealed how the learned policy uses different sensory modalities in a variable way throughout the dexterous pinch grasp process, and demonstrated the capability and effectiveness of our proposed network to efficiently learn features of the high-dimensional data and autonomously segment the long-horizon data into several distinct fine skills for execution.
§ RELATED WORKS

During robotic grasping, tactile sensors can provide rich contact information which is not easily accessed via visual information, thereby playing a crucial role in enhancing dexterous grasping capabilities <cit.>. Prior research on robotic pinch grasp has primarily focused on either force analysis and planning to achieve force closure <cit.> or the development of specialized grippers <cit.>. Soft deformable tactile sensors have the ability to perform contact-rich interactions with the environment and manipulate delicate objects safely <cit.>. With optical-based tactile sensors, the orientation of the contact surface can be inferred from the tactile image, enabling stabilization of the pinch grasp by rolling the sensor on the contact surface and applying desired grasping forces <cit.>. The study in <cit.> proposed a novel tactile sensor capable of measuring and localizing distributed forces, which enables the robot hand to grasp deformable soft objects. One open question is how to extract useful information from high-dimensional tactile images. The works in <cit.> estimate 6D contact wrenches from tactile images, and the estimated wrenches can be used as feedback to grasping controllers within classical control theory. Deep neural networks can also be used to process tactile images. The works in <cit.> show that contact poses can also be detected from tactile images, which can then be combined with goal-driven methods to achieve non-prehensile stable object pushing. The works in <cit.> introduce autoencoder networks <cit.> to compress the high-dimensional tactile images into low-dimensional latent vectors which can be used for several downstream tasks, such as object classification. Moreover, although deformable tactile sensors facilitate area contact, potentially improving grasp stability and protecting delicate objects, the dynamics of the deformable sensor cannot be neglected. The work proposed in <cit.> combines the 3D geometry of the tip of a deformable tactile sensor with robot proprioceptive action to learn the tactile sensor membrane dynamics and predict the deformation conditioned on robot action. Data-driven methods can be used to learn the dynamics and combined with Model Predictive Control (MPC) methods to achieve tactile servoing <cit.>. Insights from human intrinsic understanding may prove valuable in leveraging deformable sensors to achieve dexterous dual-arm manipulation tasks. LfD is an intuitive and effective way to learn human skills from collected demonstrations, which is very helpful for tasks requiring high-level skills, such as intricate coordination between two arms. By segmenting the collected motion data, the work proposed in <cit.> generates a set of motion primitives to complete tasks. Additionally, humans use multiple senses jointly to accomplish different tasks; this motivates investigating how multi-sensory data can be combined to help with manipulation tasks <cit.>.

§ METHODS

§.§ System Overview

Teleoperation through a physical robot is a viable approach for generating real demonstration data that can be executed on a physical system, and it was shown to be effective for performing fine dexterous grasping <cit.>. As shown in Fig. <ref>, the overall architecture incorporates a teleoperation system for the collection of human demonstration data, and a dual-arm setup for executing pinch grasp tasks.
The teleoperation system consists of two haptic devices (Force Dimension Sigma 7) for human operators to control the dual-arm robot <cit.>. The dual-arm robot system includes two Franka Emika Panda arms, each with a TacTip <cit.> installed on its end-effector. The TacTips capture contact information between the end-effectors and objects as 2D tactile images. Task-Space Sequential Equilibrium and Inverse Kinematics Optimization (SEIKO) runs in the backend to guarantee the satisfaction of physical constraints and the safety of the dual-arm robot <cit.>. The Learning from Demonstration (LfD) framework (see Fig. <ref>) is composed of two distinct networks: 1) a Convolutional AutoEncoder (CAE) network to extract the latent features from tactile images; 2) a Behaviour Cloning (BC) network to learn the policy of dexterous dual-arm grasping with tactile sensing from human demonstrations.

§.§ Demonstration Dataset of Bimanual Manipulation

In our implementation, the haptic devices allow operators to adjust the 6D pose simultaneously, providing an intuitive way to demonstrate bimanual grasping skills on a dual-arm robot. During the demonstration, a human operator teleoperates the dual-arm robot to complete the grasping task by sending Cartesian commands to the two end-effectors via two haptic devices. The human demonstration data are recorded automatically during the entire grasping.

§.§ Tactile Feature Extraction

The TacTip used in this work is an optical tactile sensor with a soft hemispherical tip, which was 3D-printed in one piece combining an elastic skin with 330 rigid white markers (pins) <cit.>. When the soft tip deforms during contact with objects, the white pins start to move away from their initial positions. The displacement of these pins reflects the complex deformation of the soft surface. An inner camera captures and projects the displacement to an array of white pins on a black background in the image plane, as shown in Fig. <ref>. Raw tactile RGB images are first resized to 256×256 pixels using linear interpolation and converted to grayscale images, which are then cropped using a circle mask and converted to binary images by thresholding. A median filter is applied to denoise the binary images. We propose to use a self-supervised learning method, a convolutional autoencoder network, to extract robust features that can represent the contact properties from the preprocessed tactile images. Eight convolutional layers are used in the CAE network to extract the spatial information represented by the displacement of the pins. The structure of the CAE network is shown in Fig. <ref>. The CAE network consists of an encoder and a decoder, formulated as follows:

g_Θ(·): 𝒳 → ℋ,    f_Φ(·): ℋ → 𝒳̂.

The encoder g_Θ(·) projects each tactile image γ_t in the high-dimensional input space 𝒳 (256×256) to 16 feature maps γ_l in the low-dimensional latent space ℋ (16×16), then the decoder f_Φ(·) reconstructs that image from the same feature maps to the output space 𝒳̂ (256×256). The binary cross-entropy loss function is used as the reconstruction loss between the input images 𝒳 and the reconstructed images 𝒳̂ to update the network parameters via back-propagation:

L_CAE(γ_t, γ_p) = -[γ_t log γ_p + (1-γ_t) log(1-γ_p)],    where γ_l = g_Θ(γ_t), γ_p = f_Φ(γ_l),

and γ_p is the reconstructed image produced by the decoder network.
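To make the encoder/decoder geometry concrete, a minimal PyTorch sketch of a CAE with this input/latent shape is given below. Only the 256×256 input, the 16×16×16 latent, the total of eight convolutional layers, and the binary cross-entropy loss are taken from the text; the kernel sizes, channel widths, and encoder/decoder split are our assumptions, not the released implementation.

import torch
import torch.nn as nn

class TactileCAE(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: 1x256x256 binary tactile image -> 16 feature maps of 16x16
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),    # 128x128
            nn.Conv2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64x64
            nn.Conv2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32x32
            nn.Conv2d(16, 16, 4, stride=2, padding=1),             # 16x16 latent maps
        )
        # decoder: mirror of the encoder, reconstructing a 256x256 image in [0, 1]
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                 # latent feature maps, shape (B, 16, 16, 16)
        return self.decoder(z), z

# one training step with the binary cross-entropy reconstruction loss
model, criterion = TactileCAE(), nn.BCELoss()
x = torch.rand(4, 1, 256, 256)              # stand-in batch of preprocessed tactile images
recon, _ = model(x)
loss = criterion(recon, x)
loss.backward()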
§.§ Behavior Cloning Network

We propose and design a novel BC network to learn the behaviors of coordinated manipulation skills of bimanual grasping from human demonstration data. Dexterous bimanual grasping skills can be divided into two categories: (1) adaptive interaction with different objects, and (2) dual-arm motion coordination. To capture these skills, we have designed the input to our network to include encoded tactile feature maps, tactile image differences, and the robot's proprioceptive state. The encoded feature maps and tactile image differences capture the human-object interaction skills. The robot's proprioceptive state, on the other hand, offers insights into the coordination of movements between both arms. These inputs collectively serve to reflect the complexity and adaptability of dexterous grasping skills. Following this idea, we use the encoded tactile feature maps l_t, the proprioceptive state ϕ_t, and the tactile image difference e_t as input to the BC network to represent and learn fine human skills. The discrete-time state-action pair set G = {(s_0, a_0), (s_1,a_1), ..., (s_t,a_t),...} is created to train the BC network, where s_t = (l_t, ϕ_t, e_t) denotes the robot state and a_t denotes the Cartesian commands of the two arms at time t. Using such data of multiple modalities as input to train a network requires a well-crafted embedding structure. A common way of fusing a 2D feature map and a 1D feature vector is to flatten the 2D feature map into a 1D vector and concatenate the flattened vector and the 1D feature vector. However, we found that the flattening projection results in the loss of spatial correlation of tactile information. In this work, we specifically tile the proprioceptive state of the robots and the tactile image difference to match the dimension of the tactile feature maps, so as to keep the spatial information of the encoded tactile feature maps. We then concatenate the tactile feature maps, the tiled proprioceptive state maps, and the tactile image difference on each feature channel, as shown in Fig. <ref>. The convolutional layers in the BC network first filter the input feature maps (46×16×16) to a feature map (1×8×8), which is then flattened and fed into a fully connected network (FCN). The FCN outputs a vector â ∈ ℝ^12 as the predicted Cartesian pose commands of the two arms, including 3D position and 3D orientation for each arm. The loss function used to train the BC network consists of two parts, formulated as:

L_BC(a, â) = ‖a - â‖^2 + ‖d - d̂‖^2,    â = ψ(l, ϕ, e; Φ_conv, Φ_fcn),

where a ∈ ℝ^12 is the Cartesian pose commands of the two arms from the human demonstration dataset, and â ∈ ℝ^12 is the predicted Cartesian pose command by the BC network ψ(·; Φ_conv, Φ_fcn), parameterized by Φ_conv and Φ_fcn; l, ϕ and e denote the tactile feature maps, the proprioceptive state maps and the tactile image difference, respectively. The second term ‖d - d̂‖^2 is added to learn the dual-arm coordination skills from human demonstrations, where d ∈ ℝ^3 is the relative position between the two end-effectors, and d̂ ∈ ℝ^3 is the predicted relative position between the two end-effectors by the BC network.
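A minimal PyTorch sketch of the tiling-and-concatenation fusion and the BC head is shown below. The channel budget (16 latent maps per TacTip for two sensors, a 12-dimensional proprioceptive state, and one image-difference scalar per sensor, giving 46 channels) is our reading of the text, and the internal layer sizes are assumptions, not the authors' released architecture.

import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    def __init__(self, n_channels=46):
        super().__init__()
        self.conv = nn.Sequential(                      # 46x16x16 -> 1x8x8
            nn.Conv2d(n_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
        self.fcn = nn.Sequential(                       # flattened 8x8 -> 12D pose command
            nn.Flatten(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 12),
        )

    def forward(self, tactile_maps, proprio, img_diff):
        # tactile_maps: (B, 32, 16, 16); proprio: (B, 12); img_diff: (B, 2)
        scalars = torch.cat([proprio, img_diff], dim=1)            # (B, 14)
        tiled = scalars[:, :, None, None].expand(-1, -1, 16, 16)   # tile each scalar to 16x16
        fused = torch.cat([tactile_maps, tiled], dim=1)            # (B, 46, 16, 16)
        return self.fcn(self.conv(fused))                          # predicted dual-arm pose

policy = BCPolicy()
a_hat = policy(torch.rand(4, 32, 16, 16), torch.rand(4, 12), torch.rand(4, 2))

In training, the coordination term of the loss would be formed by taking the relative position between the two predicted end-effector positions inside a_hat and comparing it with the demonstrated relative position d.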
§ EXPERIMENTS AND RESULTS §.§ Experimental Setup and Data Collection We validate the performance of LfD with tactile sensing for dexterous robot manipulation on a challenging task: object retrieval from the desk using a dual-arm pinch grasp. We designed a comparative study involving two different configurations to show that our method outperforms the vanilla BC baseline qualitatively and quantitatively. During dexterous grasping, external vision can easily be occluded by the end-effectors, potentially leading to inaccurate object estimation. Our experiments therefore operate without external visual sensors. By default, the starting position of the object lies between the two robot hands, and the whole demonstration is represented in the task space. This assumption leaves an interface to connect with 6D pose estimation from an external camera (e.g., as in <cit.>), which could position the initial poses of the two hands around the target object before our control policy starts. We collected two demonstrations for each task. The human demonstration dataset collected in the grasping task includes three main components: the Cartesian commands, the proprioceptive states, and the tactile feedback (i.e., the tactile images provided by the TacTip sensors). The Cartesian commands and the proprioceptive states of the two arms are collected at a frequency of 1000 Hz, while the two TacTips record tactile image pairs at 60 Hz. About 1000 tactile images are recorded per demonstration. Before the collected dataset is used to train the networks, several pre-processing steps are applied to the raw data. The proprioceptive states of the two arms and the tactile images, collected at different sampling rates, are synchronized by linear interpolation to align their timestamps. A median filter is then applied to smooth the Cartesian commands a_t, i.e., the 6D poses of the two end-effectors. For the raw tactile images, the structural similarity index measure (SSIM) <cit.> is used to quantify the difference between the current frame and the initial frame. §.§ Design of Validation Tasks §.§.§ Learning to grasp a vial The human demonstrator teleoperates the dual-arm robot to grasp a plastic vial (a test tube with Φ=15.65mm) placed horizontally on the table. A Behaviour Cloning (BC) network is trained using the gathered demonstration data, and the trained policy is tested on the dual-arm robot to validate its generalisation to unseen initial poses. During the evaluation phase, we positioned the test tube between the end-effectors to evaluate the learned policy under variations in the starting pose, specifically rotations of up to ±20 degrees and displacements of up to ±2 centimetres of the object's location. §.§.§ Generalisation to unseen objects To evaluate the generalisability of the trained policy to unseen objects with different radii, weights, or even materials (e.g., soft and fragile objects), a set of test experiments has been conducted using multiple objects of different radii ranging from 11.7mm to 28.6mm. §.§.§ Robustness against external disturbance We also validate the robustness of the trained policy against external disturbances. We applied random external pushes on the grasped object from the left, right, up, and down directions to test whether the two arms can coordinate their end-effector poses to keep the object balanced. §.§.§ Re-grasping capability The re-grasping experiments test whether the trained controller is contact-aware and can perceive the loss of contact with the object, making the necessary adjustments according to the tactile feedback and reacting to grasping failures. After a successful grasp, we pushed the object away hard enough to break its static equilibrium, so that it dropped back down between the two end-effectors.
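As a concrete illustration of the data pre-processing described in the experimental setup above (timestamp alignment by linear interpolation, median filtering of the Cartesian commands, and the SSIM-based tactile image difference), a minimal sketch follows; the variable names and filter size are assumptions.

```python
import numpy as np
from scipy.signal import medfilt
from skimage.metrics import structural_similarity as ssim

def align_to_tactile(cmd_t, cmd, tac_t):
    """Linearly interpolate each 1 kHz command channel onto the 60 Hz tactile timestamps."""
    return np.stack([np.interp(tac_t, cmd_t, cmd[:, i]) for i in range(cmd.shape[1])],
                    axis=1)

def smooth_commands(cmd, kernel=5):
    """Median-filter each of the 12 Cartesian command channels."""
    return np.stack([medfilt(cmd[:, i], kernel) for i in range(cmd.shape[1])], axis=1)

def tactile_difference(current_img, reference_img):
    """SSIM-based difference between the current and the initial tactile frame."""
    return 1.0 - ssim(current_img, reference_img, data_range=1.0)
```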
§.§ Network Evaluation Our proposed model is developed using PyTorch <cit.>. For the training of the Convolutional AutoEncoder (CAE), a dataset of 1500 tactile images obtained from the TacTip sensors during the demonstration was used. Representative reconstruction results on the validation set are shown in Fig. <ref>. The trained CAE exhibits satisfactory reconstruction quality, with a Mean Squared Error (MSE) loss of 0.015 and a Structural Similarity Index Measure (SSIM) of 0.934. Training, which involved 100 iterations, was completed in approximately two hours on an NVIDIA 1080 GPU. The Behaviour Cloning (BC) network was trained for 1000 iterations, which took approximately 5 minutes. §.§ Results of Grasping Tasks The BC network trained on human demonstration data is deployed on the real dual-arm robot to verify its performance on the designed tasks. A set of snapshots showing grasping of a tube from the tube holder and from the table is shown in Fig. <ref>. In both grasping tasks, the learned control policy achieved a 100% success rate, even when the initial pose of the tube differed from the pose used in the demonstration. The dual-arm robot makes prompt adjustments and achieves stable dexterous grasping by learning from only one demonstration. While lifting the object, the dual-arm robot achieves a stable grasp by constantly twiddling its “fingertips” (the tips of the TacTip sensors) and adjusting the object towards the central position. Retrieving an object from the table and adjusting its pose to maintain balance requires very fine movements and interactions supported by rich tactile information; 6-axis force/torque information alone is not sufficient to discern the different contact situations in this scenario. We evaluate the robustness of the learned policy against external disturbances. As can be seen from Fig. <ref>, the dual-arm robot makes proper adjustments to adapt to pushes. Although the contact pose between the robot and the object changed each time it was pushed, the dual-arm robot always fine-adjusted the object reactively back to the center of the fingertips (TacTip sensors), rolling and moving the object to the desired position. Compared with manually programmed behavior, this is a feedback policy successfully acquired from human dexterity skills, which enables the dual-arm robot to adjust its posture autonomously and quickly ensure a stable grasp. It is noteworthy that such active rolling adjustment was never explicitly demonstrated in any separate trials; rather, this behavior was captured from the rich tactile data during the one-shot demonstration of pick-and-lift grasping. To examine the reaction to an unknown situation, i.e., grasping failures, the learned policy demonstrated contact-awareness of the falling object: it detected the loss of contact from the tactile feedback and controlled the robot to restart the grasping process, which was not explicitly programmed or demonstrated in the prior LfD data. The results of the re-grasping experiments in Fig. <ref> show that the tactile-based control learned from human demonstrations is very effective at performing dexterous bimanual manipulation tasks autonomously and quickly, without the need for explicit manual programming or complex planning.
The policy also achieves successful grasping of previously unseen objects, as shown in Fig. <ref>. Although the test objects differ in size and weight from the object used in the demonstration, the policy still performs stable grasping. The experimental results show that the trained policy can generalise to unseen objects with similar cylindrical shapes but different sizes and weights. §.§ Comparison Study We conducted a comparison study to validate that successful grasping is achieved through the active use of tactile sensing. Besides training a BC network with the structure shown in Fig. <ref>, we also train two different BC networks for comparison. The first has exactly the same BC network structure but with frozen tactile image input, meaning that the image input stays unchanged during both training and testing. The second has an FCN structure and uses only the poses of the two end-effectors (positions and orientations) as input. The proposed BC network converges to a loss of 0.04 on the testing set. In contrast, the network with frozen tactile information converges to a loss of 0.5, while the FCN converges to a loss of 1. These results show that the effective integration of tactile information significantly improves the convergence rate and leads to a lower final loss. We also compared the grasping performance of all models on the real dual-arm robot. As shown in Fig. <ref>, both BC network structures that do not use tactile information failed to grasp the tube: the robot arms failed to approach the object from its initial pose and instead bypassed the object and moved towards the desired end-poses, showing no contact-awareness. The experimental results indicate that tactile feedback plays an essential role in providing contact information for initiating contact, generating appropriate adjustments, and lifting and retrieving the object to the desired target locations, enabling the dual-arm robot to perform very fine, dexterous, contact-rich skills. §.§ Interpretability To show explicitly how much the different modalities influence the entire operation, we use a saliency-map method to calculate the weight distribution: W_i = N(I)/(N(I) + N(J)), W_j = N(J)/(N(I) + N(J)), where W_i and W_j are the weight distributions of the two modalities and N(·) denotes normalization. I is the importance of the tactile information, computed by summing the absolute values of the weights that the learned policy assigns to tactile features; J is computed in the same way by summing the absolute values of the weights assigned to the robot's proprioceptive state features. The complete process of dexterous pinch grasping can be subdivided into four primary stages: pre-grasp, pressing, rolling and lifting, and stabilization. Each of these stages uses tactile feedback in a distinct manner. Fig. <ref> depicts the weight changes during the complete dexterous pinch grasping process. Initially, as the end-effectors move toward the object without any contact deformation of the tactile sensors, the weight of the robot's proprioceptive state exceeds that of the tactile information. When the tactile sensor comes into contact with the desk and is prepared for a pre-grasp pose, the weight of the tactile information increases (stage A).
As the end-effector advances towards the object and initiates contact, the weight attributed to the tactile information increases, exceeding that of the proprioceptive state (stage B). During the roll-and-lift phase, the weight of the tactile information initially decreases and subsequently reaches equilibrium with the proprioceptive state (stage C). This indicates that during the lifting phase the learned policy needs tactile information for successful in-hand manipulation and proprioceptive information for effective dual-arm coordination. Finally, upon successfully lifting the tube, the weight reverts to the tactile information, facilitating stabilization of the tube (stage D). § CONCLUSION AND FUTURE WORK The presented Learning from Demonstration (LfD) framework showed successful skill transfer from humans to robots using only a single trial of real robot data, enabled by rich tactile sensing at the robot's fingertips. In exploring how best to utilize the new generation of compliant tactile sensors, we have developed encoding methods that effectively extract and capture high-dimensional contact sensing from soft tactile sensors, together with its fusion with proprioceptive feedback. An interesting outcome is the confirmation that it is possible to learn from real robot data directly, without heavy computation or big data, provided the right data is used. Our comparison studies showed that without tactile sensing, dexterous motor skills cannot be learned from one-shot demonstrations using traditional robot sensing, which is rather limited. Our proposed approach overcomes the traditional limitations of one-shot learning methods through the use of tactile and proprioceptive information, extracting useful information and mapping it into fine-grained motor skills. The approach is shown to be robust in the presence of external pushes and is able to re-grasp the object if it drops, a behaviour that was not shown in the one-shot demonstration and emerges as a natural outcome of sensorimotor skills acquired through state-action mapping. The ability to learn from real data/hardware and a single demonstration is very attractive for bringing a wider range of machine learning approaches into the real world, where tasks can hardly be simulated and only a small amount of data is available. Meanwhile, one apparent limitation is that one-shot learning is trained a priori on a specific task and object, and is generalisable and robust only in the neighbourhood of situations within a category of similar tasks: generalisation applies to new/unseen objects that are similar to the demonstrated object up to certain variations. The advantage of having only one demonstration comes with the trade-off that when a very different object must be grasped, at least one new demonstration is required. Another limitation is that the robot's performance is based on blind grasping and re-grasping, and does not yet utilise external visual perception. In the future, integrating the current framework with stereo vision could extend the versatility and dexterity of object manipulation. Overall, our proposed LfD framework provides an attractive solution for learning from one demonstration with tactile sensing and supports broad real-world robotics applications under data scarcity.
http://arxiv.org/abs/2307.04126v1
20230709084201
Compactness of sequences of warped product circles over spheres with nonnegative scalar curvature
[ "Wenchuan Tian", "Changliang Wang" ]
math.DG
[ "math.DG" ]
http://arxiv.org/abs/2307.04946v1
20230711002138
DDGM: Solving inverse problems by Diffusive Denoising of Gradient-based Minimization
[ "Kyle Luther", "H. Sebastian Seung" ]
cs.CV
[ "cs.CV", "cs.LG", "eess.IV" ]
DDGM: Solving inverse problems by Diffusive Denoising of Gradient-based Minimization Vincenzo Vitelli October 2023 ==================================================================================== Inverse problems generally require a regularizer or prior for a good solution. A recent trend is to train a convolutional net to denoise images, and use this net as a prior when solving the inverse problem. Several proposals depend on a singular value decomposition of the forward operator, and several others backpropagate through the denoising net at runtime. Here we propose a simpler approach that combines the traditional gradient-based minimization of reconstruction error with denoising. Noise is also added at each step, so the iterative dynamics resembles a Langevin or diffusion process. Both the level of added noise and the size of the denoising step decay exponentially with time. We apply our method to the problem of tomographic reconstruction from electron micrographs acquired at multiple tilt angles. With empirical studies using simulated tilt views, we find parameter settings for our method that produce good results. We show that high accuracy can be achieved with as few as 50 denoising steps. We also compare with DDRM and DPS, more complex diffusion methods of the kinds mentioned above. These methods are less accurate (as measured by MSE and SSIM) for our tomography problem, even after the generation hyperparameters are optimized. Finally we extend our method to reconstruction of arbitrary-sized images and show results on 128 × 1568 pixel images. § INTRODUCTION A linear inverse problem is defined by a known measurement operator A. Given observed data y, the goal is to recover x by “explaining” the data, Ax ≈ y. Traditionally one minimizes the reconstruction error ‖ Ax - y‖^2, often by some kind of gradient descent. When the condition number of A is large, the inverse problem is said to be “ill-posed.” The true minimum of the reconstruction error is a bad solution because it tends to amplify noise. Better results can often be obtained by early stopping of the gradient descent. Another possibility is to formulate a prior probability distribution for x, and find the best x by maximizing the posterior probability, treating the reconstruction error as a log likelihood. Recently, it has been shown that neural nets trained to denoise images can be incredibly successful at generating images when used in a diffusion process <cit.>. Another exciting application of these denoising nets would be as priors for solving inverse problems. Although we have no direct access to the x that gave rise to the data y, we assume that we have access to images that are statistically like x, i.e., samples from the prior probability distribution P(x) are available. If a net is trained to denoise these samples, it effectively learns something about the prior distribution, and should be helpful for reconstructing the unknown x that gave rise to the data y. We propose a simple method of doing this. The method augments classical gradient-based minimization of the reconstruction error with denoising by the pretrained net. The only perhaps nonintuitive aspect of our method is that noise is also added back in before subsequent denoising. As far as we know, our simple method is novel. Unlike <cit.>, our method does not require a singular value decomposition (SVD) to run. Unlike <cit.> our method does not require backpropagating through the denoiser. 
And finally, unlike <cit.> and the previous methods, our method does not couple the number of gradient updates to the number of denoiser updates. We'll see that we require an order of magnitude fewer denoiser updates than gradient updates, so our method is fast. We also show that accuracy is superior, when measured by standard metrics such as MSE or SSIM. We compare our method to denoising diffusion restoration models (DDRM) and a variant of diffusion posterior sampling (DPS) on the inverse problem of tomographic reconstruction from tilt series electron micrographs, a popular technique in biological imaging <cit.>. 2D images of a specimen with a slab geometry are acquired at multiple tilt angles, and a 3D image is inferred by solving the linear inverse problem (Fig. 1). The problem is highly ill-posed, because the angles span a limited range, typically (-60^∘, +60^∘). For simplicity, we will study the problem of reconstructing a 2D image from 1D projections. The generalization to reconstructing a 3D image from 2D projections is conceptually straightforward and will be discussed elsewhere, because implementing a 3D denoising net is somewhat more involved. The generalization of our method to other kinds of inverse problems is very natural. However, we have not explored such applications here, because each inverse problem will require some tuning of annealing schedules. This seems to be the case for diffusion methods more generally. We had to extensively tune other methods to achieve performance even competitive with a traditional (non-neural) gradient descent method. On another note, electron micrographs can be extremely large, and this is the case for biomedical images more generally. Another contribution of this paper is a novel patch-based diffusion process that enables denoisers trained on small patches to handle arbitrarily large images, either for generating images or for solving inverse problems. In related work, GANs were used to synthesize images resembling electron micrographs of brain tissue <cit.>, but the GANs were not applied to inverse problems. § DIFFUSIVE DENOISING OF GRADIENT-BASED MINIMIZATION (DDGM) We assume that a network ϵ_θ has already been trained to denoise images x. We discuss the training objective later in Eq. <ref>. Our diffusion method for inverse problems is given in Algorithm <ref>. We take K gradient descent steps on the reconstruction error ‖ Ax-y‖^2. Then we add noise to x. Then we denoise x using the net. This process is repeated N times with a noise level that decays exponentially. We note that the K gradient descent steps on the reconstruction error are essentially a classical algorithm, and by themselves already yield some sort of solution. It might be tempting to simply denoise this with our net, but we had little success with that. We speculate this is because the output of algebraic reconstruction is not simply the real image corrupted by Gaussian noise, which is the only kind of corruption that the net has been trained to remove. The trick is to add Gaussian noise to x before applying the denoising net. If we add enough noise, our denoiser appears to improve x in some ways, at the expense of making it blurry. If this process is repeated with a decaying noise level, we will see that x becomes both sharp and accurately reproduces features of the true image.
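A minimal sketch of the DDGM loop described above is given below. The zero initialization, the geometric decay of the noise level, and the use of a denoising step of size σ_n times the predicted noise are assumptions consistent with the description above and with the schedules reported in the experiments; the pretrained noise-prediction net is passed in as eps_theta.

```python
import torch

def ddgm(A, y, eps_theta, sigma_1=3.0, sigma_N=0.03, N=150, K=15, lam=9e-5):
    """Sketch of DDGM: K gradient steps on ||Ax - y||^2, add noise, denoise, repeat."""
    x = torch.zeros(A.shape[1])                      # assumed zero initialization
    for n in range(N):
        # exponentially decaying noise level from sigma_1 down to sigma_N (assumed form)
        sigma_n = sigma_1 * (sigma_N / sigma_1) ** (n / (N - 1))
        # K gradient-descent steps on the reconstruction error ||Ax - y||^2
        for _ in range(K):
            x = x - lam * 2.0 * (A.T @ (A @ x - y))
        # add noise at level sigma_n, then take one denoising step with the net
        x = x + sigma_n * torch.randn_like(x)
        x = x - sigma_n * eps_theta(x)
    return x

# Toy usage with a dummy noise predictor (for shape checking only).
A = torch.randn(64, 128)
y = torch.randn(64)
x_hat = ddgm(A, y, eps_theta=lambda x: torch.zeros_like(x), N=10, K=5)
```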
§ EXPERIMENTS Dataset We downloaded two volumes of size 1k × 10k × 1k from the center of a publicly available 3D image dataset acquired by FIB-SEM from a fly brain <cit.>. We chose the location of these sub-volumes to avoid stitching artifacts that are present in the full dataset. The voxel sizes at MIP-1 resolution are 16 × 16 × 16 nanometers. We used one volume for training, and the other for validation and testing. The dataset was normalized to have zero mean and unit variance computed over all the training set pixels. Training the denoising network We train a U-Net to denoise 128× 128 images corrupted by adding randomly rescaled Gaussian noise σϵ to a clean image x. The elements of the vector ϵ are drawn from a Gaussian distribution with zero mean and unit variance. The scalar σ, or noise level, is chosen from a LogUniform distribution, i.e., logσ is uniformly distributed in the interval [log(0.03), log(30.0)]. The network output is denoted by ϵ̂_θ(x+σϵ), where θ are the network parameters and x+σϵ is the corrupted image. The network is trained to predict the unscaled noise ϵ, i.e., we minimize the mean squared error ∇_θ‖ϵ - ϵ̂_θ(x+σϵ) ‖^2 using the Adam optimizer with default PyTorch parameters. We train for 380,000 gradient updates which took 20 hours using 8 NVIDIA 3090 GPUs with a batch size of 64 (8 images per GPU). Our U-Net style architecture has 128 feature maps at all 5 levels of resolution, Group Normalization <cit.>, residual connections within blocks, and 6 convolutional layers in each block. The net has 8 million parameters in total. Following the work of <cit.>, the net is not conditioned on the noise level, unlike many other models in the diffusion model literature <cit.>. Rather, a single unconditional net is trained to denoise at any noise level. Above the target task was characterized as noise prediction. However, it is more intuitive to flip the sign and think of the target task as denoising. If we regard the output of the net as -ϵ̂_θ, the net is trained to predict a direction in image space that is denoising. Traditionally, a denoising autoencoder is trained to predict a clean image in one step. Our net might be called a residual denoising autoencoder, since it predicts the direction of the difference between the clean and noisy image. This is suitable for the iterative diffusion method that will be introduced below. Note that the net is not trained to predict the magnitude of the denoising step, since the target is the unscaled noise ϵ. Later on, our diffusion procedure will rescale the denoising direction -ϵ̂_θ appropriately. Our networks are trained using PyTorch <cit.> and PyTorch Lightning <cit.>. §.§ Unconditional generation Our ultimate goal is to solve an inverse problem, i.e., generate an image that explains the data. However, unconditional generation of images turns out to be invaluable for evaluating the quality of the prior learned by the denoising network, and for adjusting the parameters of the diffusion schedule. We find a simple exponential decay works well enough with 50 diffusion steps. Specifically we initialize σ_1= 30.0 and x_1 = σ_1 ϵ_1 where ϵ_1 ∼𝒩(0,1). We iterate the following for 50 steps to generate images unconditionally: σ_n = σ_1 ((1-α)^2 + αβ)^(n-1)/2 x_n+1← x_n - ασ_n ϵ_θ(x_n) + √(αβ)σ_n ϵ_n We set α=0.183 and β=0.5 as constants. This schedule is motivated by the simple exponential-decay schedule proposed in <cit.> (discussed before their more sophisticated schedule in their Algorithm 1). Our results are shown in <ref>. 
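The two equations above translate directly into the following sketch of the unconditional sampling loop; the image shape and the dummy batch size are illustrative.

```python
import torch

def sample_unconditional(eps_theta, shape=(1, 1, 128, 128),
                         sigma_1=30.0, alpha=0.183, beta=0.5, N=50):
    """Unconditional generation with the exponentially decaying schedule above."""
    x = sigma_1 * torch.randn(shape)                  # x_1 = sigma_1 * eps_1
    decay = ((1 - alpha) ** 2 + alpha * beta) ** 0.5  # per-step factor on sigma_n
    sigma_n = sigma_1
    for _ in range(N):
        noise = torch.randn(shape)
        x = x - alpha * sigma_n * eps_theta(x) \
              + (alpha * beta) ** 0.5 * sigma_n * noise
        sigma_n = sigma_n * decay                     # sigma_{n+1}
    return x
```

The unconditional samples referred to above were generated with this schedule (σ_1 = 30.0, α = 0.183, β = 0.5, 50 steps).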
This shows that this denoiser is indeed quite powerful and should be very helpful as a prior for solving inverse problems. §.§ Simulated tomographic tilt series We extract random 2D image patches of size 128 × 128 from these volumes to train and evaluate our network. We use the Astra Toolbox to simulate 128 uniformly spaced tilt views over the range (-60^∘, +60^∘). Each tilt view is a 1D projection of the original image (128 pixels wide). Each of these 128 views are concatenated to form a 128 × 128 dimensional data vector y called the sinogram. This is then corrupted with Gaussian noise of magnitude 4.1 which gives a signal-to-noise ratio of 10 to 1. The tilt views are a linear function of the images. y = Ax + σ_y ·ϵ In this setting, the number of observed variables (the dimension of y) equals the number of variables we are trying to infer (the dimension of x) which in this setting is 128^2. The matrix A is highly ill-conditioned however with over half of the singular values being smaller than 0.1 × the largest singular value of A. We will see that simply performing gradient descent on ‖ Ax - y ‖^2 can recover some structure but inevitably misses a significant portion of critical information. We use a validation set of 128 images of size 128 × 128 to tune the hyperparameters of all reconstruction methods. We report mean squared errors and SSIM metrics on a different test set of 128 images <cit.>. We use TorchMetrics to compute SSIM <cit.>. While the test set is small, the standard error on our measurements is still sufficiently low that we can see clear differences between all methods. §.§ Using the prior for tomographic reconstructions We run Alg. <ref> on our simulated tilt series. We report quantitative results in Tab. <ref>. We report all methods using the absolute best (meaning lowest MSE on a validation set) settings found and the best settings subject to the number of denoiser evaluations being 50. All qualitative figures shown use the best settings for any number of denoiser evals. For the best settings, we found σ_1=3.0, σ_N=0.03, N=150, K=15 and λ = 9e-5. For the best settings at 50 denoiser evaluations, we set σ_1=3.0, σ_N=0.03, N=50. We set K=25 and λ = 9e-5. We found that performance of our method was still quite high at just 50 denoiser evaluations. In <ref> we show two example reconstructions given by our method. Since our method is stochastic, we may end up with different results each time. Ideally these variations would be small, as we would ideally only have one unique solution. For challenging patterns, we occasionally see meaningfully different outputs of the network. In <ref> we show single individual reconstructions generated by our method compared to three other reconstruction methods which we'll discuss in the next section. Now we'll discuss how we arrived at this setting of parameters. Step size λ for gradient of reconstruction error We found the largest λ for which gradient descent causes the reconstruction error ‖ Ax -y ‖^2 to decrease. For larger λ, the error explodes. The value of λ is held constant for our whole algorithm's process. We kept this value λ = 9e-5 for all experiments. Future work may explore tuning this parameter as well. Initial noise level σ_1 The initial noise level we use, σ_1=3.0, is actually 10× lower than what we used for unconditional generation σ_1=30.0. Empirically we found that for fixed N, lowering the starting σ improved reconstruction performance slightly (Appendix). 
Intuitively, after just a few gradient iterations ∇_x ‖ Ax-y‖^2 the reconstruction x already bears some similarity to a real image from the training set. Therefore x+σϵ may resemble a clean image + Gaussian noise image for relatively low levels of Gaussian noise. We do not have an explanation for why starting with lower noise levels is actually better for MSE however. Ending noise level σ_N We choose the smallest noise level the network was trained on, which in this case was σ_N=0.03. This is an imperceptibly small level of noise. We did not vary this choice in the experiments. Number of gradient updates K per iteration This parameter is important. When the number of denoiser evaluations N=50, we found the optimal value of K=25. When the number of denoiser evaluations N=150, we found the optimal value of K=15 though the MSE differences were very slight between K=15 and K=25. Interestingly these values are less than the optimal value of K=100 when doing simple algebraic reconstruction (Eq. <ref>). But we found that the total number of gradient iterations, the product NK, was quite large. NK=1250 when N=50 and NK=2250 when N=150. §.§ Comparisons We compare our method to three other methods, DDRM <cit.>, a variant of DPS which we call DPS_* <cit.>, and a non-neural algebraic reconstruction method. Mean squared error and SSIM are computed between the recovered image x and the ground truth x_true. Results from a test set of 128 images are provided in Tab. <ref>. We evaluate the neural methods with two different settings: one where we allow any number of denoiser evaluations and one where we only allow 50 denoiser evaluations (as this setting is much faster). We find in both cases our method significantly outperforms both DDRM and DPS_* in terms of MSE and SSIM. We find that DPS_* in particular benefits from a large number of denoiser evaluations, but even after 1000 evaluations, its error is far worse than our method with just 50 denoiser evaluations. In this setting of 50 denoiser evaluations, we actually found that the DPS_* method was outperformed even by the simple non-neural algebraic method. Algebraic reconstruction The simplest method does not use a neural network. We just perform K steps of gradient descent on the squared error between predicted tilt views Ax and the measured tilt views y x ← x - λ∇_x ‖ Ax - y ‖^2 Early stopping is used as an implicit regularizer. We initialize x=0. We set λ=9e-5 to be the λ which gives rise to the fastest decrease in objective value. This means we have one hyperparameter, the number of gradient steps K. We find that K=100 gives the lowest MSE between true and generated reconstructions x on our validation set. We show the validation set reconstruction errors as we vary K in the Appendix. We show the test set values for K=100 in Table <ref> and show reconstructions in Fig. <ref> and Fig. <ref> Denoising Diffusion Restoration Models (DDRM) We refer the reader to Eq. 7 and 8 of <cit.> for the full description of the algorithm, which relies on the SVD of the projection operator A. We make note that computing the SVD of A is simple enough for 128 × 128 images (see Appendix for singular values), but more thought would be required to apply this method to our 128 × 1568 pixel images in Fig. 1 due to memory constraints. This method treats different singular values of the measurement operator differently depending on the level of noise at each step of the diffusion process. 
We note that the method appears to recommend setting the initial noise level to be larger than the largest non-zero singular value of A^†. In our case this would imply setting σ_init≈ 1 / 10^-5 = 10^5. Our network has only been trained on noise levels up to 30 however. To proceed with the DDRM method, we therefore set all singular values of A which are smaller than 1 / 30.0 to zero and initialize our DDRM diffusion process at σ=30.0. We keep η_b = 1.0 as used in the paper and tune their η parameter and the number of diffusion steps N. We find that η=1.0 and N=10 provided the minimal mean squared reconstruction error on a validation set of images. We show the validation set reconstruction errors as we vary η and N in the Appendix. We show the test set values in Table <ref> and show reconstructions in Fig. <ref>. The reconstructions were notably blurry for all settings of parameters we tried (see examples in Fig. 4). This is not inconsistent with the recoveries shown by <cit.>. We also observed that unlike many diffusion models, using a surprisingly small number of steps (10 was optimal) gave higher performance than more steps. Diffusion Posterior Sampling (DPS_*) We compare to a variant of the DPS method proposed in <cit.>, which we'll call DPS_*. We cannot use our networks with their exact diffusion schedule, as they use a diffusion method which learns the variances at each step, and they operate in the "variance preserving" regime (as opposed to the "variance exploding regime" we work in which means the variance of our patterns grows with increasing noise level). However we make several modifications and perform extensive parameter tuning in an attempt to get this method working for our reconstruction problem. We first apply their key insight, line 7 of their Algorithm 1, to our pre-existing diffusion schedule. Specifically we add a normalized gradient term to the diffusion step of (Eqn. <ref>): x_n+1 = x_n - ασ_n ϵ_θ(x_n) + √(αβ)σ_n ϵ_n - ζ∇_x_n‖ A (x_n - σ_n ϵ_θ(x_n)) - y ‖^2/‖ A (x_n - σ_n ϵ_θ(x_n)) - y ‖ If we stick with our pre-existing diffusion schedule (so setting α=0.183 and β=0.5 and N=50, the schedule used to produce the images in Fig. <ref>) then there is only one parameter to tune: ζ. We tune this parameter and show the results in the Appendix. We are unable to set ζ large enough to make the reconstructions match the data (the MSE is always worse than for the simple algebraic reconstruction method). There is a slight technical detail here. We are operating in the variance exploding regime, meaning our x_n are related to their x'_n via x_n ≈√(1+σ_n^2) x'_n so it may be more appropriate to rescale their gradients ∇_x by 1/√(1+σ_n^2). Therefore we also compare a rescaled version of DPS: x_n+1 = x_n - ασ_n ϵ_θ(x_n) + √(αβ)σ_n ϵ_n - ζ∇_x_n‖ A (x_n - σ_n ϵ_θ(x_n)) - y ‖^2/√(1+σ_n^2)‖ A (x_n - σ_n ϵ_θ(x_n)) - y ‖ We find this moderately improves MSEs so we use the rescaled version in the rest of our experiments. We explore what happens when we lower the starting noise level of our diffusion process, lowering α so that σ_N is still 0.03 and N is still 50. We find that lowering the starting noise level helps substantially but does not let us acheive even the MSE given by the classic algebraic reconstruction method. We explain this result as follows: our problem is highly ill-conditioned and if we use our pre-existing diffusion schedule, we only allow 50 gradient steps which simply is not enough for the data to strongly influence the reconstructions. 
In their paper, they iterate for 1000 steps, so we modify N=1000 and α such that σ_N=0.03 and perform more experiments, tuning the coefficient ζ. The experiments are rather slow at this point since each image requires backpropagation through our denoiser 1000 times. However we found that σ_1=3.0 and ζ=0.1 gave optimal performance in this setting. §.§ Arbitrary-sized image reconstruction For this algorithm to have practical utility in connectomics, we must by able to reconstruct arbitrarily large images. Naively one could try running Algorithm 1 in patches then stitching the outputs together. However, the algorithm is inherently random and it is not obvious how to avoid seams in that scenario. One idea is to attempt to modify the work of <cit.> to the setting of inverse problem solving. We take a conceptually simpler approach. Instead we modify the denoiser network itself to run in patches, then we smoothly blend the denoised together. Mathematically, we convolve the denoiser outputs with a 2D bump function. The details are provided in the Appendix. We use this method to produce the 128 × 1568 pixel reconstruction shown in Fig <ref>. § DISCUSSION Future directions In this paper we focused on a particular inverse problem, limited angle computed tomography. It would be interesting to explore application of our method to other inverse problems, especially non-linear ones. Since we do not require SVD, our method is at least well-equipped in principle to solve non-linear inverse problems such as those considered by the DPS method. Another line of work should consider annealing the step size or number of gradient steps inside each loop. In our algorithm, the effective strength of the prior decays exponentially over time, while the data-term does not change. Surprisingly we did not need to anneal the data-driven term for our application, but other applications may benefit from such an annealing. Another interesting line of work would be the use of a preconditioner in the gradient updates with the idea of reducing the number of gradient evaluations at each iteration. Currently the gradient updates are the slow component of our algorithm. Limitations A notable drawback of this method and related methods is the sensitivity to parameters. This work was aided by the fact that we have a ground truth by which we could tune the parameters. However, in the real world, one will typically use tomography to infer the 3D structure of an object that no other method can. This means there is no ground truth on which to tune the parameters, or more generally, evaluate the method. Another limitation regards our evaluation method. We have relied on MSE and SSIM, but these might encourage blurry reconstructions in uncertain image regions. Future evaluations should explore additional quantitative metrics. Potential negative impacts One concerning outcome of this line of work is the tendency of the networks to hallucinate or eliminate real biological structures. We have observed that the reconstructions usually look very realistic, even when they are incorrect. For scientific applications, such hallucinations can be very concerning. One must take great care to validate any systems that derive scientific results from methods such as ours which use powerful priors to guide data-driven reconstructions. plainnat 21 urlstyle [Chung et al.(2022)Chung, Sim, Ryu, and Ye]chung2022improving Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. 
In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL <https://openreview.net/forum?id=nJJjv0JDJju>. [Chung et al.(2023)Chung, Kim, Mccann, Klasky, and Ye]chung2023diffusion Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In The Eleventh International Conference on Learning Representations, 2023. URL <https://openreview.net/forum?id=OnD9zGAGT0k>. [Falcon and The PyTorch Lightning team(2019)]falcon2019lightning William Falcon and The PyTorch Lightning team. PyTorch Lightning, March 2019. URL <https://github.com/Lightning-AI/lightning>. [Graikos et al.(2022)Graikos, Malkin, Jojic, and Samaras]graikos2022diffusion Alexandros Graikos, Nikolay Malkin, Nebojsa Jojic, and Dimitris Samaras. Diffusion models as plug-and-play priors. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL <https://openreview.net/forum?id=yhlMZ3iR7Pu>. [Ho et al.(2020)Ho, Jain, and Abbeel]ho2020denoising Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:0 6840–6851, 2020. [Jain(2017)]jain2017adversarial Viren Jain. Adversarial image alignment and interpolation. arXiv preprint arXiv:1707.00067, 2017. [Jalal et al.(2021)Jalal, Arvinte, Daras, Price, Dimakis, and Tamir]jalal2021robust Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alex Dimakis, and Jonathan Tamir. Robust compressed sensing MRI with deep generative priors. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. URL <https://openreview.net/forum?id=wHoIjrT6MMb>. [Kadkhodaie and Simoncelli(2021)]kadkhodaie2021stochastic Zahra Kadkhodaie and Eero P Simoncelli. Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. URL <https://openreview.net/forum?id=x5hh6N9bUUb>. [Kawar et al.(2021)Kawar, Vaksman, and Elad]kawar2021snips Bahjat Kawar, Gregory Vaksman, and Michael Elad. SNIPS: Solving noisy inverse problems stochastically. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. URL <https://openreview.net/forum?id=pBKOx_dxYAN>. [Kawar et al.(2022)Kawar, Elad, Ermon, and Song]kawar2022denoising Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL <https://openreview.net/forum?id=kxXvopt9pWK>. [Mastronarde and Held(2017)]mastronarde2017automated David N Mastronarde and Susannah R Held. Automated tilt series alignment and tomographic reconstruction in imod. Journal of structural biology, 1970 (2):0 102–113, 2017. [Nicki Skafte Detlefsen et al.(2022)Nicki Skafte Detlefsen, Jiri Borovec, Justus Schock, Ananya Harsh, Teddy Koker, Luca Di Liello, Daniel Stancl, Changsheng Quan, Maxim Grechkin, and William Falcon]nicki2022torchmetrics Nicki Skafte Detlefsen, Jiri Borovec, Justus Schock, Ananya Harsh, Teddy Koker, Luca Di Liello, Daniel Stancl, Changsheng Quan, Maxim Grechkin, and William Falcon. 
TorchMetrics - Measuring Reproducibility in PyTorch, February 2022. URL <https://github.com/Lightning-AI/torchmetrics>. [Paszke et al.(2019)Paszke, Gross, Massa, Lerer, Bradbury, Chanan, Killeen, Lin, Gimelshein, Antiga, et al.]paszke2019pytorch Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019. [Scheffer et al.(2020)Scheffer, Xu, Januszewski, Lu, Takemura, Hayworth, Huang, Shinomiya, Maitlin-Shepard, Berg, et al.]scheffer2020connectome Louis K Scheffer, C Shan Xu, Michal Januszewski, Zhiyuan Lu, Shin-ya Takemura, Kenneth J Hayworth, Gary B Huang, Kazunori Shinomiya, Jeremy Maitlin-Shepard, Stuart Berg, et al. A connectome and analysis of the adult drosophila central brain. Elife, 9:0 e57443, 2020. [Song et al.(2023)Song, Vahdat, Mardani, and Kautz]song2023pseudoinverseguided Jiaming Song, Arash Vahdat, Morteza Mardani, and Jan Kautz. Pseudoinverse-guided diffusion models for inverse problems. In International Conference on Learning Representations, 2023. URL <https://openreview.net/forum?id=9_gsMA8MRKQ>. [Song et al.(2020)Song, Sohl-Dickstein, Kingma, Kumar, Ermon, and Poole]song2020score Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. [Song et al.(2022)Song, Shen, Xing, and Ermon]song2022solving Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging with score-based generative models. In International Conference on Learning Representations, 2022. URL <https://openreview.net/forum?id=vaRCHVj0uGI>. [Wang et al.(2023)Wang, Yu, Yu, and Zhang]wang2023unlimited Yinhuai Wang, Jiwen Yu, Runyi Yu, and Jian Zhang. Unlimited-size diffusion restoration. arXiv preprint arXiv:2303.00354, 2023. [Wang et al.(2004)Wang, Bovik, Sheikh, and Simoncelli]Wang2004ssim Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 130 (4):0 600–612, 2004. 10.1109/TIP.2003.819861. [Wu and He(2018)]wu2018group Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018. [Xu et al.(2020)Xu, Januszewski, Lu, Takemura, Hayworth, Huang, Shinomiya, Maitin-Shepard, Ackerman, Berg, et al.]xu2020connectome C Shan Xu, Michal Januszewski, Zhiyuan Lu, Shin-ya Takemura, Kenneth J Hayworth, Gary Huang, Kazunori Shinomiya, Jeremy Maitin-Shepard, David Ackerman, Stuart Berg, et al. A connectome of the adult drosophila central brain. BioRxiv, pages 2020–01, 2020. [Silversmith et al.(2021)Silversmith, Collman, Kemnitz, Wu, Castro, Falk, Roat, Macrina, Perlman, shangmu, Halageri, Gunn, Jagannathan, Hoag, Turner, and Dorkenwald]silversmith2021cloudvolume William Silversmith, Forrest Collman, Nico Kemnitz, Jingpeng Wu, Manuel Castro, Ben Falk, Chris Roat, Thomas Macrina, Eric Perlman, shangmu, Akhilesh Halageri, Pat Gunn, Sridhar Jagannathan, Austin Hoag, Nicholas Turner, and Sven Dorkenwald. seung-lab/cloud-volume: Zenodo release v1, November 2021. URL <https://doi.org/10.5281/zenodo.5671443>. [Wu et al.(2021)Wu, Silversmith, Lee, and Seung]wu2021chunkflow Jingpeng Wu, William M Silversmith, Kisuk Lee, and H Sebastian Seung. 
Chunkflow: hybrid cloud processing of large 3d images by convolutional nets. Nature Methods, 180 (4):0 328–330, 2021. § DATASET DETAILS AND SPLITS We download data using the Cloud Volume Python client <cit.> to access to the Janelia Fly Hemibrain dataset <cit.>. For the training set, we download a contiguous 10 gigavoxel volume at MIP-1 resolution from corner (x,y,z) = (10750, 6500, 9000) to (x,y,z) = (11750,16500,10000). For the validation and test sets, we extract randomly located patches from a contiguous volume that extends from corner (x,y,z)=(12250,6500,9000) to (x,yz)=(13250,16500,10000). The whole volume from which these subvolumes were downloaded can be viewed interactively in 3D by visiting the following Neuroglancer link <https://hemibrain-dot-neuroglancer-demo.appspot.com/#! § MORE UNCONDITIONAL GENERATIONS < g r a p h i c s > More unconditional generations with our diffusion model. We generate these images using the algorithm and hyperparameters described in Section 3.3 § MORE RECONSTRUCTIONS FROM OUR METHOD < g r a p h i c s > More tomographic reconstructions on validation set images from our model. Tilt views were simulated as described in Section 3.2. We generate these reconstructions using our method with the best-performing hyperparameters described in Table 1 of the paper. < g r a p h i c s > Ground truth corresponding to reconstructions from Fig. <ref> § SINGULAR VALUES OF THE PROJECTION MATRIX < g r a p h i c s > Singular values of the matrix A in the equation y=Ax+σ_y ϵ. This matrix A implements the forward projection operator, returning projections of the images from the angular range (-60^∘,+60^∘). We can see that this matrix is highly ill-conditioned, with singular values spanning a range from 10^2 down to 10^-5s. § PARAMETER TUNING §.§ Diffusion Denoising of Gradient Minimization We compute performance of our method on the validation set of 128 images as we modify various hyperparameters. < g r a p h i c s > Tuning K=number of gradient steps per iteration and starting σ with 50 diffusion steps, σ_N=0.03 (the final noise level). We set λ=9e^-5. The MSE between the predicted measurements Ax and the observed data y decreases monotically as K increases. However, the more important metrics, the MSE between reconstruction x and the ground truth x_true reaches its minimum at K=25. Similarly the SSIM between reconstruction x and the ground truth x_true reaches its maximum at K=25. < g r a p h i c s > Tuning K=number of gradient steps per iteration and starting σ with 100 diffusion steps, σ_N=0.03 (the final noise level). We set λ=9e^-5. < g r a p h i c s > Tuning K=number of gradient steps per iteration and starting σ with 150 diffusion steps, σ_N=0.03 (the final noise level). We set λ=9e^-5. Using 150 diffusion iterations (compared to 50 as in Fig <ref> improves MSE and SSIM between reconstruction x and ground truth x_true slightly. Notably the MSE between the predicted measurements Ax and the observed data y (right figure) is higher in this setting than when the number of diffusion iterations is 50. §.§ Algebraic Reconstruction < g r a p h i c s > Vary K=number of gradient steps taken. The only other hyperparameter is λ=9e^-5. §.§ Denoising Diffusion Restoration Models < g r a p h i c s > Varying η and number of diffusion iterations for DDRM. We also tried niter=5 but the performance was substantially worse than niter=10 and does not fit on these charts. 
§.§ Diffusion Posterior Sampling Besides the hyperparameter ζ governing the gradient step sizes, we find that DPS has an extreme sensitivity to the details of the noise schedule. In particular, both the number of diffusion steps and the precise noise levels used have a large impact on the ultimate reconstruction performance. This makes some sense, as the number of gradient steps is tied to the number of diffusion steps in this algorithm. More performance could perhaps be achieved by a more extensive grid search over diffusion schedule parameters, but even evaluating a single parameter configuration with 1000 diffusion steps (a single point in Fig. <ref>) takes over 1 hour, so performing a grid search would require a significant time investment. Furthermore, we struggled to find any setting of the parameters that made DPS_* competitive in this regime. (figure) Varying γ (from Eq. 5 and 6 of the main text) and comparing the unscaled and rescaled versions of Diffusion Posterior Sampling. We keep the diffusion schedule from the paper that we found gave high quality unconditional generations. This schedule is described in Eq. 2 of the main text, but in brief we use 50 iterations with an exponentially decaying noise schedule. (figure) We were able to improve performance of DPS by modifying the diffusion noise schedule. In particular, we try different starting noise levels σ_1, and explore performance of the rescaled version of DPS for various γ. Note that we choose our schedule according to σ_n = σ_1 (σ_N/σ_1)^((n-1)/(N-1)) so that we keep the number of diffusion steps fixed when we decrease the starting noise level (the spacing between noise levels simply decreases as we decrease the starting noise level). In this figure, we still use 50 total denoiser evaluations. (figure) We evaluate DPS now with 1000 diffusion iterations. We choose our schedule according to σ_n = σ_1 (σ_N/σ_1)^((n-1)/(N-1)). We vary γ and the starting σ_1. Interestingly, the data error terms (right plot) ‖Ax - y‖ are nearly identical for all configurations. § ARBITRARY SIZED RECONSTRUCTION We modify the noise-prediction network itself to run in patches, then we smoothly average the patch outputs together. This is the approach diagrammed in Figure 1 of <cit.>. Mathematically we run: ϵ_patchified(x)[i,j] = (∑_u,v B[i-su, j-sv] ϵ_θ(x[su:su+p, sv:sv+p])[i-su, j-sv]) / (∑_u,v B[i-su, j-sv]), where s=96 is the stride and p=128 is the patch size used in the experiments. The bump function we use is the product of two 1D bump functions, B[x, y] = b(2x/p-1) · b(2y/p-1), where each 1D bump function is given by b(u) = 1-exp(-1/max{1-u^2, 0.2}) if |u| < 1 and b(u) = 0 otherwise. These decay smoothly to 0.2 as x→ +p/2 and x → -p/2. With this overlap fraction, pixels are on average processed 1.8× by the network, so this method is approximately 1.8× slower than just running a larger patch through our network. (figure) Noise level distribution during training. § EXPERIMENTAL DETAILS Dataset We download two 10GB volumes at MIP-1 resolution from the center of the fly hemibrain dataset, one for training and one for testing. There are notable stitching artifacts between blocks, which we avoid in our volumes. The voxel sizes at MIP-1 resolution are 16 × 16 × 16 nanometers. Network architecture We use a 2D residual symmetric U-Net architecture with 4 downsampling layers. Our network is not conditioned on the noise level, similar to Song et al. More details are provided in the main text. We train it on patches of size 128 × 128. We use Group Normalization.
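Returning to the patch-blended denoiser defined in the arbitrary-sized-reconstruction section above, the following NumPy sketch applies the patch denoiser with stride s = 96 and patch size p = 128, weights each output by the separable bump function B, and normalizes by the accumulated weights. Pixel-center placement and boundary handling are illustrative assumptions.

```python
import numpy as np

def bump_1d(u):
    """1D bump b(u) = 1 - exp(-1 / max(1 - u^2, 0.2)) for |u| < 1, else 0."""
    b = 1.0 - np.exp(-1.0 / np.maximum(1.0 - u ** 2, 0.2))
    return np.where(np.abs(u) < 1.0, b, 0.0)

def patchified_denoiser(x, eps_theta, p=128, s=96):
    """Blend overlapping patch-wise noise predictions with the 2D bump weights."""
    H, W = x.shape
    out = np.zeros_like(x)
    weight = np.zeros_like(x)
    u = (2.0 * np.arange(p) + 1.0) / p - 1.0          # pixel centres mapped to (-1, 1)
    B = np.outer(bump_1d(u), bump_1d(u))              # separable 2D bump B[x, y]
    for i in range(0, H - p + 1, s):
        for j in range(0, W - p + 1, s):
            out[i:i + p, j:j + p] += B * eps_theta(x[i:i + p, j:j + p])
            weight[i:i + p, j:j + p] += B
    return out / np.maximum(weight, 1e-8)             # normalize by accumulated weights
```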
Denoiser training Diffusion sampling To sample with our noise-predictor model we pick a series of noise levels γ_1 > γ_2 > ... > γ_M and apply the updates x_i+1 := x_i - ((γ_i^2 - γ_i+1^2)/γ_i) ϵ̂_θ(x_i) + √(γ_i^2 - γ_i+1^2) z_i, where z_i ∼𝒩(0,1). To generate samples unconditionally, i.e., when not solving an inverse problem, we initialize x_1 ∼𝒩(0, σ_1^2). We choose our noise scales using an exponentially decaying schedule γ_m = γ_1 e^(-α m/N), where γ_1 controls the amplitude of the noise and α controls the sharpness and rate of decay of the noise schedule; α > 1 gives rise to a relatively quick drop in noise followed by a slow decay. These are chosen purely empirically for the unconditional setting. We sample from our noise-predictor network ϵ̂ in two different regimes. First, we sample in the unconditional regime (i.e., normal diffusion) just to confirm we have indeed trained a quality denoiser. Second, we sample where, instead of initializing with Gaussian noise, we initialize the process with the output of a classical reconstruction plus a modest amount of noise. § DENOISING AND DIFFUSION PRIORS FOR LINEAR INVERSE PROBLEMS §.§ Linear inverse problems A central difficulty is ill-conditioning: the forward operator can have an immense condition number. §.§ Denoising priors Before getting into diffusion, it is much simpler to discuss denoising priors. A simple attempt is to apply the denoiser directly to a classical reconstruction. The fundamental challenge is that the denoiser has been trained on i.i.d. Gaussian noise, which is not the kind of corruption a classical reconstruction exhibits. §.§ Diffusion priors Previous works tend to make use of the score-based perspective on diffusion models. Score-based diffusion models use a mean squared error denoising objective to train a score function which estimates s_θ(x_σ, σ) ≈∇_x_σ log P(x_σ). This score function is trained over several orders of magnitude of noise scales. Samples can then be generated unconditionally with annealed Langevin dynamics, x_σ_i+1← x_σ_i + α s_θ(x_σ_i, σ) + √(2 ασ)ϵ, where α and σ are annealed to zero in this process. Solving an inverse problem can be formalized as sampling from the posterior P(x|y). Many methods therefore proceed by using Bayes' rule to write P(x_i|y) = P(y|x_i) P(x_i) / P(y). A natural idea is to replace log P(x_i) with log P(x_i|y) in the hope that samples will instead be generated from P(x|y): x_σ_i+1← x_σ_i + α [s(x_σ_i, σ) + ∇_x_σ log P(y | x_σ)] + √(2 ασ)ϵ. Challenge #1: intractable gradients. It is tempting to replace ∇_x_σ log P(y | x_σ) with A^⊤ (y - Ax)/λ^2, where λ is a hyperparameter controlling the strength. Unfortunately, this is a mistake. Theoretically, P(y | x_σ) = ∫ P(y | x_0) P(x_0 | x_σ) dx_0; empirically, this approximation can be off by several orders of magnitude, with very negative consequences for the sampling trajectory. A naive approximation is ∇_x_σ log P(y | x_σ) ≈γ· A^⊤ (y - Ax). One of the earlier approximations was made by Jalal et al.: ∇_x_σ log P(y | x_σ) ≈γ (‖∇_x log P(x_σ)‖/‖ A^⊤ (y - Ax)‖) (A^⊤ (y - Ax)). A later approximation was made by Chung et al., who used ∇_x_σ log P(y | x_σ) ≈γ (1/‖ y - A x̂_0(x) ‖) ∇_x ‖ y - Ax̂_0(x) ‖^2. We note that there is some confusion here, as the algorithm in their paper instead writes ∇_x_σ log P(y | x_σ) = A^⊤ (y - Ax)/(σ^2 + γ_i^2), but in line 144 of their code the update rule above is what is used, with γ_i as an adaptive (not fixed) hyperparameter. A number of works take a different strategy that relies on the SVD: components associated with large-magnitude singular values are determined by the data, while components associated with small-magnitude singular values are inferred through a diffusion process. These methods are significantly more complicated, so we do not write out the equations.
The downsides are that this 1) requires actually computing an SVD and 2) requires even more mathematical machinery on top of already complex diffusion, and it is still an approximation. Challenge #2: coupling the diffusion schedule to the gradient updates. This is especially problematic for ill-conditioned problems, which can require hundreds of gradient updates, and setting the gradient step too large can lead to an exponential blow-up. At the end of the day, there are two properties shared by all such previous methods that we will modify: 1) the diffusion process is initialized with random Gaussian noise, in the same manner as in unconditional generation; and 2) the diffusion updates themselves are modified.
http://arxiv.org/abs/2307.05900v1
20230712041146
On Compatible Transfer Operators in Nonsymmetric Algebraic Multigrid
[ "Ben S. Southworth", "Thomas A. Manteuffel" ]
math.NA
[ "math.NA", "cs.NA" ]
On Compatible Transfer Operators in Nonsymmetric Algebraic Multigrid Ben S. Southworth and Thomas A. Manteuffel ==================================================================== The standard goal for an effective algebraic multigrid (AMG) algorithm is to develop relaxation and coarse-grid correction schemes that attenuate complementary error modes. In the nonsymmetric setting, coarse-grid correction Π will almost certainly be nonorthogonal (and divergent) in any known inner product, meaning ‖Π‖ > 1. This introduces a new consideration, that one wants coarse-grid correction to be as close to orthogonal as possible, in an appropriate norm. In addition, due to non-orthogonality, Π may actually amplify certain error modes that are in the range of interpolation. Relaxation must then not only be complementary to interpolation, but also rapidly eliminate any error amplified by the non-orthogonal correction, or the algorithm may diverge. This note develops analytic formulae on how to construct “compatible” transfer operators in nonsymmetric AMG such that ‖Π‖ = 1 in any standard matrix-induced norm. Discussion is provided on different options for norm in the nonsymmetric setting, the relation between “ideal” transfer operators in different norms, and insight into the convergence of nonsymmetric reduction-based AMG. § BACKGROUND Algebraic multigrid (AMG) is a fixed-point iterative method to solve large sparse linear systems A𝐱=𝐛, based on two parts: relaxation and coarse-grid correction. These two parts are designed to attenuate complementary error modes, together resulting in rapid convergence. Relaxation takes the form of a standard fixed-point iteration, 𝐱_k+1 = 𝐱_k + Q^-1(𝐛 - A𝐱_k), where Q^-1 is some relatively easy to compute approximation to A^-1. Coarse-grid correction is a subspace correction defined by interpolation and restriction operators, R and P, 𝐱_k+1 = 𝐱_k + P(R^*AP)^-1R^*(𝐛 - A𝐱_k). Here, R^* restricts the residual to a subspace, a surrogate coarse-grid operator, R^*AP, is inverted in the subspace, and P interpolates the coarse-grid solution as a residual correction on the fine grid. If the coarse-grid operator, 𝒦 := R^*AP, is too big to invert directly, the algorithm is called recursively and AMG is applied to (approximately) solve (R^*AP)𝐱_c = R^*𝐫. In this work, we focus on the CF-splitting form of AMG, where DOFs are partitioned into C-points and F-points, and we assume that R and P have an identity over the C-point block, that is, A = [ A_ff A_fc; A_cf A_cc ], R = [ Z; I ], P = [ W; I ]. As discussed later, this is not technically necessary, but makes the analysis more tractable and practical. In analyzing AMG methods, one typically considers bounding error propagation in some norm. Error propagation of relaxation and coarse-grid correction, respectively, take the forms 𝐞_k+1 = (I - Q^-1A)𝐞_k, 𝐞_k+1 = (I - P(R^*AP)^-1R^*A)𝐞_k := (I - Π)𝐞_k, where Q≈ A is easy to invert, and Π denotes the projection corresponding to coarse-grid correction. For symmetric positive definite matrices, letting R = P implies ‖Π‖_A = 1. In the nonsymmetric setting, the A-norm is not well-defined, and it is invariably the case that ‖Π‖ > 1 in any known inner product. We refer to the property of ‖Π‖∼𝒪(1), independent of problem size or mesh spacing, as being a “stable” coarse-grid correction <cit.>. Constructing R and P such that Π is stable is a fundamental difficulty with nonsymmetric AMG, and the motivation for this work.
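For concreteness, a minimal NumPy sketch of the objects above: a CF-split matrix, transfer operators with identity C-point blocks, the coarse-grid correction projection Π, and its norm in a chosen SPD M-inner product. The matrix, splitting, and F-blocks Z, W are arbitrary placeholders rather than operators from any particular discretization.

import numpy as np

rng = np.random.default_rng(0)
n, nc = 8, 3                                   # fine DOFs and C-points (illustrative sizes)
nf = n - nc
A = rng.normal(size=(n, n)) + n * np.eye(n)    # generic nonsymmetric, well-conditioned A

# CF ordering: first nf rows/cols are F-points, last nc are C-points.
Z = rng.normal(size=(nf, nc))                  # restriction F-block (placeholder)
W = rng.normal(size=(nf, nc))                  # interpolation F-block (placeholder)
R = np.vstack([Z, np.eye(nc)])
P = np.vstack([W, np.eye(nc)])

# Coarse-grid correction projection Pi = P (R^* A P)^{-1} R^* A.
Pi = P @ np.linalg.solve(R.T @ A @ P, R.T @ A)
assert np.allclose(Pi @ Pi, Pi)                # Pi is a projection

def M_norm(B, M):
    # ||B||_M = ||L^T B L^{-T}||_2 with the Cholesky factor M = L L^T.
    L = np.linalg.cholesky(M)
    return np.linalg.norm(L.T @ B @ np.linalg.inv(L.T), 2)

M = A.T @ A                                    # one natural SPD choice in the nonsymmetric setting
print("||Pi||_M =", M_norm(Pi, M))             # generically > 1 for arbitrary Z, W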
The standard goal for an effective AMG algorithm is to develop relaxation and coarse-grid correction schemes that attenuate complementary error modes. However, in the nonsymmetric setting there is a more fundamental question of whether relaxation and coarse-grid correction are even convergent in some reasonable norm. In practice, coarse-grid correction in the nonsymmetric setting will almost certainly be nonorthogonal (and divergent) in any known inner product, meaning ‖Π‖ > 1. This introduces a new consideration, that one wants coarse-grid correction to be as close to orthogonal as possible, in an appropriate norm, and, moreover, emphasizes that relaxation and coarse-grid correction must be complementary. Traditionally, relaxation is effective at attenuating error not in the range of interpolation. For nonsymmetric problems, relaxation must also quickly attenuate error that is amplified by the non-orthogonal coarse-grid correction. In particular, Π may amplify certain error modes that are in the range of interpolation, and relaxation must eliminate this error rapidly, or the algorithm may diverge. Here we focus primarily on the first point: how to construct R and P such that ‖Π‖ is close to one, which reduces the importance of the second. Formally connecting relaxation to coarse-grid correction in a unified framework for nonsymmetric AMG is an eventual goal. A number of works have considered nonsymmetric AMG and Petrov-Galerkin coarse-grid operators, where R≠ P. Much of this work has been aggregation-based <cit.>, but recently several classical CF-AMG variants have been proposed for nonsymmetric problems as well <cit.>. In all cases, theory or heuristics are derived motivating what properties a “good” restriction and interpolation operator should have, and then the two transfer operators are constructed largely independently (nonsymmetric AMG based on approximate ideal restriction (AIR) <cit.> is probably the closest method to considering compatible R and P; there, some compatibility of R and P is taken into consideration in the algorithm design, but not directly during the construction of operators). Recent theory has shed light on the importance of a stable coarse-grid correction in nonsymmetric AMG and developed abstract conditions for stability based on considering the action of R and P in appropriate bases <cit.>. However, to the best of our knowledge, at no point has a formal framework been developed to build R and P together such that they are, in some sense, compatible. It has been recognized that for difficult SPD problems, the choice and construction of coarse grids, relaxation, and interpolation must be done in conjunction, leading to the development of compatible relaxation techniques <cit.>. Here, we develop a similar framework for compatible transfer operators in nonsymmetric AMG. We begin by developing background theory on projections in <Ref>. Given an M-induced inner product for SPD matrix M, and a restriction or interpolation operator, we refer to a compatible restriction/interpolation operator as one such that ‖Π‖_M = ‖P(R^*AP)^-1R^*A‖_M = 1 in the induced norm. <Ref> develops necessary and sufficient conditions on R and P such that ‖Π‖_M = 1 in any M-induced matrix norm (including M = I ↔ℓ^2-norm), followed by closed forms for compatible interpolation and restriction operators for the common choices of M = I, M=√(A^*A), and M = A^*A, and a relation between the “ideal” multigrid transfer operators in different norms.
Further observations are provided in <Ref>, including observations on the compatibility of relaxation with non-orthogonal projections in AIR. Several novel AMG methods were explored based on the theory developed herein, but the numerical results were not consistently superior compared with current state of the art and, thus, are not provided. The hope is that this work may lay initial groundwork for future algorithmic or theoretical developments in nonsymmetric AMG.

§ LINEAR ALGEBRA AND PROJECTIONS

Given any linear operator in finite dimensions we have the following. Let A be an m×n matrix, with ℛ(A) the range of A, ker(A) the null space of A, A^* the conjugate transpose (adjoint) of A, and 𝒳^⊥ the orthogonal complement of a subspace 𝒳. The following follow from the four fundamental subspace theorem: ℛ(A)^⊥ = ker(A^*), ℛ(A) ⊥ ker(A^*), ker(A)^⊥ = ℛ(A^*), ℛ(A^*) ⊥ ker(A). Let M be SPD and define the M-inner product and associated symbols ⟨x, y⟩_M = ⟨Mx, y⟩, ‖x‖_M^2 = ⟨Mx, x⟩, x ⊥_M y ⇔ ⟨x, y⟩_M = 0, ⟨Ax, y⟩_M = ⟨x, A^† y⟩_M, A^† = M^-1 A^* M. For any subspace 𝒳, let 𝒳^⊥_M = { x : ⟨x, y⟩_M = 0 ∀ y ∈𝒳}. It is clear that 𝒳^⊥_M = M^-1𝒳^⊥, and we have the following generalization: ℛ(A) ⊥_M ker(A^†), ℛ(A^†) ⊥_M ker(A).

§.§ Projections

The operator Π is a projection if Π^2 = Π. Then, (I-Π) is also a projection and ℛ(I-Π) = ker(Π), ker(I-Π) = ℛ(Π). For any SPD M we have ‖Π‖_M = sup_{x≠ 0} ‖Π x‖_M/‖x‖_M, ‖Π‖_M = ‖I-Π‖_M. Consider the following representation of Π. Let n_c = dim(ℛ(Π)). Construct n×n_c matrices V and U such that ℛ(Π) = ℛ(V), ker(Π)^⊥ = ℛ(U) = ℛ(Π^*), V^*V = I, U^*U = I (ℓ^2-inner product), or ℛ(Π) = ℛ(V), ker(Π)^⊥_M = ℛ(U) = ℛ(Π^†), V^*MV = I, U^*MU = I (M-inner product). Then, Π = V(U^*V)^-1 U^*, Π = V(U^†V)^-1U^† = V(U^*MV)^-1 U^*M, ‖Π‖ = ‖(U^*V)^-1‖, ‖Π‖_M = ‖(U^*MV)^-1‖. This last line takes a bit of work, but it is straightforward. Next, let B_R, B_P be any n_c×n_c nonsingular matrices. Define R = UB_R, P = VB_P. Then, we can also write Π = P(R^*P)^-1 R^*, Π = P(R^*MP)^-1 R^*M, ‖Π‖ = ‖B_P (R^*P)^-1B_R^*‖, ‖Π‖_M = ‖B_P (R^*MP)^-1B_R^*‖.

§.§ Orthogonal projections

A projection is orthogonal in the M-inner product if Π = Π^†. In this case, Πx ⊥_M (I-Π)x, and ‖Π‖_M = 1. To see this, write ⟨Π x, (I-Π) x⟩_M = ⟨x, Π^† (I-Π)x⟩_M = ⟨x, Π(I-Π)x⟩_M = 0, and, for every x, ‖x‖_M^2 = ‖Π x + (I-Π)x‖_M^2 = ‖Π x‖_M^2 + ‖(I-Π)x‖_M^2 ≥ ‖Π x‖_M^2. For x∈ℛ(Π), we have ‖x‖_M = ‖Π x‖_M, which yields the result. We are now in a position to prove two simple lemmas.

A projection, Π, is orthogonal in the M-inner product if and only if ℛ(Π) ⊥_M ker(Π).

Assume Π = Π^†. Recall that ker(Π) = ℛ(I-Π). If Π = Π^†, then, for every x, y, ⟨Πx, (I-Π)y⟩_M = ⟨x, Π^†(I-Π)y⟩_M = ⟨x, Π(I-Π)y⟩_M = 0. Thus, ℛ(Π) ⊥_M ker(Π). Now, assume ℛ(Π) ⊥_M ker(Π). For every x, y, 0 = ⟨Πx, (I-Π)y⟩_M = ⟨x, Π^†(I-Π)y⟩_M = ⟨x, (Π^†-Π^†Π)y⟩_M, which implies, for every y, Π^†y = Π^†Πy. If y ∈ker(Π), then Π^† y = 0. Thus, ker(Π) ⊂ker(Π^†). By a dimensionality argument, ker(Π) = ker(Π^†). Similar to above, for every x, y, 0 = ⟨Πx, (I-Π)y⟩_M = ⟨(I-Π^†)Πx, y⟩_M = ⟨(Π-Π^†Π)x, y⟩_M. This implies, for every x, Πx = Π^†Πx. Let x ∈ℛ(Π). Then, x = Π^† x. Thus, ℛ(Π) ⊂ℛ(Π^†). A dimensionality argument yields ℛ(Π) = ℛ(Π^†). Thus, Π = Π^†, which completes the proof.

The following are equivalent, all corresponding to an orthogonal projection: Π = Π^†; Π = M^-1Π^* M; MΠ = Π^* M; MΠ = (MΠ)^*; ℛ(Π) ⊥_M ker(Π) = ℛ(Π^*)^⊥; ℛ(MΠ) ⊥ℛ(Π^*)^⊥; ℛ(MΠ) = ℛ(Π^*); ℛ(Π) ⊥_M ℛ(I-Π); ℛ(MΠ) ⊥ℛ(I-Π). The condition that will be of most use later in this paper is ℛ(MΠ) = ℛ(Π^*).

§.§ Non-orthogonal projections

Projections are geometric in nature. To that end, define the minimal canonical angle, θ_min, between subspaces 𝒳,𝒴, as cos(θ_min^[𝒳,𝒴]_M) = sup_{𝐱∈𝒳, 𝐲∈𝒴} |⟨𝐱,𝐲⟩_M|/(‖𝐱‖_M‖𝐲‖_M), for SPD M and inner product ⟨·,·⟩_M.
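The projection identities of the preceding subsections are easy to confirm numerically. The following NumPy sketch builds Π from M-orthonormal bases V and U as in the representation above and checks Π^2 = Π, ‖Π‖_M = ‖I-Π‖_M and ‖Π‖_M = ‖(U^*MV)^{-1}‖; the sizes and matrices are arbitrary test data.

import numpy as np

rng = np.random.default_rng(1)
n, nc = 10, 4
M = rng.normal(size=(n, n)); M = M @ M.T + n * np.eye(n)   # SPD test matrix
L = np.linalg.cholesky(M)                                   # M = L L^T

def m_orthonormalize(X):
    # Basis for range(X) with columns orthonormal in the M-inner product.
    Q, _ = np.linalg.qr(L.T @ X)
    return np.linalg.solve(L.T, Q)

V = m_orthonormalize(rng.normal(size=(n, nc)))   # range(Pi)
U = m_orthonormalize(rng.normal(size=(n, nc)))   # ker(Pi)^{perp_M} = range(Pi^dagger)

Pi = V @ np.linalg.solve(U.T @ M @ V, U.T @ M)   # Pi = V (U^* M V)^{-1} U^* M

def m_norm(B):
    return np.linalg.norm(L.T @ B @ np.linalg.inv(L.T), 2)

assert np.allclose(Pi @ Pi, Pi)
print(m_norm(Pi), m_norm(np.eye(n) - Pi))                # equal
print(np.linalg.norm(np.linalg.inv(U.T @ M @ V), 2))     # equal to the above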
Note that θ_min^[𝒳,𝒴]_M = θ_min^[𝒳^⊥_M,𝒴^⊥_M]_M and <cit.> ‖Π^*‖_M^2 = ‖Π‖_M^2 = 1/sin^2(θ_min^[ℛ(Π),ker(Π)]_M) = 1 + cot^2(θ_min^[ℛ(Π),ker(Π)]_M). In the case of an orthogonal projection, θ_min^[ℛ(Π),ker(Π)]_M = π/2 and cot(θ_min^[ℛ(Π),ker(Π)]_M) = 0. For non-orthogonal projections, the sup over 𝐱∈ℛ(Π)^⊥_M can be seen as a measure of the non-orthogonality.

For SPD matrix M and projection Π, ‖Π‖_M^2 = 1 + sup_{x∈ker(Π^†)} ‖Πx‖_M^2/‖x‖_M^2 = 1 + sup_{x∈ℛ(Π)^⊥_M} ‖Πx‖_M^2/‖x‖_M^2. Furthermore, sup_{𝐱∈ℛ(Π)^⊥_M} ‖Π𝐱‖_M/‖𝐱‖_M = cot(θ_min^[ℛ(Π),ker(Π)]_M).

Recall, ‖Π‖_M = ‖I-Π‖_M. Given x, let x = x_1 + x_2 where x_1 ∈ℛ(Π) = ker(I-Π) and x_2 ∈ℛ(Π)^⊥_M = ker(Π^†). Note that x_1 ⊥_M x_2. Then, ‖I-Π‖_M^2 = sup_x ⟨(I-Π)x, (I-Π)x⟩_M/⟨x,x⟩_M = sup_x ⟨(I-Π)x_2, (I-Π)x_2⟩_M/(⟨x_1,x_1⟩_M + ⟨x_2,x_2⟩_M) = sup_x (‖x_2‖_M^2 - 2⟨Π x_2, x_2⟩_M + ‖Π x_2‖_M^2)/(‖x_1‖_M^2 + ‖x_2‖_M^2) = sup_x (‖x_2‖_M^2 + ‖Π x_2‖_M^2)/(‖x_1‖_M^2 + ‖x_2‖_M^2) = 1 + sup_{x_2 ∈ℛ(Π)^⊥_M} ‖Π x_2‖_M^2/‖x_2‖_M^2. The relation to the cotangent of the minimum angle between ℛ(Π) and ker(Π) follows from (<ref>). This completes the proof.

§ COMPATIBLE TRANSFER OPERATORS IN AMG

Let A be an n×n, nonsingular, possibly nonsymmetric, matrix. Let R and P be n×n_c interpolation and restriction matrices. Consider the projection corresponding to coarse-grid correction, Π = P(R^*AP)^-1 R^*A. If A is SPD and R = P, then Π is orthogonal in the A-inner product and, thus, ‖Π‖_A = 1. If A is nonsymmetric, then A does not yield an inner product, and it is unclear which norm to measure this in. In this section, we examine how to construct “compatible” transfer operators R and P such that Π is orthogonal in an inner product based on various M. To begin, we take the CF-splitting approach to AMG, where interpolation and restriction are defined as in (<ref>). We want to define F-point blocks Z and W such that Π is orthogonal in the M-inner product. Appealing to <Ref> and <Ref>, this is equivalent to satisfying ℛ(Π) ⊥_M ker(Π). With Π defined as in (<ref>), ker(Π) = ℛ(A^*R)^⊥. We then have equivalent conditions ℛ(Π) ⊥_M ℛ(A^*R)^⊥ ⟺ Mℛ(Π) ⊥ℛ(A^*R)^⊥ ⟺ Mℛ(P) = ℛ(A^*R). We can formalize this in the following lemma.

Let Π = P(R^*AP)^-1 R^*A. Π is M-orthogonal if we can find n_c×n_c, nonsingular matrices, B_P and B_R, such that MPB_P = A^*RB_R.

In the case of R and P in the CF-splitting form of AMG (<ref>), (<ref>) corresponds to: given M, find {Z,W} and {B_R, B_P} such that M [ W; I ] B_P = A^* [ Z; I ] B_R. Note, in principle we only need B_P or B_R, but depending on M, working with one may be more convenient than the other. In the following subsections we consider the natural examples of M=I (ℓ^2-norm), M = A^*A, and M = (A^*A)^1/2.

§.§ M=I (ℓ^2-norm)

Appealing to (<ref>) we seek B_P and B_R such that PB_P = A^*RB_R. Expanding in block form yields [ W; I ] B_P = [ A_ff^*Z + A_cf^*; A_fc^*Z + A_cc^* ] B_R. This can be accomplished by setting B_P = (A_fc^*Z + A_cc^*) and B_R = I to get the condition WA_fc^*Z + WA_cc^* - A_ff^*Z - A_cf^* = 0. Taking the adjoint to work with non-transpose submatrices of A, and rearranging to solve for W given Z or Z given W, yields the two (equivalent) conditions on W and Z such that Π is I-orthogonal: (Z^*A_fc + A_cc)W^* = Z^*A_ff + A_cf, Z^*(A_ff - A_fcW^*) = A_ccW^* - A_cf. Note that the case of classical ideal restriction, Z^* = -A_cfA_ff^-1 <cit.>, yields W=0.

§.§ M=A^*A

Appealing to (<ref>) we seek B_R and B_P such that A^*APB_P = A^*RB_R ⇔ APB_P = RB_R. Expanding in block form yields [ A_ffW+A_fc; A_cfW + A_cc ] B_P = [ Z; I ] B_R. In this case, choose B_P = I and B_R = (A_cfW + A_cc) to arrive at the condition ZA_cfW + ZA_cc - A_ffW - A_fc = 0.
Rearranging to solve for W given Z or Z given W yields the two (equivalent) conditions on W and Z such that Π is A^*A-orthogonal, (A_ff - ZA_cf)W = ZA_cc - A_fc, Z(A_cfW + A_cc) = A_ffW + A_fc. These equations have a similar structure to the case of M=I (<ref>). Here, note that Z=0 yields W = -A_ff^-1A_fc, which is ideal interpolation.

§.§ M=(A^*A)^1/2

For completeness we consider the norm given by M=(A^*A)^1/2, tying into our theory paper on convergence of nonsymmetric AMG <cit.>. Let A have SVD A = UΣ V^*, where U and V are unitary and the diagonal matrix Σ contains the singular values. Note that M = (A^*A)^1/2 = VΣV^*. Appealing to (<ref>), we seek VΣ V^*PB_P = VΣ U^*RB_R ⇔ V^*PB_P = U^*RB_R. In <cit.>, we compared V^*P and U^*R and proved that if they have the same range, then Π is (A^*A)^1/2-orthogonal, consistent with the framework developed herein.

§.§ Ideal transfer operators

In fact, for a given CF-splitting, the so-called ideal transfer operators can be defined on any matrix M, and there is a certain symmetry to the ideal R and P. Note the identities AA^* = [ A_ffA_ff^* + A_fcA_fc^*   A_ffA_cf^* + A_fcA_cc^*; A_cfA_ff^* + A_ccA_fc^*   A_cfA_cf^* + A_ccA_cc^* ], A^*A = [ A_ff^*A_ff + A_cf^*A_cf   A_ff^*A_fc + A_cf^*A_cc; A_fc^*A_ff + A_cc^*A_cf   A_fc^*A_fc + A_cc^*A_cc ], A^-1 = [ 𝒮_F^-1   -𝒮_F^-1A_fcA_cc^-1; -A_cc^-1A_cf𝒮_F^-1   𝒮_C^-1 ], where 𝒮_C := A_cc - A_cfA_ff^-1A_fc and 𝒮_F := A_ff - A_fcA_cc^-1A_cf denote the two Schur complements of A. Then, a few of the ideal formulae of interest are: Z_ideal^(A^-*) = A_cc^-*A_fc^*, W_ideal^(A^-*) = A_cf^*A_cc^-*, Z_ideal^(I) = 0, W_ideal^(I) = 0, Z_ideal^(A) = -A_cfA_ff^-1, W_ideal^(A) = -A_ff^-1A_fc, Z_ideal^(AA^*) = -(A_cfA_ff^* + A_ccA_fc^*)(A_ffA_ff^* + A_fcA_fc^*)^-1, W_ideal^(A^*A) = -(A_ff^*A_ff + A_cf^*A_cf)^-1(A_ff^*A_fc + A_cf^*A_cc). Notationally, if no operator is specified in superscript, the ideal operator of A is implied; for example, Z_ideal = Z_ideal^(A) = -A_cfA_ff^-1. Also note that for general M, Z_ideal^(M^*) = [W_ideal^(M)]^*.

The following corollary introduces the symmetry in ideal operators and orthogonal projections. For a given CF-splitting, let R_ideal(M) denote the ideal restriction operator based on matrix M, and likewise for P_ideal(M). Then, the following pairs of R and P are such that Π is orthogonal in the I-, A-, and A^*A-norms. For example, R_ideal(AA^*) and P_ideal(A) correspond to an orthogonal projection in the I-norm. Note, the A-norm assumes that A is SPD, while the I- and A^*A-norms do not.

I-orthogonal projection: {R_ideal(I), P_ideal(A^-*)}, {R_ideal(A), P_ideal(I)}, {R_ideal(AA^*), P_ideal(A)}.
A-orthogonal projection: {R_ideal(A^-*), P_ideal(A^-*)}, {R_ideal(I), P_ideal(I)}, {R_ideal(A), P_ideal(A)}, {R_ideal(AA^*), P_ideal(A^*A)}.
A^*A-orthogonal projection: {R_ideal(A^-*), P_ideal(I)}, {R_ideal(I), P_ideal(A)}, {R_ideal(A), P_ideal(A^*A)}.

All relations follow by plugging the ideal operators given in (<ref>) into the formulae for orthogonal projections given in (<ref>) and (<ref>).
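Both the compatibility conditions of the preceding subsections and the corollary are straightforward to verify numerically. The sketch below (arbitrary test matrices, real arithmetic) solves the A^*A-condition for W given a random Z, and separately checks the corollary's example pair {R_ideal(AA^*), P_ideal(A)}.

import numpy as np

rng = np.random.default_rng(2)
nf, nc = 6, 3
n = nf + nc
A = rng.normal(size=(n, n)) + n * np.eye(n)          # generic nonsymmetric test matrix
Aff, Afc, Acf, Acc = A[:nf, :nf], A[:nf, nf:], A[nf:, :nf], A[nf:, nf:]

def cgc(Z, W):
    # Coarse-grid correction Pi = P (R^T A P)^{-1} R^T A with R = [Z; I], P = [W; I].
    R = np.vstack([Z, np.eye(nc)]); P = np.vstack([W, np.eye(nc)])
    return P @ np.linalg.solve(R.T @ A @ P, R.T @ A)

def norm_in(M, B):
    L = np.linalg.cholesky(M)
    return np.linalg.norm(L.T @ B @ np.linalg.inv(L.T), 2)

# 1) A^*A-compatible interpolation from an arbitrary Z: (A_ff - Z A_cf) W = Z A_cc - A_fc.
Z = rng.normal(size=(nf, nc))
W = np.linalg.solve(Aff - Z @ Acf, Z @ Acc - Afc)
print(norm_in(A.T @ A, cgc(Z, W)))                   # ~1: A^*A-orthogonal

# 2) Corollary example: R_ideal(AA^*) paired with P_ideal(A) gives an l2-orthogonal Pi.
B = A @ A.T
Z_aat = -np.linalg.solve(B[:nf, :nf], B[:nf, nf:])   # F-block of R_ideal(AA^*)
W_a = -np.linalg.solve(Aff, Afc)                     # F-block of P_ideal(A)
print(np.linalg.norm(cgc(Z_aat, W_a), 2))            # ~1: I-orthogonal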
§ OBSERVATIONS Although the pair of matrices R and P that project orthogonally onto ℛ(P) in a given norm are unique, the norm itself is not unique. For example, recall that R_ideal and interpolation P with W = 0 is orthogonal in the I-norm. It is easily verified that this pair is also orthogonal in the norm induced by M = [ A_ff^*A_ff 0; 0 A_cc^*A_cc ]. We can also develop norms in which a specific R and P are orthogonal. For example, consider R_ideal and P_ideal in the nonsymmetric setting. Although these operators do not lead to an I- or A^*A-orthogonal projection, there are, in fact, many norms in which the resulting projection is orthogonal. From (<ref>), orthogonality requires M such that MP_idealB_P = A^*R_idealB_R for some B_P and B_R. Expanding, this corresponds to [ M_ff M_fc; M_fc^* M_cc ][ -A_ff^-1A_fc; I ] B_P = [ 0; 𝒮^* ]B_R, where S = A_cc-A_cfA_ff^-1A_fc is the Schur complement of A. Let B_P = I and B_R = S^-*(M_cc - M_fc^*A_ff^-1A_fc). Then, if we set M_fc = M_ffA_ff^-1A_fc, as long as M is SPD such that it induces a norm, we satisfy (<ref>) and Π is M-orthogonal. Interestingly, there does not appear to be a nonsymmetric generalization of the A-norm such that all four relations in <Ref> hold. With significant algebra following the above framework, one can derive necessary and sufficient conditions on M such that the pairs {R_ideal(A^-*), P_ideal(A^-*)}, {R_ideal(I), P_ideal(I)}, and {R_ideal(A), P_ideal(A)} all correspond to an M-orthogonal coarse-grid correction, and these conditions do not have a general solution. As clear from <Ref>, results here are not constrained to the case of CF-splitting AMG with identities over the C-point block in transfer operators (<ref>). Suppose R and P take the more general forms R = [ Z; Y ], P=[ W; Q ]. We can still appeal to <Ref> and expand as we have done for various norms. However, this approach introduces additional freedom compared with the classical AMG approach of Y = I, so its benefit is not immediately clear without incorporating further constraints. From the operator pairs in <Ref>, it is easy to see that an approximation property, on R or P, and stability (or an orthogonal projection) does not imply an approximation property on the other transfer operator. For many scalar problems, ideal interpolation and restriction have fairly good approximation properties, but in general Z=0 or W = 0 have no approximation property, despite coupling for an I- or A^*A-orthogonal projection with R_ideal or P_ideal, respectively. In <cit.>, an example is used to prove that an approximation property on both P and R does not guarantee a stable coarse-grid correction. Thus, the three measures, stability and two approximation properties, are largely independent and each must be given proper consideration. As discussed in <Ref>, in nonsymmetric problems, relaxation must quickly attenuate error that is amplified by the non-orthogonal coarse-grid correction. It turns out that the nonsymmetric AMG method based on approximate ideal restriction (AIR) <cit.> is designed in such a way that this property can be satisfied. Recall from <cit.>, error propagation of coarse-grid correction can be written as (I-Π)𝐞 = [ 𝐞_f - W𝐞_c; 0 ] - P(RAP)^-1RA [ 𝐞_f - W𝐞_c; 0 ] If R = R_ideal, R^*A [ 𝐱; 0 ] = 0 for all 𝐱, and the latter term in (<ref>) is zero. Error is then zero at C-points, and only nonzero at F-points. For W = 0, the projection is orthogonal, while for W≠0, error can only be amplified at F-points. 
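This construction of norms is easy to sanity-check numerically: pick arbitrary SPD blocks, set M_fc = M_ff A_ff^{-1} A_fc, and confirm that the coarse-grid correction built from R_ideal and P_ideal has unit norm in the induced M-norm. A small NumPy sketch with arbitrary test data (the padding of the C-block below is only there to keep M positive definite):

import numpy as np

rng = np.random.default_rng(3)
nf, nc = 6, 3
n = nf + nc
A = rng.normal(size=(n, n)) + n * np.eye(n)
Aff, Afc, Acf, Acc = A[:nf, :nf], A[:nf, nf:], A[nf:, :nf], A[nf:, nf:]

# Ideal transfer operators based on A itself.
W_ideal = -np.linalg.solve(Aff, Afc)           # P_ideal = [W_ideal; I]
Zs_ideal = -Acf @ np.linalg.inv(Aff)           # Z^* = -A_cf A_ff^{-1}, so R_ideal = [Z^{*T}; I]
P = np.vstack([W_ideal, np.eye(nc)])
R = np.vstack([Zs_ideal.T, np.eye(nc)])
Pi = P @ np.linalg.solve(R.T @ A @ P, R.T @ A)

# Build an SPD M with M_fc = M_ff A_ff^{-1} A_fc.
Mff = rng.normal(size=(nf, nf)); Mff = Mff @ Mff.T + nf * np.eye(nf)
Mcc = rng.normal(size=(nc, nc)); Mcc = Mcc @ Mcc.T + nc * np.eye(nc)
Mfc = Mff @ np.linalg.solve(Aff, Afc)
Mcc = Mcc + Mfc.T @ np.linalg.solve(Mff, Mfc)  # keep the Schur complement, hence M, SPD
M = np.block([[Mff, Mfc], [Mfc.T, Mcc]])

L = np.linalg.cholesky(M)
print(np.linalg.norm(L.T @ Pi @ np.linalg.inv(L.T), 2))   # ~1: Pi is M-orthogonal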
If R ≠ R_ideal, but is a very good approximation, the second term is likely to remain small, and error (particularly that amplified by the non-orthogonal projection) will still primarily reside on F-points. Indeed, AIR is used with a post-F-relaxation (sometimes followed by some combination of F/C-relaxation), which exactly focuses on the space in which the non-orthogonal projection amplifies error (that is, on F-points). Thus, although relaxation is not orthogonal to the oblique range of coarse-grid correction (as it is for R=R_ideal and F-relaxation), effective F-relaxation coupled with a good approximation to R_ideal is very likely to be complementary in exactly the way we need it, and provides a more intuitive understanding of the effectiveness of AIR applied to hyperbolic and advection-dominated problems.

§.§ Acknowledgements

Los Alamos National Laboratory report LA-UR-23-26552.
http://arxiv.org/abs/2307.04413v1
20230710083640
Quantum Zeno effect: a qutrit controlled by a qubit
[ "Komal Kumari", "Garima Rajpoot", "Sudhir Ranjan Jain" ]
quant-ph
[ "quant-ph" ]
Quantum Zeno effect: a qutrit controlled by a qubit Komal Kumari, Garima Rajpoot, and Sudhir Ranjan Jain ============================================================================================== For a three-level system monitored by an ancilla, we show that the quantum Zeno effect can be employed to control quantum jumps for error correction. Further, we show that we can realize a cNOT gate, and effect dense coding and teleportation. We believe that this work paves the way to generalize the control of a qudit. § INTRODUCTION Quantum errors can be corrected only by developing methods to control quantum jumps. Recently, the quantum Zeno effect <cit.> has been employed to delay spontaneous emission, giving us time to detect possible erroneous jumps. Moreover, to observe and hence control quantum jumps, QZE has been shown to realize Dehmelt-like shelving <cit.>. This work was inspired by a very interesting and important experiment on “catching" and “reversing" a quantum jump by Minev et al. <cit.>. To take these thoughts further for realistic applications, we need to show this method of control for multi-level systems. Here we take the next step and consider a three-level system which has the possibility of three distinct frequencies ω_12, ω_23 and ω_13. One of these levels is monitored by a detector: a two-level ancillary qubit <cit.>. In contrast to the control of a two-level system where there is just one frequency, here there are three frequencies. Thus there are multiple time-scales under consideration. The aim of this article is to study the possibility of controlling spontaneous errors and shelving in the sense of Dehmelt and improvised in <cit.>. The plan of the paper is as follows. In Section 2.1, we state the problem and present the principle of least action approach relevant to our physical situation. This is based on the mathematical treatment of an n-level system, the details of which are reviewed in the Appendix. The solution of the evolution equation of the density matrix in terms of coordinates and conjugate momenta is shown. In Section 2.2, the construction of a cNOT gate using a three-level system is explained. It is interesting to see that the three-level system considered here can be related to dense coding and teleportation, explained in Sections 2.3 and 2.4. § QUTRIT DYNAMICS We have a three-level system, i.e., a qutrit, with levels |1⟩, |2⟩ and |3⟩ and transition frequencies ω_12, ω_23 and ω_31. For a three-level system, N=3, the density matrix is ρ=1/3𝕀̂+1/2∑_i=1^8x_ix̂_i, where 1≤ j<k≤ N, 1≤ l≤ N-1 <cit.>. For a detailed description, see Appendix. The operators are x̂_1 = û_12 = |1⟩⟨2|+|2⟩⟨1| x̂_2 = v̂_12 = -ι(|1⟩⟨2|-|2⟩⟨1|) x̂_3 = ŵ_1 = |1⟩⟨1|-|2⟩⟨2| x̂_4 = û_13 = |1⟩⟨3|+|3⟩⟨1| x̂_5 = v̂_13 = -ι(|1⟩⟨3|-|3⟩⟨1|) x̂_6 = û_23 = |2⟩⟨3|+|3⟩⟨2| x̂_7 = v̂_23 = -ι(|2⟩⟨3|-|3⟩⟨2|) x̂_8 = ŵ_2 = √(1/3)(|1⟩⟨1|+|2⟩⟨2|-2|3⟩⟨3|). The density operator in the matrix form is ρ̂ =[ 1/3+x_3/2+x_8/√(3) 1/2(x_1-ιx_2) 1/2(x_4-ιx_5); 1/2(x_1+ιx_2) 1/3-x_3/2+x_8/√(3) 1/2(x_6-ιx_7); 1/2(x_4+ιx_5) 1/2(x_6+ιx_7) 1/3-2x_8/√(3) ]. §.§ Monitoring a single level Consider that the qutrit is interacting with an ancilla, a two-level system prepared initially in the state |0⟩ of σ_z, Fig. <ref>. The ancilla monitors the third level of the qutrit with a coupling strength J_3=√(α_3/δ t), where α_3 is a stochastic parameter related to the frequency of the detector. The qutrit+ancilla system evolves for a time δ t and then its σ_y operator is measured.
If the outcome of measurement is 0, qutrit is in state |1⟩ or |2⟩. This evolution and measurement is performed n times for a total time of T=nδ t. The ancilla is reset after every measurement. The Hamiltonian of the qutrit+ancilla system is H =H_s+H_s-d =ω_12(|1⟩⟨2|+|2⟩⟨1|) + ω_23(|2⟩⟨3|+|3⟩⟨2|)+ ω_13(|1⟩⟨3|+|3⟩⟨1|)+J |3⟩⟨3|⊗σ_y^(3), where H_s-d=J|3⟩⟨3|⊗σ_y^(3), denoting that the state |3⟩ is entangled with the ancilla and a measurement of the y observable of the ancilla. The Kraus operators for measurement are given by ℳ_r =⟨r|exp[-ιH_s-dδt]|0⟩ = ⟨r|𝕀-ιH_s-d δt -1/2H_s-d^2 (δt)^2|0⟩ ℳ_0 = 𝕀-α_3/2|3⟩⟨3|δt ℳ_1 =√(α_3δt)|3⟩⟨3|. Upon unitary evolution of system via the operator 𝒰=exp-ι H_sδ t and measurements post-selected on t=0, we obtain ρ(t+δ t)=ℳ^0 𝒰ρ𝒰^†ℳ^0†/Tr[ℳ^0 𝒰ρ𝒰^†ℳ^0†]. By extremising the action obtained for the Joint Probability Distribution Function (JPDF) for the system, we obtain eight coupled equations, their canonical conjugates, and a functional ℱ incorporating the back-action of measurement performed by the detector <cit.> ẋ_1 =ω_23x_5+ω_13x_7+1/3α_3x_1(1-2√(3)x_8) ẋ_2 =-2ω_12x_3-ω_23x_4+ω_13x_6+α_3/3x_2(1-2√(3)x_8) ẋ_3 =2ω_12x_2+ω_13x_5-ω_23x_7+α_3/3x_3(1-2√(3)x_8) ẋ_4 = ω_23x_2-ω_12x_7-α_3/6x_4(1+4√(3)x_8) ẋ_5 = -ω_23x_1 +ω_12x_6 - ω_13(x_3+2√(3)x_8)-α_3/6x_5(1+4√(3)x_8) ẋ_6 = -ω_13x_2-ω_12x_5-α_3/6x_6(1+4√(3)x_8) ẋ_7 = -ω_13x_1+ω_12x_4+ω_23(x_3-2√(3)x_8)-α_3/6x_7(1+4√(3)x_8) ẋ_8 =√(3)/2[ω_13x_5+ω_23x_7+2/9α_3(1-√(3)x_8(1+2√(3)x_8))] The functional ℱ is given by ℱ=-α_3/3x_8(1-2√(3)x_8). The dynamical Hamiltonian is given by ℋ =∑_i=1^8 p_iẋ_̇i̇+ℱ. The canonically conjugate momenta can be derived by Hamilton's equations p_i=-∂ℋ/∂ x_i. Thus we obtain the coupled equations: ṗ_1 = -α_3/3(1-2√(3)x_8)p_1+ω_23p_5+ω_13p_7 ṗ_2 = -α_3/3(1-2√(3)x_8)p_2-2ω_12p_3-ω_23p_4 +ω_13p_6 ṗ_3 = 2ω_12p_2-α_3/3(1-2√(3)x_8)p_3+ω_13p_5-ω_23p_7 ṗ_4 = ω_23p_2+α_3/6 (1+4√(3)x_8)p_4 ṗ_5 = ω_23 p_1 -ω_13p_3+α_3/6 (1+4√(3)x_8)p_5+ω_12p_6-√(3)/2ω_13p_8 ṗ_6 =-ω_13p_2-ω_12p_5+α_3/6(1+4√(3)x_8)p_6 ṗ_7 = -ω_13p_1+ω_23p_3+ω_12p_4+α_3/6(1+4√(3)x_8)p_7-√(3)/2ω_23p_7 ṗ_8 =2/√(3)α_3(x_1p_1+x_2p_2+x_3p_3+x_4p_4+x_5p_5+x_6p_6+x_7p_7+2x_8p_8) +2√(3)(ω_13p_5+ω_23p_7)+α_3/3(p_8+1)-4/√(3)α_3x_8. The dynamics of the position coordinates of the qutrit with time are shown in Fig. <ref>. When the detection frequency is less compared to all the transition frequencies of the system, the dynamics shows continuous oscillations, Fig. <ref> (a). In an intermediate frequency, the system shows oscillations for some time, after which, it gets arrested in a particular state, Fig. <ref> (b). When the detection frequency is higher compared to all the transition frequencies of the system, the Zeno regime sets in, Fig. <ref> (c). Each coordinate freezes at a particular value around a time t=6 and the system does not evolve any further. The phase space dynamics of the qutrit are plotted in Figs. <ref> and <ref>, for a frequency lower and higher than the transition frequencies, respectively. In Fig. <ref>, for each coordinate, the qutrit shows evolution in the phase-space. However, in the Zeno regime, Fig. <ref>, it is evident that localization in x(p) is accompanied by delocalization of p(x). This shows that the system is shelved to a state. In terms of stability, localization in x or p corresponds to stability along that coordinate. It is clear that both x and p are not stable simultaneously, hence the points are saddle points, as in <cit.>. 
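As a concrete illustration, the following minimal NumPy sketch builds H_s and the two Kraus operators above and iterates the postselected update for ρ(t+δ t); the frequencies, α_3 and the time step are arbitrary illustrative numbers, not values used in the figures.

import numpy as np

def ket(i):
    v = np.zeros(3, dtype=complex); v[i] = 1.0   # basis kets |1>, |2>, |3>
    return v

w12, w23, w13 = 1.0, 0.7, 0.4          # transition frequencies (illustrative)
alpha3, dt = 20.0, 1e-3                # detector parameter and time step (illustrative)

P3 = np.outer(ket(2), ket(2).conj())   # |3><3|
Hs = (w12 * (np.outer(ket(0), ket(1).conj()) + np.outer(ket(1), ket(0).conj()))
      + w23 * (np.outer(ket(1), ket(2).conj()) + np.outer(ket(2), ket(1).conj()))
      + w13 * (np.outer(ket(0), ket(2).conj()) + np.outer(ket(2), ket(0).conj())))

# Kraus operators of the weak measurement of |3><3|, to first order in dt.
M0 = np.eye(3) - 0.5 * alpha3 * dt * P3
M1 = np.sqrt(alpha3 * dt) * P3
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(3), atol=(alpha3 * dt)**2)

# Exact one-step unitary U = exp(-i Hs dt) via the spectral decomposition of Hs.
evals, evecs = np.linalg.eigh(Hs)
U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T

def step(rho):
    # One postselected (r = 0) step: rho -> M0 U rho U^dag M0^dag, renormalized.
    sigma = M0 @ U @ rho @ U.conj().T @ M0.conj().T
    return sigma / np.trace(sigma).real

rho = np.outer(ket(0), ket(0).conj())  # start in |1>
for _ in range(2000):
    rho = step(rho)
print(np.diag(rho).real)               # level populations after t = 2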
§.§ Creating a cNOT gate The three-level system can be used as a control and the ancilla as a target such that when the system is in |1⟩ or |2⟩, it does nothing to the ancilla (ancilla stays in initial state |0⟩_(n), whereas flips the ancilla to |1⟩_(n) when qutrit is in |3⟩. Such a gate can be represented as cNOT =(|1⟩⟨1|+|2⟩⟨2|)⊗𝕀̂ + |3⟩⟨3|⊗σ_x^(n). The states on which the cNOT acts are |1,0⟩, |2,0⟩ or |3,0⟩, where the first state is the qutrit state which controls the target ancilla initially in the state |0⟩. When cNOT acts on |3,0⟩, it gives |3,1⟩ and leaves the others unchanged. §.§ Dense coding and teleportation Some of the applications of entangled pairs are dense coding and teleportation. Dense coding uses one quantum bit together with a shared EPR pair to encode and transmit two classical bits <cit.>. Without using entanglement, only one classical bit of information can be extracted. Teleportaion is the opposite of dense coding as it uses two classical bits to transmit the state of an unknown qubit. The initial setup for both includes two parties, Alice and Bob who wish to communicate. Each is sent one of the entangled particles of an EPR pair |ψ_0⟩=1/√(2) (|0⟩_A|0⟩_B+|1⟩_A|1⟩_B). Each can perform transformations only on their particle unless they send over their particle. Dense coding: Alice wants to transmit the state of two classical bits encoding one of the numbers {0,1,2,3}, depending on which, she performs one of the transformations {I,X,Y,Z} on her qubit of |ψ_0⟩. The resulting state is shown in table <ref>. Bob decodes the information in two steps: cNOT to the entangled pair followed by Hadamard H on the first qubit: Bob finally measures the two qubits to obtain the binary encoding sent by Alice. Quantum teleportation: Due to the no-cloning theorem, the original state is destroyed and finally created at the target, hence the name teleportation. Alice has an qubit with unknown state |ϕ⟩=a|0⟩+b|1⟩. Both Alice and Bob share a part of the EPR pair just like in dense coding (<ref>). The initial state is then the three-qubit state: |ψ⟩⊗|ψ_0⟩ =1/√(2)(a|0⟩⊗(|00⟩+|11⟩)+b|1⟩⊗(|00⟩+|11⟩)) =1/√(2)(a|000⟩+a|011⟩+b|100⟩+b|111⟩). Alice controls the first two qubits and Bob controls the third. Alice uses the decoding step used by Bob in dense coding to the first two qubits in (<ref>), i.e., cNOT on first two followed by Hadamard on first qubit (H⊗I⊗I) (cNOT⊗I)(|ψ⟩⊗|ψ⟩) =(H⊗I⊗I)1/√(2)(a|000⟩+a|011⟩+b|110⟩+b|101⟩) =1/2[a(|000⟩+|011⟩+|100⟩+|111⟩)+b(|010⟩+|001⟩-|110⟩-|101⟩)] =1/2(|00⟩(a|0⟩+b|1⟩)+|01⟩(a|1⟩+b|0⟩)+|10⟩(a|0⟩-b|1⟩)+|11⟩(a|1⟩-b|0⟩)). Upon measuring the first two qubits, Alice obtains one of the four states |00⟩, |01⟩, |10⟩ or |11⟩, depending upon which, Bob's qubit is projected to one of the four states a|0⟩+b|1⟩, a|1⟩+b|0⟩, a|0⟩-b|1⟩ or a|1⟩-b|0⟩. Alice sends her result as two classical bits to Bob. The original state |ϕ⟩ is contained in Bob's qubits. Upon receiving the two bits, Bob reconstructs the state by applying decoding transformation to his qubit: Bob will finally have the qubit Alice wished to send. §.§ Applications of entanglement using three-level system We have considered a three-level system where the third level is being monitored by an ancilla. For communication and teleportation using the qutrit, we need to have two of the states acting as ground and the third, which is being monitored as the higher level. This will enable us to create a cNOT gate for the qutrit. 
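The two-qubit dense-coding and teleportation algebra reviewed above is easy to verify numerically. The following minimal NumPy sketch applies the decoding step (cNOT on Alice's pair, then Hadamard on the first qubit) and prints Bob's conditional state for each of Alice's four outcomes, reproducing a|0⟩+b|1⟩, a|1⟩+b|0⟩, a|0⟩-b|1⟩ and a|1⟩-b|0⟩; the amplitudes a, b are arbitrary.

import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

a, b = 0.6, 0.8                               # arbitrary unknown amplitudes, |a|^2 + |b|^2 = 1
phi = np.array([a, b])                        # Alice's unknown qubit
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2) shared by Alice and Bob

state = np.kron(phi, epr)                     # qubit order: Alice 1, Alice 2, Bob
state = np.kron(CNOT, I2) @ state             # cNOT on Alice's two qubits
state = np.kron(np.kron(H, I2), I2) @ state   # Hadamard on the first qubit

psi = state.reshape(2, 2, 2)                  # indices (Alice 1, Alice 2, Bob)
for m1 in range(2):
    for m2 in range(2):
        bob = psi[m1, m2, :]
        bob = bob / np.linalg.norm(bob)
        print(f"Alice measures |{m1}{m2}>  ->  Bob holds {np.round(bob, 3)}")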
Further, we need the regular Pauli operators corresponding to this setup, such that the bit-flip operator acts on the states as X_13|1⟩=|3⟩, X_23|2⟩=|3⟩, X_13+23|3⟩= |1⟩+|2⟩/√(2). Hence, the operators may be written as X_13=[ 0 0 1; 0 0 0; 1 0 0; ] X_23=[ 0 0 0; 0 0 1; 0 1 0; ]. The resulting X operator read as X=X_13+X_23/√(2)=1/√(2)[ 0 0 1; 0 0 1; 1 1 0; ]. We have X|1⟩=1/√(2)[ 0; 0; 1; ]=1/√(2)|3⟩, X|2⟩=1/√(2)[ 0; 0; 1; ]=1/√(2)|3⟩ and X|3⟩=|1⟩+|2⟩/√(2). Similarly, the Y operator is Y=1/√(2)[ 0 0 1; 0 0 1; -1 -1 0; ], with Y|1⟩=1/√(2)[ 0; 0; -1; ]=-1/√(2)|3⟩, Y|2⟩=1/√(2)[ 0; 0; -1; ]=-1/√(2)|3⟩ and Y|3⟩=|1⟩+|2⟩/√(2). The phase operator should act as Z(|1⟩+|2⟩/√(2))=(|1⟩+|2⟩/√(2)) and Z|3⟩=-|3⟩. That is, Z=1/√(2)[ 1 0 0; 0 1 0; 0 0 -√(2); ]. The cNOT gate is given by cNOT=(|1⟩⟨1|+|2⟩⟨2|)⊗ I^(n)+|3⟩⟨3|⊗σ_x^(n), where superscript (n) represents the ancilla. To find the Hadamard operator, note that |ψ_0⟩ =1/√(2)((|1⟩+|2⟩)/√(2)+|3⟩) H 1/√(2)((|1⟩+|2⟩)/√(2)+|3⟩)=|1⟩+|2⟩/√(2) H 1/√(2)((|1⟩+|2⟩)/√(2)-|3⟩)=|3⟩. These are effected by the Hadamard gate: H=1/2√(2)[ 1 1 √(2); 1 1 √(2); √(2) √(2) -2; ]. Now we have a set of operators at our disposal, acting as gates on this three-level system for dense coding and teleportation. Dense coding: Alice encodes the digits {0,1,2,3} in state |ψ_0⟩ and performs transformations on her part of the state. Let the states of ancilla be {|g⟩,|e⟩}, the eigenstates of σ_z. These are entangled with the qutrit to parallel the EPR pair of qubits. Then, Bob decodes using cNOT followed by Hadamard on the (first) qutrit. Here, the cNOT has control as the three-level system and target as a two level system. Hence, the flip operator will be the usual 2D Pauli σ_x. This is shown in table <ref>. Teleportation: Alice has an unknown qubit |ϕ⟩=a|g⟩+b|e⟩ (ancilla). She wants to send this to Bob through a classical channel. They each share a part of the state |ψ_0⟩=1/√(2)[|11⟩+|12⟩+|21⟩+|22⟩/2+|33⟩], so that the combined state initially is |ϕ⟩⊗|ψ_0⟩ =1/2√(2)[a(|g11⟩+|g12⟩+|g21⟩+|g22⟩+2|g33⟩) +b(|e11⟩+|e12⟩+|e21⟩+|e22⟩+2|e33⟩)]. Alice controls the first two states in the tensor product in (<ref>) and Bob controls the third state. For the decoding step, Alice applies cNOT (|g⟩⟨g|⊗ I_3+|e⟩⟨e|⊗ X_3) on the first two states of the product followed by Hadamard on the first (H_2⊗I⊗I) (cNOT⊗I)(|ϕ⟩⊗|ψ_0⟩) = (H_2⊗I⊗I)1/2√(2)[a(|g11⟩+|g12⟩+|g21⟩+|g22⟩+2|g33⟩) +√(2)b(|e31⟩+|e32⟩+1/√(2)(|e13⟩+|e23⟩))] = 1/4[a(|g11⟩+|e11⟩+|g12⟩+|e12⟩+|g21⟩+|e21⟩+|g22⟩+|e22⟩+2|g33⟩+2|e33⟩) +√(2)b(|g31⟩-|e31⟩+|g32⟩-|e32⟩+|g13⟩-|e13⟩+|g23⟩-|e23⟩)] = 1/2√(2)[|g1⟩(a(|1⟩+|2⟩)/√(2)+b|3⟩)+|e1⟩(a(|1⟩+|2⟩)/√(2)-b|3⟩) +|g2⟩(a(|1⟩+|2⟩)/√(2)+b|3⟩)+|e2⟩(a(|1⟩+|2⟩)/√(2)-b|3⟩) +|g3⟩(√(2)a|3⟩+ √(2)b(|1⟩+|2⟩)/√(2))+|e3⟩(√(2)a|3⟩- √(2)b(|1⟩+|2⟩)/√(2))]. Thus the final encoded state is |ψ⟩_f =1/2[|g⟩(|1⟩+|2⟩/√(2)){a(|1⟩+|2⟩/√(2))+b|3⟩}+|e⟩(|1⟩+|2⟩/√(2)){a(|1⟩+|2⟩/√(2))-b|3⟩} +|g⟩|3⟩{a|3⟩+b(|1⟩+|2⟩/√(2))}+|e⟩|3⟩{a|3⟩-b(|1⟩+|2⟩/√(2))}] Upon measuring the first two states, Alice will obtain one of the four states mentioned in the first column of Tab. <ref>, which she sends as two classical bits to Bob. Upon receiving them, Bob reconstructs the state by applying a decoding transformation (<ref>) to his part of the product state which contains the unknown state |ϕ⟩. Thus Bob will finally have the qubit state Alice wanted to send. §.§ Monitoring two levels Consider a qutrit interacting with two ancillae. The ancillae are again two-level systems, one of which monitor the state |2⟩ whereas the other monitors the state |3⟩ as shown in Fig. 
<ref>. The interaction strength between qutrit and ancilla monitoring |2⟩ (|3⟩) is J_2=√(α_2/δ t) (J_3=√(α_3/δ t)). The Hamiltonian for this system can be given as H =ω_12(|1⟩⟨2|+|2⟩⟨1|)+ω_23(|2⟩⟨3|+|3⟩⟨2|)+ω_13(|1⟩⟨3|+|3⟩⟨1|)+ H_s-d, where H_s-d = J_2|2⟩⟨2|⊗σ_y^(2)⊗𝕀^(3) + J_3 |3⟩⟨3| ⊗𝕀^(2)⊗σ_y^(3) + (J_2 |2⟩⟨2|+ J_3 |3⟩⟨3|) ⊗σ_y^(2) ⊗σ_y^(3). The Kraus operators are given by ℳ_r =⟨r_1 r_2|exp[-ιH_s-d δt]|00⟩ ℳ_00 = 𝕀 -J_2^2|2⟩⟨2| (δt)^2 -J_3^2|3⟩⟨3| (δt)^2 ℳ_01 = -J_3|3⟩⟨3| δt -ιJ_2^2 |2⟩⟨2| (δt)^2 ℳ_10 = -J_2|2⟩⟨2| δt -ιJ_3^2 |3⟩⟨3| (δt)^2 ℳ_11 = ι(J_2 |2⟩⟨2| +J_3 |3⟩⟨3|) δt. So we have a 2× 2 Kraus operator matrix. The unitary evolution of qutrit under system Hamiltonian H_s and measurement postselected on r=00, we obtain 8 coupled dynamic equations from the density matrix ρ(t+δ t)=ℳ_00𝒰ρ𝒰^†ℳ_00^†/Tr[ℳ_00𝒰ρ𝒰^†ℳ_00^†]. These equations are ẋ_1 = -α_2 x_1x_3 +ω_23x_5 +ω_13x_7+1/3(α_2-2α_3)x_1(2√(3)x_8-1) ẋ_2 = - [2ω_12x_3 + α_2 x_2x_3 + ω_23 x_4-ω_13 x_6 -1/3(α_2-2α_3)x_2(2√(3)x_8-1) ] ẋ_3 = 1/3 [6ω_12x_2 + 2α_3 x_3 +3 ω_23 x_5 -3ω_23x_7-4√(3)α_3 x_3 x_8-α_2(1+x_3)(-2+3x_3-2√(3)x_8)] ẋ_4 =1/3[3ω_23x_2-3ω_12x_7 +α_2 x_4 (2-3x_3+2√(3)x_8) -α_3 x_4 (1+4√(3) x_8) ] ẋ_5 =1/3[-3ω_23x_1+(2α_2-α_3-3α_2 x_3)x_5 +3ω_12x_6+2√(3)(α_2-2α_3)x_5x_8-3ω_13(x_3+2√(3)x_8)] ẋ_6 = [-ω_13 x_2 - ω_12x_5 - 1/3x_6 (α_2+α_3+3α_2 x_3 - 2√(3)α_2 x_8 +4√(3)α_3 x_8) ] ẋ_7 = [-ω_13x_1 +ω_12x_4 -1/3 (α_2+α_3 +3 α_2 x_3)x_7 +2/√(3)(α_2-2α_3)x_7x_8+ω_23 (x_3-2√(3)x_8) ] ẋ_8 = 1/6√(3) [4 α_3 + 9 ω_13 x_5+9 ω_23x_7-4α_3x_8(√(3)+6x_8)+α_3(-2+3x_3+2√(3)(1-3x_3)x_8+12x_8^2)] The functional incorporating the backaction is ℱ=α_2x_3-2/3(α_2+α_3+√(3)α_2x_8-2√(3)α_3x_8). The corresponding conjugate momenta are p_1 =α_2x_3p_1-1/3(α_2-2α_3)(2√(3)x_8-1)p_1+ω_23p_5+ω_13p_7 p_2 =α_2x_3 p_2-1/3(α_2-2α_3)(2√(3)x_8-1)p_2-2ω_12p_3-ω_23p_4+ω_13p_6 p_3 =α_2x_1p_1+2ω_12p_2+α_2x_2p_2-2/3α_3p_3+4√(3)/3α_3x_8p_3+α_2/3(1+6x_3-2√(3)x_8)p_3 +α_2x_4p_4+α_2x_5p_5+ω_13p_5+α_2x_6p_6+α_2x_7p_7-ω_23p_7-α_3/2√(3)p_8+x_8p_8-α_2 p_4 =ω_23p_2-α_2/3(2-3x_3+2√(3)x_8)p_4+α_3/3(1+4√(3)x_8)p_4-ω_12p_7 p_5 =-ω_23p_1-ω_23p_3-1/3(2α_2-α_3-3α_2x_3)p_5-2√(3)/3(α_2-2α_3)x_8p_5+ω_12p_6-√(3)/2ω_13p_8 p_6 =-ω_13p_2-ω_12p_5+1/3(α_2+α_3+3α_2x_3-2√(3)α_2x_8+4√(3)α_3x_8)p_6 p_7 =-ω_13p_1+ω_23p_3+ω_12p_4+1/3(α_2+α_3+3α_2x_3)p_7-2/√(3)(α_2-2α_3)x_8p_7-√(3)/2ω_23p_8 p_8 =-2/√(3)(α_2-2α_3)(x_1p_1+x_2p_2+x_3p_3+x_4p_4+x_5p_5+x_6p_6+x_7p_7-1?)-2/√(3)α_2p_3 +2√(3)(ω_13p_5+ω_23p_7)+2/3α_3(1+2√(3)x_8)p_8-α_3/3(1-3x_3)p_8-4/√(3)α_3x_8p_8. The dynamics of the position coordinates of the qutrit with time are shown in Fig. <ref>. When the detection frequencies of the two detectors are less compared to all the transition frequencies of the system, the dynamics shows continuous oscillations, Fig. <ref> (a). In an intermediate frequency range, the system shows oscillations for some time, after which, it gets arrested in a particular state, Fig. <ref> (b). When the detection frequency is higher compared to all the transition frequencies of the system, the Zeno regime sets in, Fig. <ref> (c). Each coordinate freezes at a particular value around a time t=6, just as in the previous section where a single state was being monitored. The phase space dynamics of the qutrit are plotted in Figs. <ref> and <ref>, for frequencies of both the detectors lower and higher than the transition frequencies, respectively. In Fig. <ref>, for each coordinate, the qutrit shows evolution in the phase-space. However, in the Zeno regime, Fig. 
<ref>, the system follows the uncertainty principle: as soon as the position coordinates are fixed at a particular value, the uncertainty in the momentum coordinates peaks. This also shows that there is a saddle point. The qutrit gets shelved in the position coordinates and is delocalised in the momentum coordinates. § CREATING A TOFFOLI GATE The Kraus operators in (<ref>) indicate that the system may be in state 1 (M_00), state 2 (M_10), state 3 (M_01) or in a combination of 2 and 3, i.e., anywhere but not in state 1 (M_11). This can be interpreted as an operator T = |1⟩⟨1|⊗|1⟩⟨1|⊗(𝕀⊗𝕀) + |2⟩⟨2|⊗|1⟩⟨1|⊗(X⊗𝕀) +|1⟩⟨1|⊗|3⟩⟨3|⊗(𝕀⊗X) + |2⟩⟨2| ⊗|3⟩⟨3|⊗(X⊗X). Consider 𝕀⊗𝕀, 𝕀⊗ X and X⊗𝕀 as giving an outcome of 0 and X⊗ X as equivalent to producing an outcome of 1. The setup can then be interpreted as a Toffoli gate. For instance, if control is (1,1) and target is 0, the state is |1,1,(00)≡ 0⟩. If control is (2,3) and target is 1, the state is |2,3,(11)≡ 1⟩. § CONCLUDING REMARKS Control of a qutrit is shown by monitoring one or two levels. Due to the Quantum Zeno Effect, the state of the system is shown to shelve to a state other than the states of the three-level system. The treatment of a three-level system takes us beyond the Pauli algebra; here we have the Gell-Mann matrices. In addition, we write a new set of operators to realise the cNOT gate with the qutrit as the control and the two-level ancilla as the target. With these operators, the applications of entanglement have been realised in a three-level system in dense coding and teleportation for the purpose of quantum communication. Application of the system to universal gates allows us to manipulate the states. In general, for an N-level system also, the conclusion will hold good. Data Availability Statement: No data associated in the manuscript. Conflict of interests: The authors declare no conflict of interest. § APPENDIX: DENSITY MATRIX OF AN N-LEVEL SYSTEM An N-level system is defined by a Bloch vector whose components are expectation values of some observables <cit.>. The number of observables needed to identify the state is N^2-1. These correspond to N^2-1 independent parameters used to define a Hermitian density matrix operator ρ̂ with a constraint, Trρ̂=1. Choosing the generators of SU(N) for the observables x̂_i, the density matrix is determined from their expectation values ⟨x̂_i⟩'s as ρ=1/N𝕀̂_N + 1/2∑_i=1^N^2-1⟨x̂_i⟩x̂_i. The properties of the density matrix associated with a Hilbert space ℋ_N are given as ρ∈ℒ(ℋ_N) : (i) Trρ=1 (ii) ρ = ρ^† (iii) ρ_i ≥ 0, where ℒ is the space of linear operators on ℋ_N, i=1,2,… N and ρ_i's are the eigenvalues of ρ. The property (iv) Trρ^2≤ 1 follows from Eq. (<ref>). Equality holds when ρ is a pure state. Following these properties, the operators x̂_i satisfy ( i) x̂_i = x̂_i^† ( ii) Tr [x̂_i] = 0 ( iii) Tr [x̂_ix̂_j] = 2δ_ij. The x_i's are characterised by structure constants f_ijk, a completely antisymmetric tensor, and g_ijk, a completely symmetric tensor of the Lie algebra [x̂_i,x̂_j] = 2if_ijk x̂_k {x̂_i,x̂_j} =2/Nδ_ijÎ_N+2g_ijkx̂_k. By imposing (iv), the length of the Bloch vector is restricted as |x|≡√(x_ix_i)≤√(2(N-1)/N). Systematic construction of the generators generalising the Pauli spin operators for an N-level system is given by <cit.> {x̂_i}_i=1^N^2-1 = {û_jk,v̂_jk,ŵ_l} where û_jk = |j⟩⟨k| + |k⟩ ⟨j|, v̂_jk = -ι(|j⟩⟨k| - |k⟩⟨j|), ŵ_l = √(2/l(l+1))( ∑_j=1^l |j⟩⟨j|-l|l+1⟩⟨l+1|), 1≤j < k ≤N, 1≤l ≤N-1.
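A minimal NumPy sketch of this construction, which also checks the normalization Tr[x̂_i x̂_j] = 2δ_ij used above (N is arbitrary; shown here for N = 3):

import numpy as np

def su_generators(N):
    # Generalized Gell-Mann matrices: u_jk, v_jk (1 <= j < k <= N) and w_l (1 <= l <= N-1).
    def E(j, k):                       # |j><k| in a zero-based labeling
        m = np.zeros((N, N), dtype=complex); m[j, k] = 1.0
        return m
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            gens.append(E(j, k) + E(k, j))              # u_jk
            gens.append(-1j * (E(j, k) - E(k, j)))      # v_jk
    for l in range(1, N):
        w = sum(E(j, j) for j in range(l)) - l * E(l, l)
        gens.append(np.sqrt(2.0 / (l * (l + 1))) * w)   # w_l
    return gens

X = su_generators(3)
assert len(X) == 3**2 - 1
gram = np.array([[np.trace(a @ b) for b in X] for a in X])
print(np.allclose(gram, 2 * np.eye(len(X))))            # Tr[x_i x_j] = 2 delta_ij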
For N=2, x̂_1 = û_12 = |1⟩⟨2| + |2⟩ ⟨1| ≡X̂, x̂_2 = v̂_12 = -ι(|1⟩⟨2| - |2⟩⟨1|) ≡Ŷ, x̂_3 = ŵ_1 = |1⟩⟨1| - |2⟩⟨2| ≡Ẑ, where |1⟩=[ 1 0 ]^T and |2⟩=[ 0 1 ]^T and the structure constants are f_ijk = ϵ_ijk (Levi-Civita), g_ijk=0.

B. Misra and E. C. G. Sudarshan, J. Math. Phys. 18, 756 (1977).
H. G. Dehmelt, Bull. Am. Phys. Soc. 20, 60 (1975).
H. G. Dehmelt, IEEE Transactions on Instrumentation and Measurement IM-31, 83 (1982).
Z. Minev et al., Nature 570, 200 (2019).
K. Snizhko, P. Kumar, and A. Romito, Phys. Rev. Res. 2, 033512 (2020).
K. Kumari, G. Rajpoot, S. Joshi, and S. R. Jain, Ann. Phys. 450, 169222 (2023).
G. Kimura, The Bloch vector for N-level systems, Phys. Lett. A 314, 339 (2003).
A. Chantasri, J. Dressel, and A. Jordan, Phys. Rev. A 88, 042110 (2013).
A. Chantasri and A. Jordan, Phys. Rev. A 92, 032125 (2015).
E. Rieffel and W. Polak, Quantum Computing: A Gentle Introduction (The MIT Press, Cambridge, 2011).
F. T. Hioe and J. H. Eberly, N-level coherence vector and higher conservation laws in quantum optics and quantum mechanics, Phys. Rev. Lett. 47, 838 (1981).
J. Pöttinger and K. Lendi, Generalized Bloch equations for decaying systems, Phys. Rev. A 31, 1299 (1985).
K. Lendi, Entropy production in coherence-vector formulation for N-level systems, Phys. Rev. A 34, 662 (1986).
http://arxiv.org/abs/2307.04259v2
20230709201858
Dynamical Wormhole Solutions in Rastall Theory
[ "Yaghoub Heydarzade", "Maryam Ranjbar" ]
gr-qc
[ "gr-qc" ]
Yaghoub Heydarzade ([email protected]) — Department of Mathematics, Faculty of Sciences, Bilkent University, 06800 Ankara, Turkey
Maryam Ranjbar ([email protected]) — Istanbul, Turkey

Dynamical Wormhole Solutions in Rastall Theory
August 12, 2023
==============================================

Wormhole configurations in Einstein's general theory of relativity (GR) require exotic matter sources violating the weak energy condition (WEC). Rastall's theory is a generalization of GR in its matter source, considering a nonconserved energy-momentum (EM) tensor. Hence, on one hand the nature of this generalization of the matter source of the field equations, and on the other hand the possibility of respecting energy conditions for dynamical wormholes in contrast to static ones, motivate us to study the possibility of the existence of wormhole configurations respecting energy conditions, or minimizing the violations of them, in Rastall's modified theory. We derive general analytical solutions considering a constant redshift function and a particular equation of state for the energy density and pressure profiles. We show that because of the modification in the EM source of the field equations, there exist solutions respecting the WEC in the vicinity of the wormhole's throat for specified values of the parameters. Some particular solutions are discussed in detail.

§ INTRODUCTION

Despite the success of Einstein's general relativity (GR) in explaining many gravitational phenomena, it falls short in explaining dark matter and dark energy. To address these issues, modifications of GR have been proposed, e.g., scalar-tensor theories <cit.>, f(R) theories <cit.>, and braneworlds <cit.>. For a comprehensive review, see <cit.>. In 1972, Peter Rastall proposed a modification to Einstein's theory with a nonconserved energy-momentum tensor <cit.>. In his theory, the divergence of the energy-momentum tensor is proportional to the gradient of the Ricci scalar through a proportionality constant <cit.>. Hence, although the standard conservation law of energy-momentum is relaxed, the Bianchi identity still holds. Rastall gravity yields some interesting results; for instance, the late time accelerating expansion of the universe can be explained <cit.>, and de Sitter black hole solutions can be found without explicitly assuming a cosmological constant <cit.>. The question of the equivalence of Rastall gravity to Einstein's theory as a redefinition of the EM tensor was raised in <cit.>. However, it has been shown that the nature of this theory, considering a nonconserved EM source, is not just a redefinition of EM, and it gives different results than GR; see for instance <cit.>. It has been shown recently that a Lagrangian formulation for a Rastall-type theory can be provided in the context of f(R, ℒ_m) and f(R, T) theory <cit.>, where R is the Ricci scalar, ℒ_m is the Lagrangian of matter fields and T is the trace of the energy-momentum tensor. Einstein's general theory of relativity (GR) admits solutions describing geometrical bridges connecting two distant regions of a universe or even two different universes. It was Wheeler who first proposed the term "wormhole" for these geometrical bridges in order to provide a mechanism for having "charge without charge". He claimed that the electric charge emerges as a manifestation of the topology of a space, a sheet with a handle <cit.>.
The interest in these solutions declined over the years until the notion of traversable Lorentzian wormholes was introduced by Morris, Thorne and Yurtsever <cit.>. It was discussed that these structures could allow humans not only to travel between distant parts of a universe, or even two universes, but also to construct time machines. In the framework of GR, the flaring-out condition on the throat of the wormhole leads to the violation of the weak energy condition (WEC), demanding an exotic matter source in the Earth-based laboratory context. This violation of the energy condition is conventionally a problematic issue that requires a resolution or at least a minimization <cit.>. Numerous studies have endeavored to address the nature of exotic matter within various settings <cit.>. One approach is to construct thin-shell wormholes in the context of GR via a cut-and-paste procedure, in which the exotic matter source is minimized by concentrating it at the wormhole's throat <cit.>. Another approach is to investigate modified theories of gravity, where the presence of higher order terms in curvature may provide a possibility for constructing wormhole structures with ordinary matter sources <cit.>. As instances, see wormhole solutions in Brans-Dicke theory <cit.>, Einstein-Gauss-Bonnet theory <cit.>, f(R) gravity <cit.>, scalar-tensor gravity <cit.>, and higher dimensional theories <cit.>. Moreover, in contrast to static wormholes in GR, it has been noted that for evolving wormholes there is the possibility of satisfying the energy conditions for a finite interval of time <cit.>; see also the pioneering works <cit.>. Akin to the other modified theories, Rastall theory also has numerous successful applications in cosmology and astrophysics, and this drives a motivation for investigating it versus the conditions for the existence of wormhole structures. Nevertheless, our main motivation for the present study relies on a very distinct feature of this theory that distinguishes it from other modified theories: its modification in the matter source of the Einstein field equations only, leaving the geometric part unaltered. As a result, this provides unique possibilities in the context of this theory: i) the field equations remain rather simple to handle, and ii) the main concern in constructing wormholes, the need for exotic matter fields, can be traced easily by the nonminimal coupling of the EM tensor and geometry, and their interplay through a constant coupling parameter. We will see the footprint of this coupling in the solutions derived. On the other hand, the possibility of respecting ECs for finite time intervals in dynamical configurations, in contrast to static cases in GR, stimulates another motivation to investigate how these dynamical configurations behave in Rastall theory. Therefore, the objective of our study is to discover viable dynamical wormhole solutions within the framework of Rastall theory and demonstrate how the nonminimal coupling nature of this theory influences the shape and evolution of these solutions. Here it is necessary to mention that static wormhole solutions have been studied in the context of Rastall theory, showing that the WEC can be met for some particular solutions; see for instance <cit.>. In <cit.> it is shown that Rastall theory is capable of modifying the energy condition requirements of the matter source to satisfy the strong energy condition at the throat. This modification demands that either the Rastall coupling κ or λ has to be negative.
It is concluded that Rastall gravity has the potential to alleviate some issues encountered by static wormholes within the framework of Einstein gravity. Since the dynamical wormholes in the context of Rastall theory have not been studied yet, it seems worthwhile to put one step further to explore the theory for the possible generalizations of the static solutions to dynamical cases. The organization of the paper is as follows. In section II, we derive the general analytical solutions of the field equations for a wormhole geometry. In section III, we analyse some particular solutions versus the flaring out and WEC, and show that under some constraints these conditions are respected in the context of Rastall gravity. Section IV is devoted to our concluding remarks. § EVOLVING WORMHOLES IN RASTALL THEORY The validity of the energy-momentum conservation law in the four dimensional spacetime was questioned by Rastall <cit.>. He considered the following hypothesis 𝑇^μν _; μ= λℜ^, ν, where T^μν is the energy-momentum tensor of matter source, λ is the Rastall constant parameter, and ℜ is the Ricci scalar. Hence, the Einstein field equations get modified as 𝐺_μν+ κλ g_μνℜ = κ𝑇_μν, where κ is the gravitational coupling. In the present work, we are interested in dynamical wormhole solutions of these field equations. For the static wormhole solutions in Rastall theory, see <cit.>. Hence, we consider time-dependent generalization of Morris-Thorne wormhole metric as <cit.> ds^2 = -U(r) dt^2 + 𝑅(t)^2 ( dr^2/1-B(r)/r + r^2 (d θ^2 + sin^2 θ dϕ^2) ), where R(t) is the scale factor of the background Universe, U(r) is the redshift function and B(r) is the wormhole shape function. The static Morris-Thorne wormhole is recovered by setting R(t)=constant. In order to have a wormhole geometry, the following general constraints on the redshift and shape functions are required <cit.>. ∙ The wormhole throat connecting two asymptotic regions is located at the minimum radial coordinate r_0=B(r_0). ∙ The shape function B(r) must satisfy the so-called flaring-out condition B(r)-rB^' (r)>0 at the vicinity of the throat which reduces to B^'(r_0)<1 at the throat. ∙ In order to keep the signature of the metric for r > r_0, the shape function holds the condition 1-B(r)/r>0. ∙ For asymptotically flat wormholes, the metric functions should satisfy the conditions U(r)→ 1,   B(r)/r → 0 as r →∞. In this case, the metric (<ref>) tends to the flat Friedmann-Robertson-Walker metric in the asymptotic region. ∙ The redshift function U(r) must be finite and nonzero throughout the spacetime in order to ensure the absence of horizons and singularities. We use a similar methodology as in <cit.> for evolving Lorentzian wormholes in GR. We will see that how Rastall's paprameter appears in the solutions for the scale factor and shape function to modify the similar solutions in <cit.>. Considering the metric (<ref>) with the constant redshift function U(r)=1, and the energy-momentum tensor T^μ_ν=diag(-ρ(t,r), P_r(t,r), P_l(t,r), P_l(t,r)), field equations (<ref>) yield ρ(t,r)=1/κ( 3 H^2 + B^'(r)/r^2 𝑅(t)^2 -κλℜ), P_r(t,r)= 1/κ( -3 H^2 -2 Ḣ -B(r)/𝑅(t)^2 r^3 + κλℜ), P_l(t,r)= 1/κ( -3H^2 - 2Ḣ -B^'(r)/2 r^2 𝑅(t)^2 +B(r)/2 r^3 𝑅(t)^2 + κλℜ), where H=Ṙ(t)/R(t), and the Ricci scalar reads as ℜ=2 B^'(r)/r^2 𝑅(t)^2 + 12 𝐻^2 + 6 𝐻̇. 
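The geometric requirements listed above can be checked mechanically for a candidate shape function. A small SymPy sketch for a hypothetical power-law form B(r) = r_0 (r_0/r)^k (an illustrative assumption, anticipating the solutions found below):

import sympy as sp

r, r0, k = sp.symbols('r r_0 k', positive=True)

# A sample power-law shape function of the type obtained later.
B = r0*(r0/r)**k

print(sp.simplify(B.subs(r, r0) - r0))           # 0: throat condition B(r0) = r0
print(sp.simplify(sp.diff(B, r).subs(r, r0)))    # -k: flaring out requires B'(r0) < 1
print(sp.simplify(1 - B/r))                      # 1 - (r0/r)**(k+1) > 0 for r > r0
print(sp.limit(B/r, r, sp.oo))                   # 0: asymptotic flatness (needs k > -1)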
For integrating the present system of three nonlinear partial differential equations (<ref>), (<ref>) and (<ref>) with five unknowns R(t), B(r), ρ(t,r), P_r(t,r) and P_l(t,r), one can consider a physically motivated constraint; more specifically an equation of state for the sets of unknowns (ρ(t,r), P_r(t,r)) and (ρ(t,r), P_l(t,r)) or even for (P_r(t,r), P_l(t,r)) as in <cit.>. Another possibility is to consider a traceless constraint on the EM tensor as in <cit.>. Here, in order to keep the equation of state as general as possible, so that it can reduce to some known specific equations of state, we consider a general EoS including our three unknowns (ρ(t,r), P_r(t,r), P_l(t,r)) as in <cit.> ρ(t,r)=ω/1+2 γ(P_r (t,r)+ 2 γ P_l(t,r) ), where ω and γ are equation of state parameters. This equation of state depending on two parameters ω and γ can reduce to the following special cases: i) the barotropic EoS ρ(t,r)=ω P(t,r) when P_r(t,r)=P_l(t,r)=P(t,r), ∀γ, which reduces to the cosmological constant for ω=-1; ii) the traceless EM's EoS -ρ(t,r) +P_r(t,r)+2P_l(t,r)=0 when ω=3, γ=1; and iii) the dimension (n) dependent EoS ρ(t,r)=α(P_r (t,r)+ (n-2) P_l(t,r) ) <cit.> in n=4 when γ=1. Later we will see how the Rastall coupling β and the wormhole conditions together put constraints on each of these two parameters ω and γ in (<ref>). Combining the set of equations (<ref>, <ref>, <ref>) with the EoS (<ref>), we obtain the following single nonlinear partial differential equation in our unknown functions B(r) and R(t) ( 1+ γ (2 + ω) ) r B^'(r) - ω (γ -1) B(r)/κ (1+2 γ)r^3 = - 𝑅(t)^2 (1+2 γ) ( 8 ω𝐻̇ + 12 H^2 (ω +1) )/(4+ 8 γ) κ + λℜ (1+ ω) 𝑅(t)^2. This equation can be integrated for B(r) and R(t) by separating it into the radial and temporal parts as follows ( 1+ γ (2 + ω) ) r B^'(r) - ω (γ -1) B(r)/ (1+2 γ)r^3 -2β (1+ω) B^'(r)/r^2 =   β (1+ ω) 𝑅(t)^2 (12 𝐻^2 + 6 𝐻̇) - 𝑅(t)^2 (1+2 γ) ( 8 ω𝐻̇ + 12 H^2 (ω +1) )/(4+ 8 γ) , where β=κλ. This equation can be considered as the master equation to be solved for our unknowns, and it is similar to the master equation in <cit.>. In <cit.> the master equation was derived by combining the field equations considering the relation p_r(t,r)=α p_t(r,t) where in general α=α(r). However, one notes the modification here by the Rastall parameter β and the difference in the coefficients due to the different equations of state used. The radial and temporal parts of Eq. (<ref>) give the following ordinary differential equations (ODEs) for the shape function and scale factor respectively ( 1+ γ (2 + ω) ) r B^'(r) - ω (γ -1) B(r)/ (1+2 γ)r^3 -2β(1+ω) B^'(r)/r^2 =C, and R(t)^2 [(6 β (ω +1) - 2 ω) Ḣ+(12 β (ω +1)- 3(ω +1) )H^2 ] =C. Let the constants be a=6 β (ω+1)-2 ω and d= 12 β (ω+1)-3(1+ω); then Eq.(<ref>) can be rewritten as R(t)^2 [a Ḣ+ d H^2]= C, or equivalently a R(t) R̈(t)+ b Ṙ(t)^2 =C, where the constant b= d - a = a + ω -3. Here, one notes that the dynamics of the scale factor depends on the Rastall coupling parameter β and EoS parameter ω while it is independent of the parameter γ. In the following subsections, we obtain general exact solutions to Eqs.(<ref>) and (<ref>) for the two cases C=0 and C≠ 0.
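The reduction from R^2(aḢ + dH^2) = C to aRR̈ + bṘ^2 = C with b = d - a used above can be verified symbolically, e.g. with SymPy:

import sympy as sp

t, a, d = sp.symbols('t a d')
R = sp.Function('R')(t)

H = sp.diff(R, t)/R
lhs = R**2*(a*sp.diff(H, t) + d*H**2)
rhs = a*R*sp.diff(R, t, 2) + (d - a)*sp.diff(R, t)**2   # i.e. b = d - a

print(sp.simplify(lhs - rhs))   # 0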
§.§ Solutions for C=0 §.§.§ Solution for the shape function Integrating Eq.(<ref>) for C=0, the shape function can be obtained as B(r)= r_0 (r_0/r)^(1-γ ) ω/1-2 β (2 γ +1) (ω+1)+γ ( ω+2), Here one observes that how the Rastall's coupling parameter β modifies the wormhole's shape function in comparison to the case of GR when β=0. The resulting geometry can be asymptotically flat or nonflat depending on the set of parameters ω,γ and β. The flaring out condition at the throat reads as B'(r_0)=(γ -1) ω/1-2 β (2 γ +1) (ω+1)+γ ( ω+2)<1. Moreover, in order to satisfy the asymptotically flatness B(r)/r→ 0 as r→∞, the following condition should be fulfilled -1<(1-γ ) ω/1-2 β (2 γ +1) (ω+1)+γ ( ω+2)<1. §.§.§ Solution for the scale factor One can integrate Eq.(<ref>) for C= 0 to find the general solution R(t)= (R_0 t+ R_1)^1/1+ b/a=(R_0 t+ R_1)^a/d, where R_0 and R_1 are integration constants. One observes that this solution does not contain the Big Bang singularity if t≠ -R_1/R_0. Here one notes that the solution (<ref>) is a generic dynamic wormhole solution that is similar to the solution obtained in <cit.> in GR. Hence, the general form of the solution for the scale factor is independent of the Rastall gravity due to the similarity in the governing ODE on R(t) in (<ref>). However the solutions may differ depending on the assumed parameter constraints for the purpose of the solution in the underlying theory. Here, Rastall's coupling β arises in the power a/d and can be considered as a factor for distinguishing the solution from those in GR in the limit β→ 0. Later we will discuss the values of β parameter and its effect in satisfaction of wormhole conditions. The following particular subclasses of (<ref>) and (<ref>) can be of interest. ∙ 𝐚=𝐝 For this case, the scale factor, shape function and ω are given by R(t)=R_0 t +R_1,    ω =6 β -3/1-6 β,    β1/6, B(r)=r^3/r_0^2. One can verify that this solution to (<ref>) fails to satisfy the flaring out condition for evolving wormhole solutions. Hence, we do not analyze this solution versus the WEC. ∙ 𝐚=2𝐝 In this case, we have R(t)=(R_0 t + R_1)^2,    ω =9 β -3/2-9 β,    β2/9, B(r)=r_0 (r_0/r)^3 (3 β -1) (γ -1)/β (5 γ +7)-γ -2, where γ should satisfy the wormhole conditions. ∙ 𝐚=1/2 𝐝 In this case, we find R(t)=(R_0 t + R_1)^1/2,    ω=3, B(r)=r_0 (r_0/r)^3 (γ -1)/8 β (2 γ +1)-5 γ -1. Here, β parameter remains arbitrary and γ should satisfy the wormhole conditions. Here to make clear how the Rastall gravity, and not only the choice of the stress-tensor, is important in influencing the solutions (<ref>) and (<ref>), one may consider the following two possibilities: i) fix the parameter γ and ω by assuming known specific stress energy tensors at this step, so that solutions now clearly depend on the Rastall factor, and ii) consider the theoretically and observationally verified values or ranges on Rastall parameter β, and then obtain corresponding allowable ω and γ values satisfying the wormhole conditions that can include parameter ranges for both the normal and exotic matters. The latter possibility implies how the coupling parameter β confines or affects the matter sources needed for such configurations. Up to this point, one observes the constraint on ω parameter. In section 3, in order to investigate the obtained viable solutions versus the ECs, regarding the theoretical and observational constraints on β parameter <cit.>, we will consider two admissible ranges 0<β<1/6 and β<0, and we will analyse the above latter possibility in detail. 
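As a consistency check on the C=0 solutions obtained above, one may verify symbolically that the shape function and the scale factor indeed solve the separated radial and temporal equations. A minimal sympy sketch of this check — with our own variable names, and imposing positivity of the coordinates and integration constants only to ease the simplification — is:

import sympy as sp

r, t = sp.symbols('r t', positive=True)
r0, R0, R1 = sp.symbols('r_0 R_0 R_1', positive=True)
beta, gamma, omega = sp.symbols('beta gamma omega', real=True)

# shape function for C = 0
D = 1 - 2*beta*(2*gamma + 1)*(omega + 1) + gamma*(omega + 2)
B = r0*(r0/r)**((1 - gamma)*omega/D)

# left-hand side of the radial equation with C = 0
radial = ((1 + gamma*(2 + omega))*r*sp.diff(B, r) - omega*(gamma - 1)*B)/((1 + 2*gamma)*r**3) \
         - 2*beta*(1 + omega)*sp.diff(B, r)/r**2
print(sp.simplify(radial))      # expected output: 0

# scale factor for C = 0, R(t) = (R_0 t + R_1)^(a/d), with b = d - a
a, d = sp.symbols('a d', nonzero=True)
R = (R0*t + R1)**(a/d)
temporal = a*R*sp.diff(R, t, 2) + (d - a)*sp.diff(R, t)**2
print(sp.simplify(temporal))    # expected output: 0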
Specifically, we show that the satisfaction of wormhole conditions is possible for two observationally obtained values of β=0.163 <cit.> and β=0.041 <cit.>. As an instance, for the particular solution of a=1/2d, consideration the EoS parameters ω=3, γ=0.35 with β=0.041 provides the possibility of satisfaction of all wormhole conditions that is illustrated in Figure <ref>. This is an interesting case in the sense that substituting these EoS parameters in (<ref>) and defining an effective pressure P_e(t,r)=P_r(t,r)+(0.7) P_l(t,r) we have an effective equation of state P_e(t,r)=1.7/3ρ(t,r) which denotes a matter source respecting ECs. This indeed is an example for the first possibility mentioned above as well. §.§ Solutions for C 0 §.§.§ Solution for the shape function The shape function B(r) can be obtained by integrating (<ref>) as B(r)= -C /6 β (ω+1)-ω-3r^3+ C_1 r^(γ -1) ω/1-2 β (2 γ +1) (ω+1)+γ ( ω+2), where C and C_1 are separation and integration constants, respectively. Like (<ref>), the solution (<ref>) is a generic wormhole shape function and is similar to the solution in <cit.>. The difference being is upto some parameter choices. However, one observes that, as we will see later in analyzing solutions versus WEC, the difference in the underlying theories, i.e here the being of Rastall parameter β, can play a crucial role in satisfying wormhole conditions even by ordinary matter sources. This indeed implies how such a modification in EM source, akin to the higher order curvature terms in other modified theories, is capable of solving the issue of the need for exotic matter in GR. To be specific, the presence of β puts constraints on the required matter sources, i.e on ω and γ, see the classification given in Table <ref>. In other words, as discussed in <cit.> for static cases, considering the field equations G_μν=κ_r S_μν where the effective EM tensor S_μν includes the Rastall's modification term βℜg_μν, the actual matters make up with phantom characteristics. Therefore, in Rastall gravity, general wormhole solutions can exist with both normal and phantom matter, depending on the Rastall coupling parameter. Using the (initial) condition B(r_0)=r_0 at the wormhole's throat we can determine integration constant C_1 as C_1= (6 β (ω+1)-ω-3)r_0 + C r_0^3/(6 β (ω+1)- ω - 3)r_0^(γ -1) ω/1-2 β (2 γ +1) (ω+1)+γ ( ω+2), from which we find the flaring out condition at the throat as B'(r_0)= -C r_0^2(1+2 γ) +ω(1-γ) /-1+2 β (2 γ +1) (ω+1)-γ ( ω+2)<1. Here one observes that depending on the set of parameters ω,γ, and β, the coefficient of the first term in (<ref>), i.e k=C /6 β (ω+1)-ω-3, appears as an effective cosmological constant. This means that for C≠ 0, we have asymptotically (anti) de Sitter-like solutions and the asymptotic flatness condition does not hold here. Also, as it is pointed out in <cit.>, the above defined k constant can be interpreted as a topological number denoting the spatial curvature of the background FRW spacetime taking values ± 1, 0 representing a closed, open and flat universe, respectively. one can write the B(r) function as B(r)= -k r^3 + B_n(r), where k represents the spatial curvature of the FRW metric and B_n(r) is the shape function of a wormhole inhabiting within this spacetime. One should note to the difference here in (<ref>) and (<ref>), similar to <cit.> as instances, and in <cit.> where the throat condition B_n(r_0)=r_0 is imposed only on the second term B_n(r) in the shape function. 
It is mentioned in <cit.> that imposing the throat condition B(r_0)=r_0, the spatial extension of the wormhole solution cannot be arbitrarily large. Following <cit.>, the throat condition B_n(r_0)=r_0 together with t the flaring out condition give B'(r_0)=(γ -1) ω/1-2 β (2 γ +1) (ω+1)+γ ( ω+2)<1. The asymptotic flatness condition reads as -1<(1-γ ) ω/1-2 β (2 γ +1) (ω+1)+γ ( ω+2)<1. §.§.§ Solution for the scale factor Considering the general case a,b 0, Eq.(<ref>) can be integrated giving the following first order nonlinear differential equation Ṙ^2(t)=C/b( 1-R_0 R^- 2b/a), where R_0 is an integration constant, and hence ∫d R/√(1-R_0 R^ -2b/a)= ±√(C/b)∫ d t, for C/b>0. Here one can obtain the explicit from of the scale factor R(t) for some particular cases of parameters a and b. The following particular cases can be of interest. ∙ 𝐚=-2𝐛 This case gives the scale factor R(t), ω and shape function B(r) as follows R(t) =1/R_0-R_0/4(±√(C/b) t + R_1)^2, ω =3-9 β/9 β -2,  β2/9, B(r) =(9 β -2) C /12 β -3r^3 +r_0 ((3-12 β )+(9 β -2) C r_0^2)/12 β -3 (r/r_0)^3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2, where R_1 is an integration constant. Considering B_n(r) as the shape function of the inhabiting wormhole, we have R(t) =1/R_0-R_0/4(±√(k) t + R_1)^2, ω =3-9 β/9 β -2,  β2/9, B_n(r) = r_0 (r/r_0)^3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2, where the reality of the solution requires k=1. Later we will show that the WEC can be respected in both the above cases for a=-2b. ∙ 𝐚=-𝐛 In this case, one finds R(t)= 1/√(R_0)sin(±√(CR_0/b) t+R_1),   β = 1/4, B(r)= -2 C /ω -3r^3 + r_0 (2 C r_0^2+ω -3)/ω -3 (r/r_0)^2 (γ -1) ω/2 γ -ω +1, where R_1 is an integration constant. We do not analyze this solution versus the wormhole conditions since the contraction of the field equations (<ref>) by the metric gives the Ricci scalar as ℜ=1/1-4βT which diverges for β = 1/4 and T≠ 0 <cit.>. § WEAK ENERGY CONDITION In order to investigate the obtained viable solutions versus the energy conditions, regarding the theoretical and observational constraints on β parameter <cit.>, we will consider two admissible ranges 0<β<1/6 and β<0. §.§ WEC for 0<β<1/6 In this subsection, considering 0<β<1/6 we obtain the valid ranges of ω and γ satisfying both the WEC (ρ≥ 0, ρ+P_r >0 and ρ+P_l>0) and flaring-out condition (B^'(r_0)<1) simultaneously. §.§.§ Analysis of solutions for C=0 Here we analyze the following particular solutions for the scale factor when C=0. ∙ a=2 d Inserting the scale factor and the shape function in (<ref>) into the field equations (<ref>-<ref>), one obtains ρ(t,r)= -3 (3 β -1) (6 β -1) R_0^2/2 π G (4 β -1) (R_0 t+R_1)^2 + 3 r_0^-2 (2 β -1) (3 β -1) (6 β -1) (γ -1)/8 π G (4 β -1) (β (5 γ +7)-γ -2) (R_0 t+R_1)^4 (r_0/r)^3+ 3 (3 β -1) (γ -1)/β (5 γ +7)-γ -2, ρ(t,r)+P_r(t,r) =(6 β -1) R_0^2/2 π G (4 β -1)(R_0 t+R_1)^2 - r_0^-2 (6 β -1) (2 β (7 γ -1)-4 γ +1)/8 π G (4 β -1) (β (5 γ +7)-γ -2) (R_0 t+R_1)^4 (r_0/r)^3+ 3 (3 β -1) (γ -1)/β (5 γ +7)-γ -2, ρ(t,r)+P_l(t,r) =(6 β -1) R_0^2/2 π G (4 β -1)(R_0 t+R_1)^2 - r_0^-2 (6 β -1) (4 β (γ -4)-2 γ +5)/16 π G (4 β -1) (β (5 γ +7)-γ -2) (R_0 t+R_1)^4 (r_0/r)^3+ 3 (3 β -1) (γ -1)/β (5 γ +7)-γ -2. In order to avoid the singularities in density and pressure profiles that corresponds to the big bang singularity at R(t)=0, it requires t≠ -R_1/R_0. 
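The two special-case scale factors above can be checked directly against the first integral Ṙ^2=(C/b)(1-R_0R^{-2b/a}). A short sympy sketch of this check — ours, taking the '+' branch of the ± sign, with the remaining sign choices absorbed into the integration constants — is:

import sympy as sp

t, C, b, R0, R1 = sp.symbols('t C b R_0 R_1', positive=True)

def first_integral(Rexpr, ratio):
    # residual of Rdot^2 = (C/b)*(1 - R0*R**ratio), where ratio = -2b/a
    return sp.simplify(sp.diff(Rexpr, t)**2 - (C/b)*(1 - R0*Rexpr**ratio))

# case a = -2b, so -2b/a = 1
R_a = 1/R0 - (R0/4)*(sp.sqrt(C/b)*t + R1)**2
print(first_integral(R_a, 1))   # expected output: 0

# case a = -b, so -2b/a = 2
R_b = sp.sin(sp.sqrt(C*R0/b)*t + R1)/sp.sqrt(R0)
print(first_integral(R_b, 2))   # expected output: 0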
Combining the constraint on ω and β in (<ref>) with 0<β<1/6, the flaring-out, flatness and weak energy condition can all be satisfied simultaneously if R_0 R_1>0,    2 β -1/14 β -4<γ <16 β -5/4 β -2,         r_0>1/2√(14 βγ -2 β -4 γ +1/R_0^2 R_1^2 (5 βγ +7 β -γ -2)). Here one observes that the satisfaction of all wormhole conditions imposes some interesting constraints. Specifically: (i) the required matter type (γ and ω parameters) for a specific solution is constrained by the Rastall's coupling, and ii) the wormhole throat radius r_0 cannot be arbitrary, and it constrained by the Rastall's coupling β and the matter parameter γ. This is similar to the result in <cit.> where it is shown that for wormholes in the Einstein-de Sitter universe, the wormhole throat radius not only depends on the shape function parameters but also on the background cosmological constant. For a specific set of parameters according to the constraints (<ref>), the behavior of ρ, ρ+P_r and ρ+P_l as well as B(r)/r are illustrated in the Figures <ref> and <ref>. The positiveness of ρ, ρ+P_r and ρ+P_l represents the satisfaction of the WEC in Rastall's theory. Figure <ref> shows that for β=0.163 with variety of γ values in the range given by (<ref>), the WEC condition remains respected for a variety of wormholes with radii r_0 satisfying (<ref>). Here one notes that the throat radius r_0 is fixed for a fixed value of β and γ, and is defined as the point where B(r) is minimum. In case of a dynamic wormhole the throat area is subject to change in time due to changing R(t). In Figure <ref>, the first plot represents the asymptotic flatness of B(r)/r function and the other plots represent the satisfaction of WEC for a specific wormhole with the characteristic parameters r_0=0.1, β=0.163, γ=0.4. ∙ a=1/2 d Inserting the scale factor and and shape function in (<ref>) into the field equations (<ref>-<ref>), we find ρ(t,r) =3 R_0^2 (6 β -1) /32 π G (4 β -1) (R_0 t+R_1)^2 +3 r_0^-2 (6 β -1) (2 β -1) (γ -1)/8 π G (4 β -1) (8 β (2 γ +1)-5 γ -1) (R_0 t+R_1) (r_0/r)^3 (γ -1)/8 β (2 γ +1)-5 γ -1+3, ρ(t,r)+P_r(t, r)= (6 β -1) R_0^2/8 π (4 β -1) G (R_0 t+R_1)^2 -r_0^-2(6 β -1) (β (8 γ +4)-γ -2)/4 π G (4 β -1)(8 β (2 γ +1)-5 γ -1) (R_0 t+R_1) (r_0/r)^3 (γ -1)/8 β (2 γ +1)-5 γ -1+3, ρ(t,r)+P_l(t, r)=(6 β -1) R_0^2/16 π G (4 β -1) (R_0 t+R_1)^2 +r_0^-2(6 β -1) (β (8 γ +4)-4 γ +1)/16 π G (4 β -1) (8 β (2 γ +1)-5 γ -1) (R_0 t+R_1) (r_0/r)^3 (γ -1)/8 β (2 γ +1)-5 γ -1+3. In this case, satisfaction of flaring-out condition, flatness condition and WEC at throat requires R_0,R_1<0: γ <2-4 β/8 β -1,    0<β<1/8;      r_0≥ 2 √(-2 βγ R_1-2 β R_1-γ R_1+R_1/R_0^2 (16 βγ +8 β -5 γ -1)), γ >-4 β -1/8 β -4,   0<β<1/8;      r_0>√(-8 βγ R_1-4 β R_1+4 γ R_1-R_1/R_0^2 (16 βγ +8 β -5 γ -1)), γ >1/2,  β =1/8 ;      r_0>√(-6 γ R_1-3 R_1/6 γ R_0^2), -4 β -1/8 β -4<γ <2-4 β/8 β -1,     1/8<β < 1/6;      r_0>√(-8 βγ R_1-4 β R_1+4 γ R_1-R_1/R_0^2 (16 βγ +8 β -5 γ -1)) . R_0,R_1>0: γ <2-4 β/8 β -1,    γ >-4 β -1/8 β -4;   0<β<1/8; r_0>√(2)√(8 βγ R_1+4 β R_1-γ R_1-2 R_1/R_0^2 (16 βγ +8 β -5 γ -1)), γ >1/2,   β =1/8; r_0>√(R_1/γ R_0^2), -4 β -1/8 β -4<γ <2-4 β/8 β -1,   1/8<β < 1/6; r_0>√(2)√(8 βγ R_1+4 β R_1-γ R_1-2 R_1/R_0^2 (16 βγ +8 β -5 γ -1)). Similar arguments given for the previous solution and its figures can be also made here. Figure <ref> shows that for a specific β=0.041, the WEC will be satisfied for variety of wormholes with r_0 and γ meeting the constraints in (<ref>). 
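As a rough numerical illustration of the WEC analysis for the a=2d branch — not part of the original analysis — the combinations κρ, κ(ρ+P_r) and κ(ρ+P_l) can be evaluated directly from the field equations for the figure parameters r_0=0.1, β=0.163, γ=0.4 quoted above; the values R_0=R_1=1 and the grid ranges are our illustrative choices, and the positive factor κ does not affect the signs being checked.

import numpy as np

beta, gamma, r0, R0, R1 = 0.163, 0.4, 0.1, 1.0, 1.0
expo = 3*(3*beta - 1)*(gamma - 1)/(beta*(5*gamma + 7) - gamma - 2)

def stress(t, r):
    """kappa*(rho, rho+P_r, rho+P_l) from the field equations, a = 2d, C = 0."""
    B   = r0*(r0/r)**expo
    Bp  = -expo*B/r                    # dB/dr
    R2  = (R0*t + R1)**4               # R(t)^2 with R = (R_0 t + R_1)^2
    H   = 2*R0/(R0*t + R1)
    Hd  = -2*R0**2/(R0*t + R1)**2      # dH/dt
    Ric = 2*Bp/(r**2*R2) + 12*H**2 + 6*Hd
    rho = 3*H**2 + Bp/(r**2*R2) - beta*Ric
    wec_r = Bp/(r**2*R2) - 2*Hd - B/(r**3*R2)          # rho + P_r
    wec_l = Bp/(2*r**2*R2) + B/(2*r**3*R2) - 2*Hd      # rho + P_l
    return rho, wec_r, wec_l

ts = np.linspace(0.0, 5.0, 80)
rs = np.linspace(r0, 10.0, 400)
T, Rr = np.meshgrid(ts, rs)
rho, wr, wl = stress(T, Rr)
print(rho.min(), wr.min(), wl.min())
# all three minima should be non-negative when the constraints above are met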
Figure <ref> shows the asymptotic behavior of B(r), as well as ρ, ρ+P_r and ρ+P_l satisfying the WECs for a specific set of parameters according to the constraints (<ref>) in the entire spacetime. §.§.§ Analysis of solutions for C 0 Here we analyze the following two particular cases. ∙ 𝐚=-2 𝐛 Substituting the scale factor and shape function in (<ref>) into the field equations (<ref>-<ref>), we find ρ(t,r) =3 C R_0^2 (3 β -1) (6 β -1) (9 β -2) /2 π G (4 β -1) (-48 β +R_0^2 (2 √(3) (4 β -1) R_1 t √((2-9 β ) C/4 β -1)+(2-9 β ) C t^2+3 (4 β -1) R_1^2)+12) -162 R_0^2 (2 β -1) (3 β -1) (6 β -1) (γ -1) (C (9 β -2) +r_0^-2(3-12 β)))/π G (1-4 β )^2 (β (5 γ +7)-γ -2) (R_0^2 (√(3) t √((2-9 β ) C/4 β -1)+3 R_1)^2-36)^2 (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2, ρ (t,r)+P_r(t,r)=C R_0^2 (6 β -1) (9 β -2) /2 π G (4 β -1) (48 β +R_0^2 (2 √(3) (1-4 β ) R_1 t √((2-9 β ) C/4 β -1)+(9 β -2) C t^2+(3-12 β ) R_1^2)-12) + 6 R_0^2 (6 β -1) (2 β (7 γ -1)-4 γ +1) (C(9 β -2)+r_0^-2(3-12 β))/π G (β (5 γ +7)-γ -2) (-48 β +R_0^2 (2 √(3) (4 β -1) R_1 t √((2-9 β ) C/4 β -1)+(2-9 β ) C t^2+3 (4 β -1) R_1^2)+12)^2 (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2, ρ (t,r)+P_l(t,r)=C R_0^2 (6 β -1) (9 β -2)/2 π G (4 β -1) (48 β +R_0^2 (2 √(3) (1-4 β ) R_1 t √((2-9 β ) C/4 β -1)+(9 β -2) C t^2+(3-12 β ) R_1^2)-12) + 3 R_0^2 (6 β -1) (4 β (γ -4)-2 γ +5) (C(9 β -2)+r_0^-2(3-12 β)))/π G (β (5 γ +7)-γ -2) (-48 β +R_0^2 (2 √(3) (4 β -1) R_1 t √((2-9 β ) C/4 β -1)+(2-9 β ) C t^2+3 (4 β -1) R_1^2)+12)^2 (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2. Since 0<β<1/6 and ω =3-9 β/9 β -2, the WEC and flaring-out condition will be satisfied under the fallowing conditions C<0: R_0≤-2/|R_1|,      R_0>2/|R_1|;    -1/2<γ≤2 β -1/14 β -4,            r_0>√(14 βγ -2 β -4 γ +1/(9 β -2) (2 γ +1) C), -2/|R_1|<R_0<2/|R_1|,    -1/2<γ≤20 β -7 β R_0^2 R_1^2+2 R_0^2 R_1^2-4/-76 β +5 β R_0^2 R_1^2-R_0^2 R_1^2+20,        r_0>√(14 βγ -2 β -4 γ +1/(9 β -2) (2 γ +1) C). Similar arguments given for the previous solutions and their figures can be also made here. Figure <ref> shows that for a specific β=0.041, the WEC will be satisfied for variety of wormholes with r_0 and γ meeting the constraints in (<ref>). Figure <ref> shows the behavior of B(r)/r as well as ρ, ρ+P_r and ρ+P_l satisfying the WEC for a specific set of parameters according to the constraints in (<ref>). As it is seen from the first plot, in this case we have a finite wormhole configuration which cannot be arbitrarily large. ∙ 𝐚=-2 𝐛, 𝐤=1 Considering the shape function and scale factor as (<ref>) leaves the field equations (<ref>-<ref>) as ρ(t,r) =-3 (6 β -1) R_0^2 ((3 β -1) R_0^2 (R_1 ± t)^2-4 β)/2 π (4 β -1) G (R_0^2 (R_1 ± t)^2-4)^2 +6 (6 β -1) (β (6 β -5)+1) (γ -1) R_0^2 r_0^-2/(G π (4 β -1) (β (5 γ +7)-γ -2) (R_0^2 (R_1 ± t)^2-4)^2) (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2, ρ(t,r)+P_r(t,r) =(6 β -1) R_0^2 (R_0^2 (R_1 ± t)^2+4)/2 π G (4 β -1) (R_0^2 (R_1 ± t)^2-4)^2 - 2 R_0^2 r_0^-2(6 β -1) (2 β (7 γ -1)-4 γ +1)/π G (4 β -1) (β (5 γ +7)-γ -2) (R_0^2 (R_1± t)^2-4)^2 (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2, ρ(t,r)+P_l(t,r) =(6 β -1) R_0^2 (R_0^2 (R_1± t)^2+4)/2 π G (4 β -1)(R_0^2 (R_1± t)^2-4)^2 - r_0^-2 R_0^2 (6 β -1) (4 β (γ -4)-2 γ +5)/(π G (4 β -1) G (β (5 γ +7)-γ -2) (R_0^2 (R_1 ± t)^2-4)^2) (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2. 
Considering 0<β<1/6 and ω=3-9 β/9 β -2, the WEC, flaring-out and asymptotically flantess condition will be satisfied simultaneously if R_00,  R_0± 2/|R_1|,     2 β -1/14 β -4<γ <16 β -5/4 β -2,      r_0>2 √(14 βγ -2 β -4 γ +1/(5 βγ +7 β -γ -2) (R_0^2 R_1^2+4)) Figure <ref> shows that the WEC will be satisfied for variety of wormholes with r_0 and γ meeting the constraints in (<ref>). Figure <ref> shows the asymptotic behavior of B_n(r)/r as well as ρ, ρ+P_r, and ρ+P_l respecting WEC in entire spacetime. §.§ WEC for β<0 Some observational tests of Rastall theory indicates negative values of β , see as an instance <cit.>. Hence, in this subsection we address the WEC and flaring-out condition for β<0. §.§.§ Analysis of solutions for C=0 ∙ 𝐚=2𝐝 In order to satisfy the WEC, flaring-out condition and flatness condition in this case, using Eq.(<ref>) with β <0, the following constraints should be satisfied. R_0 R_1>0,    2 β -1/14 β -4<γ <16 β -5/4 β -2,           r_0>1/2√(14 βγ -2 β -4 γ +1/R_0^2 R_1^2 (5 βγ +7 β -γ -2)). The constraints here are the same as the obtained ones for 0<β<1/6 in (<ref>). ∙ 𝐚=1/2𝐝 Using (<ref>), since ω=3 and β<0, the following restrictions on γ parameter provides respecting the WE, flaring-out and flatness conditions R_0, R_1<0: γ <2-4 β/8 β -1,      r_0≥ 2 √(-2 βγ R_1-2 β R_1-γ R_1+R_1/R_0^2 (16 βγ +8 β -5 γ -1)), γ >-4 β -1/8 β -4,       r_0>√(-8 βγ R_1-4 β R_1+4 γ R_1-R_1/R_0^2 (16 βγ +8 β -5 γ -1)). R_0, R_1>0: γ <2-4 β/8 β -1,     γ≥ 0;   β <-1/4; r_0>√(2)√(8 βγ R_1+4 β R_1-γ R_1-2 R_1/R_0^2 (16 βγ +8 β -5 γ -1)), -4 β -1/8 β -4<γ <0,   β <-1/4; r_0 ≥ 2 √(-2 βγ R_1-2 β R_1-γ R_1+R_1/R_0^2 (16 βγ +8 β -5 γ -1)), γ <2-4 β/8 β -1,    γ >-4 β -1/8 β -4;    -1/4≤β <0; r_0>√(2)√(8 βγ R_1+4 β R_1-γ R_1-2 R_1/R_0^2 (16 βγ +8 β -5 γ -1)). §.§.§ For solution of C 0 ∙ 𝐚=-2𝐛 Considering Eq.<ref>, the WEC and flaring-out condition will be met if C<0: R_0≤-2 √(5)/|R_1|     R_0>2 √(5)/|R_1|,    -1/2<γ≤2 β -1/14 β -4,      r_0>√(14 βγ -2 β -4 γ +1/(9 β -2) (2 γ +1) C), -2 √(5)/|R_1|<R_0<2 √(5)/|R_1|,    -1/2<γ≤20 β -7 β R_0^2 R_1^2+2 R_0^2 R_1^2-4/-76 β +5 β R_0^2 R_1^2-R_0^2 R_1^2+20,      r_0>√(14 βγ -2 β -4 γ +1/(9 β -2) (2 γ +1) C). ∙ 𝐚=-2𝐛, 𝐤=1 Considering (<ref>), with β<0 and ω=3-9 β/9 β-2, the WEC, flaring-out, and flatness conditions will be respected if R_00,  R_0± 2/|R_1|,     2 β -1/14 β -4<γ <16 β -5/4 β -2,      r_0>2 √(14 βγ -2 β -4 γ +1/(5 βγ +7 β -γ -2) (R_0^2 R_1^2+4)) § CONCLUSION In this paper, analytical evolving wormhole solutions with a constant redshift function are investigated in the context of Rastall's modified theory. A general class of solutions, including the asymptotically flat and (anti)de Sitter solutions, is derived by assuming a particular equation of state for the energy density and pressure profiles. Regarding the theoretical and observational constraints on Rastall's coupling β, two admissible ranges 0<β<1/6 and β<0 are considered in order to study the solutions versus the required conditions for traversable wormholes. It is shown that simultaneous satisfaction of all these conditions is achievable under the obtained constraints on the parameters of the solutions. Also it is shown that the size of the wormhole throat is constrained and depends on both the Rastall's coupling β and the equation of state parameters of the matter source. A list of three particular solutions with their constraints providing the satisfaction of all wormhole conditions is given in Table <ref>. Data Availability Statement: No Data associated in the manuscript. 
plain 1 intro25 V. Faraoni, Cosmology in scalar tensor gravity https://doi.org/10.1007/978-1-4020-1989-0Springer, Dordrecht (2004) intro29 A. D. Felice and S. Tsujikawa, f(R) Theories https://doi.org/10.12942/lrr-2010-3Living Reviews in Relativity 13, (2010) intro31 R. Maartens, Brane-World Gravity https://doi.org/10.12942/lrr-2004-7 Living Rev. Relativ. 7, 7 (2004) intro24 T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis, Modified gravity and cosmology https://doi.org/10.1016/j.physrep.2012.01.001Physics Reports 513, 1 (2012) intro36 Peter Rastall, Generalization of the Einstein Theory https://doi.org/10.1103/PhysRevD.6.3357Phys. Rev. D 6, 3357 (1972) intro37 H. Moradpour, Y. Heydarzade, F. Darabi, and Ines G. Salako, A generalization to the Rastall theory and cosmic eras https://doi.org/10.1140/epjc/s10052-017-4811-zEur. Phys. J. C 77, 259 (2017) introbh Y. Heydarzade, H. Moradpour, F. Darabi, Black hole solutions in Rastall theory https://doi.org/10.1139/cjp-2017-0254Canadian Journal of Physics 95, 12 (2017) introvis Matt Visser, Rastall gravity is equivalent to Einstein gravity https://doi.org/10.1016/j.physletb.2018.05.028Physics Letters B 782, 83 (2018) intro38 F. Darabi, H. Moradpour, I. Licata, Y. Heydarzade, and C. Corda, Einstein and Rastall theories of gravitation in comparison https://doi.org/10.1140/epjc/s10052-017-5502-5Eur. Phys. J. C 78, 25 (2018) intro371 G.G.L. Nashed, and W.E. Hanafy, Non-trivial class of anisotropic compact stellar model in Rastall gravity https://doi.org/10.1140/epjc/s10052-022-10634-0Eur. Phys. J. C 82, 679 (2022) intro372 Muhammad F.A.R. Sakti, Agus Suroso , Anto Sulaksono, and Freddy P. Zen, Rotating black holes and exotic compact objects in the Kerr/CFT correspondence within Rastall gravity https://doi.org/10.1016/j.dark.2022.100974Physics of the Dark Universe 35, 100974 (2022) intro373 L. Meng, and DJ. Liu, Tidal Love numbers of neutron stars in Rastall gravity https://doi.org/10.1007/s10509-021-04013-6Astrophys Space Sci 366, 105 (2021) intro374 Miguel Cruz et al, A thermodynamics revision of Rastall gravity https://doi.org/10.1088/1361-6382/ab45abClass. Quantum Grav. 36 225007 (2019) introlag Fabris, Júlio C. and Piattella, Oliver F. and Rodrigues, Davi C. On Rastall gravity formulation as a f(R,ℒ_m) and a f(R, T) theory https://doi.org/10.1140/epjp/s13360-023-03845-1Eur. Phys. J. Plus 138, 232 (2023) intro381 De Moraes, W. A. G. and Santos, A. F., Lagrangian formalism for Rastall theory of gravity and Gödel-type universe https://doi.org/10.1007/s10714-019-2652-9Gen Relativ Gravit 51, 167 (2019) introwheel John A. Wheeler, On the nature of quantum geometrodynamics https://doi.org/10.1016/0003-4916(57)90050-7Annals of Physics 2, 604 (1957) introMTY1 Michael S. Morris, and Kip S. Thorne, Wormholes in spacetime and their use for interstellar travel: A tool for teaching general relativity https://doi.org/10.1119/1.15620American Journal of Physics 56, 395–412 (1988) introMTY2 Michael S. Morris, Kip S. Thorne, and Ulvi Yurtsever, Wormholes, Time Machines, and the Weak Energy Condition https://doi.org/10.1103/PhysRevLett.61.1446Phys. Rev. Lett. 61, 1446 (1988) WHbook Matt Visser, Lorentzian Wormholes: From Einstein to Hawking American Institute of Physics (1995) Hochberg-Visser David Hochberg and Matt Visser, Geometric structure of the generic static traversable wormhole throat https://link.aps.org/doi/10.1103/PhysRevD.56.4745Phys. Rev. 
D 56, 4745 (1997) Hochberg-Visser2 David Hochberg and Matt Visser, Null Energy Condition in Dynamic Wormholes https://link.aps.org/doi/10.1103/PhysRevLett.81.746Phys. Rev. Lett. 81, 746 (1998) intro64 David Hochberg and Matt Visser, Dynamic wormholes, antitrapped surfaces, and energy conditions https://doi.org/10.1103/PhysRevD.58.044021Phys. Rev. D 58, 044021 (1998) viser-d-1989 Matt Visser, Traversable wormholes: Some simple examples https://link.aps.org/doi/10.1103/PhysRevD.39.3182Phys. Rev. D 39, 3182 (1989) viser-n-1989 Matt Visser, Traversable wormholes from surgically modified Schwarzschild spacetimes https://doi.org/10.1016/0550-3213(89)90100-4Nucl. Phys. B 328, 203 (1989) Eiroa-2005 E. F. Eiroa and C. Simeone, Thin-shell wormholes in dilaton gravity https://link.aps.org/doi/10.1103/PhysRevD.71.127501Phys. Rev. D 71, 127501 (2005) Zaslavski-2007 O. B. Zaslavskii, Traversable wormholes: Minimum violation of the null energy condition revisited https://link.aps.org/doi/10.1103/PhysRevD.76.044017Phys. Rev. D 76, 044017 (2007) introEC2 Eric Poisson and Matt Visser, Thin-shell wormholes: Linearization stability https://doi.org/10.1103/PhysRevD.52.7318Phys. Rev. D 52, 7318 (1995) introor1 S. Habib Mazharimousavi, M. Halilsoy, and Z. Amirabi, Stability of thin-shell wormholes supported by normal matter in Einstein-Maxwell-Gauss-Bonnet gravity https://doi.org/10.1103/PhysRevD.81.104002Phys. Rev. D 81, 104002 (2010) introor2 Mohammad Reza Mehdizadeh, Mahdi Kord Zangeneh, and Francisco S.N. Lobo, Higher-dimensional thin-shell wormholes in third-order Lovelock gravity https://doi.org/10.1103/PhysRevD.92.044022Phys. Rev. D 92, 044022 (2015) lobo-2011 Francisco S. N. Lobo, Wormhole geometries in modified gravity https://doi.org/10.1063/1.4734456F. S. N. Lobo, AIP Conf. Proc. 1458, 447 (2011) harko-2013 T. Harko, F. S. N. Lobo, M. K. Mak and S. V. Sushkov, Modified-gravity wormholes without exotic matter Phys. Rev. D 87, 067504 (2013) intro52 A. G. Agnese and M. La Camera, Wormholes in the Brans-Dicke theory of gravitation https://doi.org/10.1103/PhysRevD.51.2011Phys. Rev. D 51, 2011 (1995) intro53 Kamal K. Nandi, Anwarul Islam, and James Evans, Brans wormholes https://doi.org/10.1103/PhysRevD.55.2497Phys. Rev. D 55, 2497 (1997) intro54 Francisco S. N. Lobo and Miguel A. Oliveira, General class of vacuum Brans-Dicke wormholes https://doi.org/10.1103/PhysRevD.81.067501Phys. Rev. D 81, 067501 (2010) intro55 Sergey V. Sushkov and Sergey M. Kozyrev, Composite vacuum Brans-Dicke wormholes https://doi.org/10.1103/PhysRevD.84.124026Phys. Rev. D 84, 124026 (2011) bran Ernesto F. Eiroa, Martin G. Richart, Claudio Simeone, Thin-shell wormholes in Brans-Dicke gravity https://doi.org/10.1016/j.physleta.2008.10.065 Phys.Lett. A 373, 1 (2008) dotti-2007 G. Dotti, J. Oliva and R. Troncoso, Exact solutions for the Einstein-Gauss-Bonnet theory in five dimensions: Black holes, wormholes, and spacetime horns https://link.aps.org/doi/10.1103/PhysRevD.76.064038Phys. Rev. D 76, 064038 (2007) dotti-2009 G. Dotti, J. Oliva and R. Troncoso, Vacuum solutions with nontrivial boundaries for the Einstein-Gauss-Bonnet theory https://doi.org/10.1142/S0217751X09045248Int. J. Mod. Phys. A 24, 1690 (2009) intro56 Francisco S. N. Lobo and Miguel A. Oliveira, Wormhole geometries in f(R) modified theories of gravity https://doi.org/10.1103/PhysRevD.80.104012Phys. Rev. D 80, 104012 (2009) intro57 Nadiezhda Montelongo Garcia and Francisco S. N. 
Lobo, Wormhole geometries supported by a nonminimal curvature-matter coupling https://doi.org/10.1103/PhysRevD.82.104018Phys. Rev. D 82, 104018 (2010) intro58 Nadiezhda Montelongo Garcia and Francisco S N Lobo, Nonminimal curvature–matter coupled wormholes with matter satisfying the null energy condition https://doi.org/10.1088/0264-9381/28/8/085018Class. Quantum Grav. 28 085018 (2011) Bhattacharya-2017 S. Bhattacharya and S. Chakraborty, f(R) gravity solutions for evolving wormholes https://doi.org/10.1140/epjc/s10052-017-5131-zEur. Phys. J. C 77, 558 (2017) intro59 Rajibul Shaikh and Sayan Kar, Wormholes, the weak energy condition, and scalar-tensor gravity https://doi.org/10.1103/PhysRevD.94.024011Phys. Rev. D 94, 024011 (2016) camera-2003 M. La Camera, Wormhole solutions in the Randall–Sundrum scenario https://doi.org/10.1016/j.physletb.2003.08.042Phys. Lett. B 573, 27 (2003) higherdim-dotti G. Dotti, J. Oliva and R. Troncoso, Static wormhole solution for higher-dimensional gravity in vacuum https://link.aps.org/doi/10.1103/PhysRevD.75.024002Phys. Rev. D 75, 024002 (2007) Lovelock-gravity J. Matulich and R. Troncoso, Asymptotically Lifshitz wormholes and black holes for Lovelock gravity in vacuum https://doi.org/10.1007/JHEP10(2011)118J. High Energ. Phys. 2011, 118 (2011) Torii-2013 Takashi Torii and Hisa-aki Shinkai, Wormholes in higher dimensional space-time: Exact solutions and their linear stability analysis https://link.aps.org/doi/10.1103/PhysRevD.88.064027Phys. Rev. D 88, 064027 (2013) dgp MG. Richarte, Wormholes and solitonic shells in five-dimensional DGP theory https://doi.org/10.1103/PhysRevD.82.044021 Phy.Rev. D 82, 044021 (2010) intro62 Sayan Kar and Deshdeep Sahdev, Evolving Lorentzian wormholes https://doi.org/10.1103/PhysRevD.53.722Phys. Rev. D 53, 722 (1996) intro63 N. Riazi, and B. Nasre Esfahani, Time-dependent wormholes in an expanding universedominated by traceless matter https://doi.org/10.1023/A:1002434423671Astrophysics and Space Science 271, 237–243 (2000) intro65 S. A. Hayward, Dynamic Wormholes https://doi.org/10.1142/s0218271899000286International Journal of Modern Physics D 08, 373-382 (1999) intro66 Sayan Kar, Evolving wormholes and the weak energy condition https://doi.org/10.1103/PhysRevD.49.862Phys. Rev. D 49, 862 (1994) intro67 Sergey V. Sushkov and Yuan-Zhong Zhang, Scalar wormholes in a cosmological setting and their instability https://doi.org/10.1103/PhysRevD.77.024042Phys. Rev. D 77, 024042 (2008) intro68 Peter K. F. Kuhfittig, Static and dynamic traversable wormhole geometries satisfying the Ford-Roman constraints https://doi.org/10.1103/PhysRevD.66.024015Phys. Rev. D 66, 024015 (2002) intro69 Luis A. Anchordoqui, Diego F. Torres, Marta L. Trobo, and Santiago E. Perez Bergliaffa, Evolving wormhole geometries https://doi.org/10.1103/PhysRevD.57.829Phys. Rev. D 57, 829 (1998) intro71 M. LA Camera, On thin-shell wormholes evolving in flat FRW spacetime https://doi.org/10.1142/s0217732311035407Modern Physics Letters A 26, 857 (2011) intro73 Aarón V B Arellano and Francisco S N Lobo, Evolving wormhole geometries within nonlinear electrodynamics https://doi.org/10.1088/0264-9381/23/20/004Class. Quantum Grav. 23 5811 (2006) intro74 B.N. Esfahani, The null energy condition in wormholes with cosmological constant https://doi.org/10.1007/s10714-005-0018-yGen Relativ Gravit 37, 271–279 (2005) intro75 M. Cataldo, F. Aróstica, and S. 
Bahamonde, (N+1)-dimensional Lorentzian evolving wormholes supported by polytropic matter https://doi.org/10.1140/epjc/s10052-013-2517-4Eur. Phys. J. C 73, 2517 (2013) intro76 Mauricio Cataldo and Sergio del Campo, Two-fluid evolving Lorentzian wormholes https://doi.org/10.1103/PhysRevD.85.104010Phys. Rev. D 85, 104010 (2012) intro79 N. Riazi, and M.R. Bordbar, Time-dependent wormhole in an inhomogeneous spherically symmetric space time with a cosmological constant https://doi.org/10.1007/s10509-010-0435-6Astrophys Space Sci 331, 315–320 (2011) introras2 K. A. Bronnikov, J. C. Fabris, O. F. Piattella, and E. C. Santos, Static, spherically symmetric solutions with a scalar field in Rastall gravity https://doi.org/10.1007/s10714-016-2152-0Gen Relativ Gravit 48, 162 (2016) introras3 Mustafa, G. and Shahzad, M. R. and Abbas, G. and Xia, T., Stable wormholes solutions in the background of Rastall theory https://doi.org/10.1142/S0217732320500352Modern Physics Letters A 35, 2050035 (2020) lob Iarley P. Lobo, Martín G. Richarte, J. P. Morais Graça nd H. Moradpour, Thin-shell wormholes in Rastall gravity https://doi.org/10.1140/epjp/s13360-020-00553-yEur. Phys. J. Plus 135, 550 (2020). naz N. Nazavari, K. Saaidi and A. Mohammadi, Wormhole solution in modified teleparallel-Rastall gravity and energy conditions https://doi.org/10.1007/s10714-023-03093-9 Gen. Relativ. Gravit 55, 45 (2023). traversable H. Moradpour, N. Sadeghnezhad, and S. H. Hendi, Traversable asymptotically flat wormholes in Rastall gravity https://doi.org/10.1139/cjp-2017-0040Canadian Journal of Physics 95, 1257 (2017) introras4 Shibaji Halder, Subhra Bhattacharya, and Subenoy Chakraborty, Wormhole solutions in Rastall gravity theory https://doi.org/10.1142/S0217732319500950Modern Physics Letters A 34, 1950095 (2019) Bhat-2021 S Bhattacharya, T Bandyopadhyay, Revisiting the evolving Lorentzian wormhole: a general perspective https://doi.org/10.1007/s10714-021-02878-0Gen Relativ Gravit 53, 104 (2021) ext1 Mohammad Reza Mehdizadeh, Amir Hadi Ziaie, Dynamical wormholes in Lovelock gravity https://doi.org/10.48550/arXiv.2111.14828Phys. Rev. D 104, 104050 (2021) ext2 Mohammad Reza Mehdizadeh, Dynamical wormholes in Einstein-Gauss-Bonnet gravity https://doi.org/10.1140/epjc/s10052-020-7871-4Eur. Phys. C 80:310 (2020) kar S. Kar, and D. Saahdev, Restricted class of traversable wormholes with traceless matter, https://link.aps.org/doi/10.1103/PhysRevD.52.2030Physical Review D 52,4 2030, (1995). EOS1 Luis A. Anchordoqui, Santiago Perez Bergliaffa, and Diego F. Torres, Brans-Dicke wormholes in nonvacuum spacetime https://doi.org/10.1103/PhysRevD.55.5226Phys. Rev. D 55, 5226 (1997) EOS2 Mohammad Reza Mehdizadeh and Francisco S.N. Lobo, Novel third-order Lovelock wormhole solutions https://doi.org/10.1103/PhysRevD.93.124014Phys. Rev. D 93, 124014 (2016) dim Mohammad Reza Mehdizadeh, Mahdi Kord Zangeneh, and Francisco S. N. Lobo Einstein-Gauss-Bonnet traversable wormholes satisfying the weak energy condition https://doi.org/10.1103/PhysRevD.91.084004 Phys.Rev.D 91, 084004 (2015). PRD-moradpour H. Moradpour, Alexander Bonilla, Everton M.C. Abreu, and Jorge Ananias Neto, Accelerated cosmos in a nonextensive setup https://doi.org/10.1103/PhysRevD.96.123504Phys. Rev. D 96, 123504 (2017) El-Hanafy Waleed El Hanafy, Impact of Rastall Gravity on Mass, Radius, and Sound Speed of the Pulsar PSR J0740+6620 https://doi.org/10.3847/1538-4357/ac9410ApJ 940, 51 (2022) stz967 R. Li, J. Wang, Z. Xu, and X. 
Guo, Constraining the Rastall parameters in static space–times with galaxy-scale strong gravitational lensing https://doi.org/10.1093/mnras/stz96Monthly Notices of the Royal Astronomical Society 486, 2407 (2019) moradpur-beta1.6 H. Moradpour and I. G. Salako, Thermodynamic Analysis of the Static Spherically Symmetric Field Equations in Rastall Theory https://doi.org/10.1155/2016/3492796dvances in High Energy Physics 2016, 3492796 (2016) cjp Y. Heydarzade, N. Riazi, and H. Moradpour, Phantom wormhole solutions in a generic cosmological constant background https://doi.org/10.1139/cjp-2015-0359Canadian Journal of Physics 93, 1523 (2015)
http://arxiv.org/abs/2307.04860v1
20230710191606
Levi problem in context of generalised convexity
[ "Krzysztof J. Ciosmak" ]
math.CV
[ "math.CV", "math.FA", "Primary: 31C05, 31C10, 32E40, 46E10, Secondary: 06B23, 31D05, 32T05,\n 32T35, 32U05, 46E05, 46J10, 52A01" ]
We provide a generalisation of the Levi problem to the context of generalised convexity, with an elementary proof. We show that the Cartan–Thullen theorem and its generalisation, which we prove here, can be seen as consequences of the classical theorems of functional analysis. Furthermore, we characterise the domains of holomorphy, and their generalisations, as precisely the spaces that are complete. § INTRODUCTION §.§ Generalised convexity In <cit.> Fan introduced a notion of generalised closed and convex set with respect to a family of functions, which allowed for a generalisation of the Krein–Milman theorem. Here we shall be concerned with sets that are convex with respect to a cone ℱ of functions on a topological space Ω. A closed set K⊂Ω shall be called ℱ-convex whenever it is equal to its closed ℱ-convex hull clConv_ℱK={ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ}. We observe that not only does the notion of generalised convexity generalise classical convexity, when ℱ is the cone of lower semi-continuous and convex functions, but also it includes pseudoconvexity of complex analysis. We refer the reader to the book of Hörmander <cit.> for an account comprising matter on notions of convexity with respect to: subharmonic functions, <cit.>, plurisubharmonic functions, <cit.>. Note that the latter is equivalent to the notion of holomorphic convexity of complex analysis, see the book of Krantz <cit.>. The main result of the paper is a generalisation of the Cartan–Thullen theorem, see Theorem <ref>, to the setting of generalised convexity, under mild assumptions. This includes the setting of Stein spaces. Another achievement is a definition of a convex set with respect to a family of functions, regardless of the closedness of the set. This is achieved by means of embedding the space via the Gelfand transform, see Section <ref> and Section <ref>. Let us mention that the notion of convexity has been developed and further generalised over the years, see <cit.>, <cit.> and <cit.>. These developments mainly were concerned with various dualities, notions of subdifferentials and Fenchel transforms. §.§ The Levi problem and the Cartan–Thullen theorem In the setting of complex analysis the classical result of Cartan and Thullen, see e.g. <cit.>, <cit.>, <cit.>, is concerned with holomorphically convex subsets of ℂ^n.
Let Ω⊂ℂ^n be an open set. For a set B⊂Ω its closed holomorphically convex hull is equal to {z∈Ω|h(z)≤suph(B) for any holomorphic h on Ω}. Then Ω is said to be holomorphically convex, whenever any of its compact subsets has relatively compact holomorphically convex hull in B. The Cartan–Thullen theorems reads as follows. Suppose that Ω⊂ℂ^n is an open set. Then the following conditions are equivalent: * for any compact set K⊂Ω, its closed holomorphically convex hull is a compact subset of Ω, * Ω is a domain of holomorphy, i.e., there are no open sets Ω_1,Ω_2⊂ℂ^n such that * ∅≠Ω_1⊂Ω_2∩Ω, * Ω_2 is connected and Ω∖Ω_2≠∅, * for each holomorphic a on Ω there exists a holomorphic function â on Ω_2 such that â=a on Ω_1, * Ω is a domain of existence, i.e., there exists holomorphic a on Ω such that there are no open sets Ω_1,Ω_2 satisfying <ref>, <ref> and such that a=â on Ω_1 for some holomorphic function â on Ω_2, * there exists a continuous, proper plurisubharmonic function pΩ→ℝ. Above, we used the following definition. A function pΩ→ℝ on a topological space Ω is called proper, whenever preimages via p of compactae are compact. The equivalence of <ref>, <ref> and <ref> is also known as the Levi problem. This problem, posed in <cit.>, was concerned with geometric characterisation of domains of existence in ℂ^n. It was solved independently by Oka <cit.>, Bremermann and Norguet. In <cit.> it is claimed that the equivalence is very difficult to prove. Let us mention existence of another apporach developed by Hörmander <cit.>, <cit.>, which employs methods of partial differential equations. We shall provide a simple and elementary proof of this equivalence in the context of generalised convexity. We refer the reader also to a survey on the Levi problem <cit.>, where the Cartan–Thullen theorem is presented <cit.> and to the book <cit.> for a thorough discussion of the problem in the setting of complex analysis. We refer also to <cit.> for a more recent paper on the topic and to <cit.> and <cit.>. §.§ Main results We shall show that Theorem <ref> can be adapted to the setting of generalised convexity. Our proofs rely purely on functional analytic techniques, including the Banach–Alaoglu theorem and the Banach–Steinhaus uniform boundedness principle. Some of the implications admit elementary proofs, but we also provide proofs of these implications that rely on the Krein–Šmulian theorem and on the Mazur theorem. Definition <ref>, <ref>, and Definition <ref> below are analogues of domain of holomorphy and domain of existence, respectively. These definitions we find most useful in our developments. Let Ω be a topological space and let 𝒜 be a linear space of continuous functions on Ω. We shall say that: * a net (ω_α)_α∈ A in Ω is a Cauchy net with respect to 𝒜 whenever for each a∈𝒜, (a(ω_α))_α∈ A is a Cauchy net in ℝ, * Ω is complete with respect to 𝒜 whenever every Cauchy net in Ω is convergent in τ(𝒜). Let Ω be a set and let 𝒜 be a set of functions on Ω. The coarsest topology on Ω with respect to which all functions in 𝒜 are continuous we shall call the topology generated by 𝒜 and denote by τ(𝒜). The above defined topology τ(𝒜) is generated by the basis sets of the form {ω∈Ω|a_i(ω)-a_i(ω_0)≤ϵ_i for i=1,2,…,k}, where ϵ_i>0, ω_0∈Ω and a_i∈𝒜 for i=1,2,…,k. Let Ω be a set. Let 𝒜 be a linear space of continuous functions on Ω. Suppose that Ω, equipped with the weak topology τ(𝒜) induced by 𝒜, is metrisable. Let us denote by Ω̅ the completion of Ω. 
We shall say that Ω is an 𝒜-space whenever there exists a∈𝒜 that does not extend to a continuous function on Ω∪{ω} for any element ω∈Ω̅∖Ω. If Ω is an 𝒜-space, is also complete with respect to 𝒜. Suppose that Ω⊂ℂ^n is an open set and that 𝒜 is the space of real-parts of holomorphic functions on Ω. If Ω is an 𝒜-space, then it is a domain of existence. We shall say that a family of functions is symmetric if together with any function it contains its negative. Let ℱ be a cone of functions on a set Ω. We say that P is an ℱ-polygon whenever there exists a finite sequence (f_i)_i=1^k of functions in ℱ such that P={ω∈Ω| f_i(ω)≤ 1 for i=1,2…,n}. We shall say that an ℱ-polygon is symmetric whenever the corresponding family can be taken to be symmetric. Let Ω be a σ-compact and locally compact Hausdorff topological space. Let 𝒜 be a linear space of continuous functions on Ω that contains constants and separates points of Ω. Then the following conditions are equivalent: * for any compact set K⊂Ω and for any C≥ 1 the set {ω∈Ω| a(ω)≤ Csupa(K) for all a∈𝒜} is a compact subset of Ω, * there exists a non-negative, continuous and proper function p on Ω such that p=sup{a_α|α∈ A} for some symmetric family (a_α)_α∈ A of functions in 𝒜, * there exists a non-negative, proper function p on Ω, bounded on compactae, such that p=sup{a_α|α∈ A} for some symmetric family (a_α)_α∈ A of functions in 𝒜, * there exists a family (P_i)_i=1^∞ of compact, symmetric 𝒜-polygons such that ⋃_i=1^∞P_i=Ω, P_i⊂intP_i+1 for i=1,2,…. Moreover, the suprema in <ref> and in <ref> may be locally taken over finite families of functions in 𝒜. Suppose moreover that Ω is separable and 𝒜 is closed with respect to the compact-open topology. Then the above conditions are equivalent to Ω being complete with respect to 𝒜. If moreover 𝒜 consists of real-parts of a complex algebra, then the above conditions are equivalent to Ω being an 𝒜-space. If Ω is locally compact, Hausdorff topological space and 𝒜 consists of continuous functions that separate points of Ω, then the topology of Ω is the weak topology induced by 𝒜, see Lemma <ref>, <ref>. If Ω is σ-compact, locally compact, Hausdorff topological space and 𝒜 is a closed, in the compact-open topology, algebra of continuous functions that separate points of Ω, then 𝒜 is the space of all continuous functions on Ω, as follows by the Stone–Weierstrass theorem. Let us note that if <ref> is satisfied for 𝒜 then it is also satisfied for the completion of 𝒜, with respect to the metric of locally uniform convergence. We can think of function p of <ref> or of <ref> as an analogue of a norm on the space Ω. The assumption that 𝒜 separates points of Ω is technical. If 𝒜 does not satisfy this assumption, then we may pass to the quotient space and infer the same equivalences as in Theorem <ref>. Let us note that the condition <ref> of Theorem <ref> differs from the condition <ref> of Theorem <ref>, as it involves arbitrary constants. We shall in Proposition <ref>, that when 𝒜 consists of real-parts of a complex algebra ℬ, then <ref> is equivalent to the condition that for any compact K⊂Ω the set {ω∈Ω|b(ω)≤supb(K) for all b∈ℬ} is a compact subset of Ω. §.§.§ Families satisfying maximum principle As we see, Theorem <ref> does not exactly correspond to Theorem <ref>, as <ref> of Theorem <ref> requires not only compactness of the ℱ-convex hull of sets, where ℱ is the cone generated by 𝒜, but compactness of a larger family of sets. 
However, under the additional assumption that the family of considered functions satisfies the maximum principle, we may show that the existence of an appropriate exhaustion function p, cf. Definition <ref>, is equivalent to ℱ-convexity of the space Ω, see Definition <ref>. We shall say that a convex cone ℱ of functions on a topological space Ω satisfies the maximum principle whenever for any: * function f∈ℱ that is non-constant, * compact set K⊂Ω, * open set U⊂Ω such that K⊂ U, there is sup f(K)<sup f(U). If Ω is compact, then one may take K=Ω. Thus, if ℱ satisfies the maximum principle, then any function f∈ℱ has to be constant. Let Ω be a σ-compact, locally compact, Hausdorff connected topological space. Let ℱ be a convex cone of continuous functions on Ω that contains constants and satisfies the maximum principle. Then the following conditions are equivalent: * for any compact set K⊂Ω the set {ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ} is a compact subset of Ω, * there exists a non-negative, continuous and proper function p on Ω such that p=sup{f_α|α∈ A} for some family (f_α)_α∈ A of functions in ℱ, * there exists a non-negative, proper function p on Ω, bounded on compactae, such that p=sup{f_α|α∈ A} for some family (f_α)_α∈ A of functions in ℱ, * there exists a family (P_i)_i=1^∞ of compact ℱ-polygons such that ⋃_i=1^∞P_i=Ω, F_i⊂intP_i+1 for i=1,2,…. Moreover, the suprema in <ref> and in <ref> may be locally taken over finite families of functions in ℱ. One may take for ℱ a linear space of continuous functions on Ω. This allows for a comparison of Theorem <ref> and Theorem <ref>. As we have already noticed in Remark <ref>, we shall show that when ℱ is an algebra, or consists of real-parts of a complex algebra, then the conditions <ref> of Theorem <ref>, where 𝒜=ℱ, and <ref> of Theorem <ref> are equivalent, see Proposition <ref>. For an equivalent characterisation of <ref> of Theorem <ref>, see Proposition <ref>. However, note that in general <ref> of Theorem <ref> and <ref> of Theorem <ref> are not equivalent as observed in Remark <ref>. We shall that the cone ℱ of functions on a topological space Ω is local, whenever for any family (U_i)_i∈ I of distinct connected components of Ω and any family (f_i)_i∈ I⊂ℱ, there exists f∈ℱ such that f=f_i on U_i. Suppose that Ω is a σ-compact, locally compact, Hausdorff topological space. Let ℱ be a local convex cone of continuous functions that satisfies the maximum principle. Then the following conditions are equivalent: * for any compact set K⊂Ω the set {ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ} is a compact subset of Ω, * there exists a non-negative, continuous and proper function p on Ω such that p=sup{f_α|α∈ A} for some family (f_α)_α∈ A of functions in ℱ, * there exists a non-negative, proper function p on Ω, bounded on compactae, such that p=sup{f_α|α∈ A} for some family (f_α)_α∈ A of functions in ℱ. Moreover, the suprema in <ref> and in <ref> may be locally taken over finite families of functions in ℱ. §.§ Examples We shall now exhibit several examples how Theorem <ref> and Theorem <ref> applies to various settings. §.§.§ Plurisubharmonic functions and holomorphic convexity The example that inspired this research is concerned with plurisubharmonic functions and holomorphic convexity. We shall see how Theorem <ref> follows from Theorem <ref>. Let Ω⊂ℂ^n be an open set. Let ℬ be the algebra of holomorphic functions on Ω and let 𝒜 is the set of its real-parts, i.e., the space of pluriharmonic functions, see e.g. <cit.>. Suppose that Theorem <ref> holds true. 
By Remark <ref>, <ref> of Theorem <ref> is equivalent to <ref> of Theorem <ref>. Note that Ω is separable, and 𝒜 is closed in the compact-open topology, cf. Definition <ref> and Remark <ref>. Therefore Theorem <ref> shows that <ref> of Theorem <ref> is equivalent to Ω being complete with respect to 𝒜, as well as to Ω being an 𝒜-space. It suffices to observe that the condition that Ω is an 𝒜-space implies readily that it is a domain of existence, which in turn trivially implies that it is a domain of holomorphy. The latter is known to imply <ref> of Theorem <ref> by a simple argument involving power series expansion, see <cit.>. The function p that we obtain by Theorem <ref> in <ref> is plurisubharmonic, as it is locally a maximum of a finite family of pluriharmonic functions. Therefore equivalence of <ref> and <ref> of Theorem <ref> follows from equivalence of <ref> and <ref> of Theorem <ref>. We see that the two last assertions of Theorem <ref> show that Ω is complete with respect to 𝒜, see Definition <ref>, and that it is an 𝒜-space, provided that Ω is holomorphically convex. The completeness property of Ω implies immediately that Ω is a domain of holomorphy, which is the difficult part of the equivalence of <ref> and <ref> of Theorem <ref>, according to <cit.>. In our proof, it follows from the Banach–Alaoglu theorem. The holomorphically convex sets are exactly sets convex with respect to the cone of plurisubharmonic functions. The way to show this is as follows. If h is holomorphic, then equality exp(ℜ𝔢h)=exp(h), and the fact that h is plurisubharmonic imply that for any set K⊂Ω, its closed holomorphically convex hull and its closed plurisubharmonically convex hull coincide, see Proposition <ref>. When ℱ is the cone of plurisubharmonic functions, then Theorem <ref> and Proposition <ref> allow us to recover also <cit.>. The usual definition of plurisubharmonicity, see e.g. <cit.>, <cit.>, requires a plurisubharmonic function to be upper semi-continuous. Note however that with this definition, the space of plurisubharmonic functions is not a complete lattice. Note also that if a compact set K is ℱ-convex with respect to some family ℱ of functions, i.e., K={ω∈Ω f(ω)≤sup f(K) for all f∈ℱ}, then it is also convex with respect to the complete lattice cone generated by that family. Since the plurisubharmonic functions play a vital rôle in the theory of holomorphic convexity it seems that it is natural to define the space of plurisubharmonic functions as the complete lattice cone generated by real-parts of holomorphic functions. Since we shall not need to deal with general plurisubharmonic functions in our developments, we shall not pursue this topic further. §.§.§ Subharmonic functions When ℱ is the cone of subharmonic functions, then Theorem <ref> allows us to prove an analogue of <cit.>, cf. <cit.>. We shall not dwelve into details here, as they are completely analogous to the details in the previous section. §.§.§ Convex functions For related characterisation for convex functions, see a theorem <cit.> and its corollary <cit.>, which can be inferred from Theorem <ref>. §.§ Gelfand transform and relation to convexity The notion of convexity that we study here is naturally linked to the classical convexity via the Gelfand transform, see Section <ref>. The Gelfand transform in our setting can be viewed as a linearisation, as it replaces a possibly non-linear setting with a linear setting. 
The linearisation technique has been successful in a great many contexts, e.g., in the context of Lipschitz-free spaces, see e.g. <cit.>. Using the Gelfand transform, we can transfer linear notions to the setting of the space Ω, in a similar way to transference of the notion of convex hull, see Section <ref>, Definition <ref>. This allows to reduce studying Ω and 𝒜 to studying subsets of a linear space and linear, continuous functionals on that space. §.§ Relation to theorems of functional analysis Let us note the following analogies of Theorem <ref> and Theorem <ref> with classical results of functional analysis, see e.g. <cit.>. These results include the Krein–Šmulian theorem, <cit.>, the Banach–Alaoglu theorem and the Mazur theorem. Indeed, that completeness of Ω implies Theorem <ref>, <ref>, is follows from the Banach–Alaoglu theorem, cf. Lemma <ref> and proof of Theorem <ref>; that Theorem <ref>, <ref>, implies completeness of Ω is analogous to the Krein–Šmulian theorem, see <ref>. Moreover, that completeness of Ω implies <ref> of Theorem <ref> follows from the Mazur theorem. §.§ Other notions of convexity We refer the reader to <cit.> for another development concerning convexity with respect to algebras of functions rather than with respect to linear spaces. The theory in <cit.> does not however include analogues of our results and is concerned with the notion of naturality of the pair of the space and the function algebra on the space, which is close to our considerations of closedness of the embedding of Ω into 𝒜^*, cf. <cit.> and Theorem <ref>. Moreover, the generalisation of plurisubharmonicity in <cit.> is different than ours. §.§ Applications Let us mention that generalised convexity has been employed in the study of the monopolist's problem by Figalli, Kim and McCann <cit.> and by McCann and Zhang in <cit.>. An application of the results obtained in this paper is concerned with a generalisation of martingale transport of measures, which we investigate in <cit.>. Theorem <ref> is used there to provide a characterisation of spaces that admit an exhaustion function. In the setting of such spaces in <cit.> we extend the results of De March and Touzi <cit.>, Obłój and Siorpaes <cit.>, Ghoussoub, Kim, Lim <cit.> to the setting of martingales with respect to a linear space of functions. Let us mention also <cit.>, where we study a related concept of ordering of measures with respect to a cone of functions. The aforementioned extension allows for a new type of localisation-type results. Another type of localisation has been studied in <cit.>. This approach is related to optimal transport of vector measures <cit.>, <cit.> and extensions of Lipschitz maps <cit.>. § PRELIMINARIES Let us recall several definitions. We shall say that a topological space Ω is exhaustible by compactae if there exists a sequence (K_i)_i=1^∞ of compact subsets of Ω such that * Ω=⋃_i=1^∞K_i, * K_i⊂intK_i+1 for i=1,2,…. We shall say that Ω is σ-compact whenever there exist compact subsets (K_i)_i=1^∞ whose union is Ω. It is readily seen that the condition that Ω is exhaustible by compactae is equivalent to Ω being σ-compact and locally compact. Let Ω, Z be topological spaces. A map pΩ→ Z is said to be proper whenever preimages p^-1(Z) of compact sets in Z are compact sets in Ω. Let 𝒦 denote the family of all compact subsets of Ω. For any compact set K∈𝒦 and a∈𝒜 we put a_𝒞(K)=sup{a(ω)|ω∈ K}. The family of semi-norms (·_𝒞(K))_K∈𝒦 on 𝒜 defines a locally convex topology on 𝒜, which we shall denote below by τ_𝒦. 
Let us remark that the topology τ_𝒦 coincides with the compact-open topology on 𝒜. If not stated otherwise, we shall denote by 𝒜^* the space of all linear functionals on 𝒜, continuous with respect to the topology generated by the above family of semi-norms. The space 𝒜^* we shall consider equipped with the weak* topology generated by 𝒜. We refer to <cit.> and to <cit.> for a background on Cauchy nets and completeness in relation to linear topological spaces. If Ω is σ-compact, then 𝒜 in τ_𝒦 is metrisable, as the topology is induced by a countable separating family of semi-norms <cit.>. If moreover Ω is separable, then for each compact set K⊂Ω the space 𝒞(K) is separable, as follows by the Stone–Weierstrass theorem, so that 𝒜 is separable. Therefore separability and σ-compactness of Ω imply that 𝒜^* in the weak* topology is metrisable. Suppose that Ω is σ-compact and separable and that 𝒜 is closed under the topology τ_𝒦. Then 𝒜^* is complete with respect to 𝒜. Since Ω is σ-compact and separable, then τ_𝒦-topology on 𝒜 is metrisable and 𝒜^* is metrisable in the weak* topology, cf. Remark <ref> Suppose that (h_n)_n=1^∞ is a Cauchy sequence in 𝒜^*. It follows that for each a∈𝒜, (h_n(a))_n=1^∞ is convergent and we may define a linear functional h by the formula h(a)=lim_n→∞h_n(a) for a∈𝒜. We need to prove that h∈𝒜^*. Since 𝒜 is complete and metrisable, it follows by the Banach–Steinhaus uniform boundedness principle that there exist a compact set K∈𝒦 and C>0 such that h_n(a)≤ Ca_𝒞(K) for all n=1,2,… and all a∈𝒜. Thus h∈𝒜^*. Let Ω be a set. Let 𝒜 be a set of functions on Ω with values in (-∞,∞]. A set 𝒦 of functions on Ω is said to be: * a convex cone whenever α f+β g ∈𝒦 for any f,g∈𝒦 and any numbers α,β≥ 0; * stable under suprema provided that the supremum sup{f_i| i ∈ I} of any family (f_i)_i∈ I⊂𝒦 belongs to 𝒦; * a complete lattice cone whenever it is a convex cone stable under suprema; * the complete lattice cone generated by 𝒜 whenever it is the smallest complete lattice cone of functions on Ω containing 𝒜; * separating points of Ω whenever for any two distinct ω_1,ω_2∈Ω, there exist f,g∈𝒦 such that f(ω_1)≠ g(ω_2). Let us stress that we allow the functions to take value +∞. Note however that this is not allowed for linear subspaces. We refer to <cit.> for a study of the above notions in contexts related to this work. Suppose that 𝒜 is a linear space of functions on Ω. Let ℱ denote the complete lattice cone generated by 𝒜. Then for any f∈ℱ there exists a family (a_i)_i∈ I⊂𝒜 such that f(ω)=sup{a_i(ω)| i∈ I} for all ω∈Ω. Let 𝒢 be the set of functions of the form (<ref>). We shall show that 𝒢 is a convex cone. Let α,β≥ 0, f,g,∈𝒢. Let (a_i)_i∈ I, (a_j)_j∈ J⊂𝒜 be families of functions in 𝒜 corresponding to f and g respectively. Then α f+β g=sup{α a_i+β a_j| j∈ J, i∈ I}∈𝒢. Clearly, ℱ⊃𝒢. Since 𝒢 is a complete lattice cone, ℱ=𝒢. If Y is the space all linear, continuous functionals on a linear topological space X, then the complete lattice cone generated by Y is the set of all convex, lower semi-continuous functions on X. We refer the reader to <cit.> for a proof of this fact. Let ℱ be the complete lattice cone of functions on Ω that contains constants. Let p∈ℱ. Let ξℝ→ℝ convex and non-decreasing. Then the composition ξ(p) belongs to ℱ. As ξ is convex and non-decreasing, there exists a family of non-decreasing affine functions (λ_j)_j∈ J on ℝ such that for all t≥ 0 there is ξ(t)=sup{λ_j(t)| j∈ J}. Then ξ(p)(ω)=sup{λ_j(p)(ω)| j∈ J}. 
For any j∈ J, there exist α_j≥ 0 and β_j∈ℝ such that λ_j(t)=α_j t+β_j for all t∈ℝ, so that α_j p+β_j∈ℱ for each j∈ J. Thus ξ(p)∈ℱ. § GELFAND TRANSFORM Given a topological space Ω and a linear space 𝒜 of continuous functions on Ω, one may consider the space of all linear functionals on 𝒜 that are evaluations at points of Ω. Let Φ:Ω→𝒜^* be given by the formula Φ(ω)(a)=a(ω) for ω∈Ω and a∈𝒜. We shall call Φ the Gelfand transform, cf. <cit.>. Let us recall, see <cit.>, that if ℬ is a Banach algebra, then one considers the space Δ of multiplicative linear functionals on ℬ. It can be shown that the kernels of these functionals are precisely the maximal ideals of ℬ. To any element b∈ℬ we can assign a function Φ(b) on Δ defined by the formula Φ(b)(ϕ)=ϕ(b) for ϕ∈Δ. With an appropriate topology on Δ this function is continuous, so that we have defined a map Φ:ℬ→𝒞(Δ). In this way we can treat ℬ as a linear space of functions on Δ and apply the developed theory. Let Ω be a σ-compact and locally compact Hausdorff space and let 𝒜 be a linear space of continuous functions on Ω that separates points of Ω. Let ℱ be the complete lattice cone generated by 𝒜. We equip 𝒜 with the topology τ_𝒦 and 𝒜^* with the weak* topology induced by 𝒜. Then: * Φ is a homeomorphism of Ω onto Φ(Ω), * for any a∈𝒜, a(Φ^-1) extends to a weakly* continuous linear map on 𝒜^*, * for any weakly* continuous linear map h on 𝒜^*, the function ω↦ h(Φ(ω)) belongs to 𝒜, * for any f∈ℱ, f(Φ^-1) extends to a map in the complete lattice cone G of functions on 𝒜^* generated by 𝒜, * for any map g∈ G, the function ω↦ g(Φ(ω)) belongs to ℱ. We shall first show that Φ is a homeomorphism onto Φ(Ω). As 𝒜 separates points of Ω and consists of continuous functions, Φ is injective and continuous. Therefore, for any compact K⊂Ω, the restriction of Φ to K is a homeomorphism onto the corresponding image. As Ω is exhaustible by compactae, cf. Remark <ref>, there exist compact sets (K_i)_i=1^∞ such that ⋃_i=1^∞K_i=Ω, K_i⊂intK_i+1 for i=1,2,…. Therefore for any open set U⊂Ω the set Φ(U)=⋃_i=1^∞Φ(U∩intK_i) is open in Φ(Ω). This is to say, Φ is a homeomorphism. Let a∈𝒜. The extension in <ref> is given by the formula 𝒜^*∋ a^*↦ a^*(a)∈ℝ. We shall now define an extension for f∈ℱ. By Lemma <ref> for f∈ℱ there exists a family (a_i)_i∈ I of elements of 𝒜 such that f=sup{a_i| i∈ I}. We define a map 𝒜^*∋ a^*↦sup{a^*(a_i)| i∈ I}∈ (-∞,∞]. By (<ref>), this is an extension. As, for each a∈𝒜, the functional a^*↦ a^*(a) is weakly* continuous, this extension belongs to the complete lattice cone G generated by 𝒜. Thus <ref> is proven. Note that any weakly* continuous linear functional h on 𝒜^* is given by some element a_0∈𝒜. Therefore for ω∈Ω, h(Φ(ω))=a_0(ω). This proves <ref>. Point <ref> is proven similarly. Suppose that Ω is a separable, σ-compact, locally compact Hausdorff topological space. Suppose also that 𝒜 is a linear space of continuous functions on Ω that separates points of Ω and is closed under τ_𝒦. Then Ω is complete with respect to 𝒜 if and only if Φ(Ω) is closed in the weak* topology. Note that Lemma <ref> tells us that, under the current assumptions, 𝒜^* is complete with respect to the weak* topology. Therefore, thanks to Lemma <ref>, <ref>, closedness of Φ(Ω) is equivalent to completeness of Ω with respect to 𝒜. § ℱ-CONVEX SETS We shall continue to study a linear space 𝒜 of continuous functions on a topological space Ω. Typically, ℱ shall denote the complete lattice cone generated by 𝒜. As announced in the introduction, we shall study the following hulls of sets, which are analogues of convex hulls. 
Let ℱ be a convex cone of functions on Ω. Let S⊂Ω. We define the closed ℱ-convex hull of S by the formula clConv_ℱS={ω∈Ω| f(ω)≤sup f(S) for all f∈ℱ}. For a discussion of this notion in the setting of complex analysis we refer the reader to <cit.>. For a discussion in other settings see <cit.>. Let S⊂Ω. Then clConv_ℱS={ω∈Ω| a(ω)≤sup a(S) for all a∈𝒜}. Any element of ℱ is a supremum of elements of 𝒜, by Lemma <ref>. The claim follows. Definition <ref> generalises the closed convex hull of a set, when 𝒜 is taken to be the space of linear functions on a vector space, as follows by the Hahn–Banach theorem. When we take 𝒜 to be the space of real-parts of holomorphic functions, then, according to Proposition <ref>, it yields the holomorphically convex hull of a set. Let S⊂Ω. Then clConv_ℱS=Φ^-1(clConv_GΦ(S)), where G is the complete lattice cone of convex, lower semi-continuous functions on 𝒜^* generated by evaluations on elements of 𝒜. By Remark <ref> and Lemma <ref> clConv_GΦ(S)={h∈𝒜^*| h(a)≤sup Φ(S)(a) for all a∈𝒜}. Since for any a∈𝒜, Φ(S)(a)=a(S), we see that Φ^-1(clConv_GΦ(S))={ω∈Ω| a(ω)≤sup a(S) for all a∈𝒜}, which is exactly what we were to prove. The above proposition tells us that, up to a change of coordinates, the closed convex hull with respect to the complete lattice cone ℱ is the usual closed convex hull with respect to convex, lower semi-continuous functions. Lemma <ref>, <ref>, and Proposition <ref> show that the following definition of the ℱ-convex hull is natural. Let S⊂Ω. We define the ℱ-convex hull of S by the formula Conv_ℱS= Φ^-1(Conv_GΦ(S)). We shall say that S⊂Ω is ℱ-convex whenever S=Conv_ℱS. Here G is the complete lattice cone of convex, lower semi-continuous functions on 𝒜^* generated by evaluations on elements of 𝒜. Note that above Conv_GΦ(S) denotes the usual convex hull of the set Φ(S)⊂𝒜^*. Moreover, if S⊂Ω is closed, then it is ℱ-convex if and only if S=clConv_ℱS. This is to say, Definition <ref> and Definition <ref> are consistent. Suppose that f∈ℱ. Then for any t∈ℝ the sets f^-1((-∞,t)) and f^-1((-∞,t]) are ℱ-convex. The claim follows by Definition <ref>, Lemma <ref>, <ref> and an analogous claim for convex, lower semi-continuous functions on 𝒜^*. Let ℱ be a cone of continuous functions on Ω that contains constants. Let K⊂Ω be compact and let C≥ 1. Let ω∈Ω. The following conditions are equivalent: * f(ω)≤ C sup f(K) for all f∈ℱ, * f(ω)≤ C (sup f(K))_+ for all f∈ℱ, * ω∈clConv_ℱK. Suppose that <ref> holds true. Let 𝒢 denote the complete lattice cone generated by ℱ. Then also g(ω)≤ C (sup g(K))_+ for all g∈𝒢. Lemma <ref> implies that if ξ is a non-negative, convex and increasing function on [0,∞) then for any f∈ℱ, ξ(f_+(ω))≤ C sup ξ(f_+(K)). For k≥ 2, the function t↦ t^k is convex and increasing on [0,∞). It follows that f_+^k(ω)≤ C sup f_+^k(K). Taking roots and letting k tend to infinity yields f_+(ω)≤sup f_+(K). Then applying the above to f-inf f(K∪{ω})∈ℱ shows that f(ω)≤sup f(K). That is, ω belongs to clConv_ℱK. The converse implication is trivial. Let us note that the above does not imply that <ref> of Theorem <ref> and <ref> of Theorem <ref> are equivalent. Indeed, in general they are not. For example, one can take 𝒜 to be the linear space of affine functions on an open, convex set Ω⊊ℝ^n. Then Ω is ℱ-convex, but <ref> of Theorem <ref> is not satisfied. In this case, the union ⋃{{ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜}| C≥ 1} consists of the elements of Ω that belong to the affine hull of the compact set K. 
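For a concrete one-dimensional illustration of the last remark, consider Ω=(0,1)⊂ℝ, let 𝒜 consist of the restrictions to Ω of all affine functions and let K=[1/4,3/4]. Writing a(t)=α t+β, we have sup|a|(K)≥|a(1/2)| and sup|a|(K)≥|a(3/4)-a(1/4)|/2=|α|/4, so that |a(ω)|≤|a(1/2)|+|α|/2≤ 3 sup|a|(K) for every ω∈Ω. Hence already for C=3 the set {ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜} is all of Ω, which is not compact, whereas clConv_ℱK'=[min K',max K'] is compact for every compact K'⊂Ω, so that Ω is ℱ-convex; the union over C≥ 1 is Ω, i.e. the part of the affine hull of K contained in Ω.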
Let 𝒜 be an algebra or a linear space consisting of the real-parts of a complex algebra ℬ of functions on Ω, closed in τ_𝒦 and containing constants. Let ℱ be the complete lattice cone generated by 𝒜. Then for any compact set K⊂Ω and any C≥ 1 clConv_ℱK={ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜}. Moreover, if 𝒜 consists of the real-parts of a complex algebra ℬ, then the above sets coincide with {ω∈Ω| |b(ω)|≤ C sup|b|(K) for all b∈ℬ}. We shall prove the proposition in the case when 𝒜 consists of the real-parts of a complex algebra ℬ. The proof of the other case is simpler. Let R={ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜} and S= {ω∈Ω| |b(ω)|≤ C sup|b|(K) for all b∈ℬ}. Clearly, clConv_ℱK⊂ R. Let ω∈ R. Let b∈ℬ. Take θ∈ℂ of modulus one such that θ b(ω)=|b(ω)|. As ω∈ R and ℜ𝔢(θ b)∈𝒜, we see that |b(ω)|=ℜ𝔢(θ b)(ω)≤ C sup|ℜ𝔢(θ b)|(K)≤ C sup|b|(K). This shows that R⊂ S. Let now ω∈ S. As ℬ is an algebra, for any b∈ℬ and k∈ℕ, b^k belongs to ℬ, so that |b(ω)|^k=|b^k(ω)|≤ C sup|b^k|(K)≤ C (sup|b|(K))^k. It follows that |b(ω)|≤ C^1/k sup|b|(K). Letting k tend to infinity we get that |b(ω)|≤sup|b|(K). Since ℬ is closed in τ_𝒦, for any b∈ℬ, exp(b)∈ℬ. Applying the above to exp(b) and taking logarithms, we see that ℜ𝔢 b(ω)≤sup ℜ𝔢 b(K). This shows that ω∈clConv_ℱK and concludes the proof. Suppose that ℱ is a convex cone of continuous functions on Ω that contains constants. Then for any compact set K⊂Ω, {ω∈Ω| f(ω)≤sup|f|(K) for all f∈ℱ}={ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ}. Indeed, as all functions in ℱ are continuous, if f∈ℱ, then also f-inf f(K)∈ℱ. Thus if ω belongs to the set on the left-hand side of the above equality, then for all f∈ℱ, f(ω)-inf f(K)≤sup f(K)-inf f(K), so that ω belongs also to the set on the right-hand side of the above equality. The converse inclusion is trivial. Suppose that Ω is a set and ℱ is a convex cone of functions on Ω. We shall say that Ω is ℱ-convex whenever for any compact set K⊂Ω the set {ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ} is a compact subset of Ω. Suppose that Ω is a set and 𝒜 is a linear space of functions on Ω. We shall say that Ω is 𝒜-complete whenever for any compact set K⊂Ω and any C≥ 1 the set {ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜} is a compact subset of Ω. Theorem <ref>, under its assumptions, shows that Ω is 𝒜-complete if and only if it is complete with respect to 𝒜. Under the assumptions of Theorem <ref> and the assumptions of Proposition <ref>, we see that ℱ-convexity of Ω and 𝒜-completeness of Ω are equivalent, whenever ℱ is the complete lattice cone generated by 𝒜. Note that ℱ-convexity has a different meaning when applied to the entire space Ω and when applied to a subset of Ω. § PROOFS §.§ Proof of Theorem <ref> We shall now prove Theorem <ref>. We begin with the necessary lemmata. Let Ω be a set and let ℱ be a cone of functions on Ω. We say that p:Ω→ [0,∞) is an exhaustion function whenever it belongs to ℱ and is proper. Let Ω be a topological space. Let 𝒜 be a linear space of continuous functions on Ω that contains constants. Let ℱ be the complete lattice cone generated by 𝒜. Suppose that Ω is exhaustible by compactae and that for any compact set K⊂Ω and C≥ 1, the set {ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜} is compact. Then there exists a continuous and non-negative exhaustion function p∈ℱ. Moreover, one may take p such that there exists a symmetric family (a_α)_α∈ A of functions in 𝒜 such that p=sup{a_α|α∈ A} and, for any compact set K, p is on K a maximum of a finite number of the functions a_α. We shall first choose a sequence of compactae (K_i)_i=1^∞, such that for each i=1,2,…, {ω∈Ω| |a(ω)|≤ 4^i sup|a|(K_i) for all a∈𝒜}⊂intK_i+1. 
Such a sequence exists by the assumption that Ω is exhaustible by compactae and by the assumption that for each C≥ 1 and for each compact set K⊂Ω the set {ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜} is compact. We shall construct non-negative, continuous functions p_i∈ℱ such that K_i⊂ p_i^-1((-∞, 1/2^i]) and K_i+3∖ K_i+2⊂ p_i^-1([ 2^i,∞)) for i=1,2,…. To this end, observe that the compact set K_i+3∖intK_i+2 is covered by the family of open sets {ω∈Ω| |a(ω)|> 4^i sup|a|(K_i)} with a∈𝒜. By compactness we may pick a finite subcover and the corresponding functions a_1,…,a_k_i∈𝒜. Let p_i=max{|a_l|/(2^i‖a_l‖_𝒞(K_i))| l=1,…,k_i}∨ 0. Then p_i∈ℱ fulfils our requirements. Now, let p=max{p_i| i=1,2,…}. Then p belongs to ℱ. For i=1,2,… K_i⊂ p_j^-1((-∞, 1/2^i]) for j≥ i and K_i+3∖ K_i+2⊂ p^-1([2^i,∞)). This shows that p is proper. Moreover, we see that p=max{p_1,…,p_i} on K_i, which immediately implies that p is continuous. Since each p_i is a supremum of a symmetric family of functions in 𝒜, so is p. To see that p on any compact set is a maximum of a finite family of functions in 𝒜, it suffices to notice that each p_i is such a maximum, for i=1,2,…. This completes the proof. By Lemma <ref> and Remark <ref> it follows that <ref> implies <ref>. Clearly, <ref> follows from <ref>. Suppose that <ref> holds true. Let p:Ω→ [0,∞) belong to ℱ. Then by Lemma <ref> for any t≥ 0 the set p^-1([0,t]) is ℱ-convex and compact. Suppose that K⊂Ω is compact. By the assumption, p is bounded on compact sets, so there is some t≥ 0 such that K⊂ p^-1([0,t]). Let C≥ 1. Consider the set S={ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜}. We claim that S⊂ p^-1([0,Ct]). Indeed, by the assumption on p∈ℱ, there is a symmetric family (a_α)_α∈ A of elements of 𝒜 such that p=sup{ a_α|α∈ A}. Then K⊂ p^-1([0,t]) implies that for α∈ A, sup|a_α|(K)≤ t. Let ω∈ S. Then we see p(ω)=sup{a_α(ω)|α∈ A}≤ C sup{sup|a_α|(K)|α∈ A}≤ Ct. This proves the claim and shows, due to the properness of p, that <ref> is satisfied. Let us observe that since p is bounded on compactae and on any compact set is a maximum of a finite family of functions in 𝒜, <ref> follows. Indeed, one can take P_i=p^-1([0,i]). These sets are compact, symmetric 𝒜-polygons which exhaust Ω. Moreover, as p is non-negative and continuous, P_i=p^-1([0,i])⊂ p^-1((-∞, i+1))⊂intP_i+1 for i=1,2,…. This is to say, <ref> is satisfied. Clearly, it immediately implies <ref>. We shall now show that these conditions are equivalent to Ω being complete with respect to 𝒜, if 𝒜 is closed in τ_𝒦 and Ω is separable. Let us assume that <ref> is satisfied. Let us pick a Cauchy net (ω_α)_α∈ A of elements of Ω. Suppose that the net is not convergent. Since Ω is exhaustible by compactae, we may pick a subsequence (ω_n)_n=1^∞ that is not convergent. The subsequence induces a sequence (Φ(ω_n))_n=1^∞ of linear, continuous functionals on 𝒜, which is pointwise bounded. Note that 𝒜 is complete and metrisable in τ_𝒦, according to Remark <ref>. Therefore, by the Banach–Steinhaus uniform boundedness principle we can find a compact set K⊂Ω and a constant C>0 such that for all a∈𝒜 and all n=1,2,3,… |a(ω_n)|≤ C‖a‖_𝒞(K). That is, the elements of (ω_n)_n=1^∞ belong to the set {ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜}, which is compact by <ref>. It follows that (ω_n)_n=1^∞ converges, contrary to the assumption. We have shown that Ω is complete with respect to 𝒜. Suppose now conversely that Ω is complete. Equivalently, Φ(Ω) is closed, see Lemma <ref>. Recall that by Lemma <ref>, <ref>, the map Φ:Ω→𝒜^* is a homeomorphism onto its image, when 𝒜^* is equipped with the weak* topology. 
Let K⊂Ω be compact and let C∈ℝ. Then the set R={a^*∈𝒜^*| |a^*(a)|≤ C sup|a|(K) for all a∈𝒜} is a compact subset of 𝒜^*, by the Banach–Alaoglu theorem. Now {ω∈Ω| |a(ω)|≤ C sup|a|(K) for all a∈𝒜}= Φ^-1(R∩Φ(Ω)) is therefore compact, as Φ(Ω) is closed and Φ is a homeomorphism onto its image. This shows that <ref> holds true. Let us now assume that Ω is separable, that 𝒜 is τ_𝒦-closed and is an algebra or consists of real-parts of a complex algebra. Let us assume that <ref> is satisfied. We shall show that Ω is an 𝒜-space. Let K_1,K_2,… be compact subsets of Ω such that ⋃_i=1^∞K_i=Ω and K_i⊂intK_i+1 for i=1,2,…. Taking their ℱ-convex hulls, and relabelling if necessary, we may moreover assume that these sets are ℱ-convex. We may also assume that none of these compact sets is equal to Ω. Otherwise, the claim is trivial. By the assumption Ω is separable and therefore it is also metrisable, see Remark <ref>. Let Ω̅ denote the completion of Ω. Then Ω̅ is separable, as a metric space with the dense separable subset Ω. Being a separable metric space, it admits a countable basis (U_i)_i=1^∞ of neighbourhoods of points in Ω̅∖Ω. Let us take a function k:ℕ→ℕ with the property that each element of ℕ appears infinitely often in its image. As the sets (K_j)_j=1^∞ are compact, for any j=1,2,… there exists an element ω_j∈ U_k(j)∩Ω∖ K_j. Relabelling the sets (K_j)_j=1^∞ if necessary, we may assume that ω_j∈ K_j+1. By Proposition <ref> there exists a_j∈𝒜 such that sup|a_j|(K_j)≤ 2^-j and a_j(ω_j)>∑_i=1^j-1|a_i(ω_j)|+j. Let us set a=∑_j=1^∞a_j. Then the series converges in τ_𝒦, so that a∈𝒜. Moreover, for each j=1,2,… a(ω_j)≥ a_j(ω_j)-∑_i=1^j-1|a_i(ω_j)|-∑_i=j+1^∞|a_i(ω_j)|> j-2^-j. We shall now show that a verifies the fact that Ω is an 𝒜-space. We shall show that it does not extend to a continuous function on Ω∪{ω} for any ω∈Ω̅∖Ω. Suppose on the contrary that it does extend to ω∈Ω̅∖Ω. By (<ref>), a takes arbitrarily large values on any neighbourhood of ω. If it extended continuously to ω, it would have to be infinite at ω, which is impossible. As already observed, the fact that Ω is an 𝒜-space immediately implies that Ω is complete with respect to 𝒜. Alternatively, we can prove that completeness of Ω implies that Ω is ℱ-convex in the following way. Proposition <ref> tells us that for a compact set K⊂Ω clConv_ℱK=Φ^-1(clConv_GΦ(K)∩Φ(Ω)). The Mazur theorem and Lemma <ref> imply that clConv_GΦ(K) is compact. By Lemma <ref>, Φ(Ω) is closed in the weak* topology on 𝒜^* and Φ is a homeomorphism onto its image, so that clConv_ℱK is a compact subset of Ω. Let us recall the following theorem of Krein and Šmulian, <cit.>. Below we shall consider the bounded weak* topology on the dual space H^* of a normed space H. A set U⊂ H^* is open in the bounded weak* topology provided that the intersection of U with any closed ball in H^* is relatively weakly* open. Let H be a Banach space. Then a convex set Z⊂ H^* is closed in the weak* topology if and only if it is closed in the bounded weak* topology. Let us observe an analogy between the Krein–Šmulian theorem and the equivalence of <ref> of Theorem <ref> and closedness of Φ(Ω). If 𝒜 is an algebra, or consists of real-parts of a complex algebra, that is closed in τ_𝒦, then there is an alternative proof of the fact that <ref> of Theorem <ref> implies completeness of Ω, analogous to the proof in <cit.>. It relies on the following proposition. Let Ω be a locally compact, σ-compact Hausdorff topological space. Suppose that 𝒜 is an algebra of functions, or consists of real-parts of a complex algebra, that is closed under τ_𝒦. 
Suppose that Ω is ℱ-convex. Then for any net (ω_α)_α∈ A with no accumulation point in Ω there exist a function a∈𝒜 and a subsequence (ω_j)_j=1^∞ of (ω_α)_α∈ A such that lim_j→∞a(ω_j)=∞. We shall assume that 𝒜 is an algebra. If 𝒜 consists of real-parts of a complex algebra, only minor modifications are needed. Let us pick a net (ω_α)_α∈ A of elements of Ω with no accumulation point in Ω. Thus, for any compact K⊂Ω, only finitely many elements of the net belong to K. We shall construct a sequence of compact sets (K_i)_i=1^∞, for which ⋃_i=1^∞K_i=Ω and K_i⊂intK_i+1 for i=1,2,…. Moreover, we shall find functions (a_i)_i=1^∞ in 𝒜 and a subsequence (ω_j)_j=1^∞, such that for all i=1,2,… ω_i∈ K_i+1, sup|a_i|(K_i)≤ 2^-i and a_i(ω_i)>i+∑_j=1^i-1|a_j(ω_i)|. We shall proceed inductively. Since Ω is exhaustible by compactae, there is a sequence of compact sets (L_i)_i=1^∞ for which ⋃_i=1^∞L_i=Ω, L_i⊂intL_i+1 for i=1,2,…. Suppose that the functions a_1,…,a_k∈𝒜, the elements ω_1,…,ω_k and the sets K_1,…,K_k have been chosen. There is an index j_k≥ k such that K_k∪{ω_1,…,ω_k}⊂intL_j_k. By the assumption, the set K_k+1={ω∈Ω| |a(ω)|≤sup|a|(L_j_k) for all a∈𝒜} is compact. Since (ω_α)_α∈ A has no accumulation point, there is an element ω_k+1 of the net that does not belong to K_k+1. Thus, there also exists a_k+1'∈𝒜 for which |a_k+1'(ω_k+1)|>sup|a_k+1'|(K_k+1). Normalising and taking a sufficiently high even power of a_k+1' yields a function a_k+1 that we were after. Since j_k≥ k, we see that ⋃_i=1^∞K_i=Ω. We now define a=∑_i=1^∞a_i. Since sup|a_i|(K_j)≤ 2^-i for i≥ j, we see that the series converges in τ_𝒦, and therefore, by the assumption, a∈𝒜. However, for j=1,2,… we have a(ω_j)≥ a_j(ω_j)-∑_i=1^j-1|a_i(ω_j)|-∑_i=j+1^∞|a_i(ω_j)|≥ j-2^-j, as ω_j∈ K_i for i≥ j+1, so that |a_i(ω_j)|≤ 2^-i. This shows that lim_j→∞ a(ω_j)=∞. The proposition above immediately shows that, under its assumptions, <ref> of Theorem <ref> implies completeness of Ω. Indeed, by the proposition a Cauchy net must have an accumulation point, and any Cauchy net (ω_α)_α∈ A with an accumulation point is convergent. §.§ Proof of Theorem <ref> Let Ω be a topological space. Let ℱ be a cone of continuous functions on Ω that contains constants and satisfies the maximum principle. Let 𝒢 be the complete lattice cone generated by ℱ. Let K_1,K_2,K_3,K_4 be compact and ℱ-convex subsets of Ω for which K_1⊊intK_2, K_2⊂intK_3 and K_3⊂ K_4. Then there exists a non-negative, continuous function p∈𝒢 such that K_1⊂ p^-1({0}) and K_2⊂ K_4∩ p^-1((-∞,1])⊂ K_3. As K_1,K_2 are compact and ℱ-convex we see that for i=1,2 K_i={ω∈Ω| f(ω)≤sup f(K_i) for all f∈ℱ}. Moreover, K_4∖intK_3 is a compact set, covered by the collection of open sets {ω∈Ω| f(ω)> sup f(K_2)}, with f∈ℱ, so it is likewise covered by a finite subcollection of such open sets. Let f_1,…,f_k be the corresponding functions in ℱ. Since ℱ satisfies the maximum principle and K_1⊊intK_2, we may assume that for i=1,…,k we have sup f_i(K_1)<sup f_i(K_2). Then {ω∈ K_4| (f_i(ω)-sup f_i(K_1))/(sup f_i(K_2)-sup f_i(K_1))≤ 1 for i=1,…,k}⊂int K_3. It is readily verifiable that p=max{(f_i-sup f_i(K_1))/(sup f_i(K_2)-sup f_i(K_1))| i=1,…,k}∨ 0 satisfies our requirements. Let Ω be a topological space. Let ℱ be a cone of continuous functions on Ω that contains constants and satisfies the maximum principle. Let 𝒢 be the complete lattice cone generated by ℱ. Let K_1,K_2,K_3,… be compact and ℱ-convex subsets of Ω for which K_i⊊intK_i+1 for i=1,2,… and ⋃_i=1^∞K_i=Ω. Then there exists a continuous, proper and non-negative function p∈𝒢 such that, on any compact set K, p is a maximum of a finite number of functions in ℱ. 
For i=1,2,… let p_i∈𝒢 be a non-negative, continuous function yielded by Lemma <ref> and corresponding to the sets K_i,K_i+1,K_i+2,K_i+3. Let p=sup{ip_i| i=1,2,…}. Observe that p is non-negative and continuous. Indeed, if ω∈ K_j for some j=1,2,… then p_i(ω)=0 for i≥ j. Thus, Ω is covered by open sets, on each of which p is a maximum of a finite number of non-negative and continuous functions. We shall show that p is proper. By the construction of Lemma <ref>, p_i is at least one on K_i+3∖ K_i+2. Thus, for i=1,2,… p≥ i on Ω∖ K_i+2. This shows that for non-negative t∈ℝ we have {ω∈Ω| p(ω)≤ t}⊂{ω∈Ω| p(ω)≤⌈ t⌉}⊂ K_⌈ t⌉+3. By continuity of p we see that preimages of compactae are compact, i.e., p is proper. This completes the proof. Similarly to the argument in the proof of Theorem <ref>, <ref> implies <ref>. Suppose now that <ref> is satisfied. If Ω is compact there is nothing to prove. Let us suppose that Ω is not compact. Let (K_i)_i=1^∞ be a sequence of compact, exhausting subsets of Ω, which exist by Remark <ref>. By <ref>, the sets (clConv_ℱK_i)_i=1^∞ are compact and ⋃_i=1^∞clConv_ℱK_i=Ω. Since intK_i⊂intclConv_ℱK_i for i=1,2,…, we see that the sets (clConv_ℱK_i)_i=1^∞ have non-empty interiors that cover Ω. Relabelling the sets if necessary, we may therefore assume that clConv_ℱK_i⊊intclConv_ℱK_i+1 for i=1,2,…. Indeed, if for some non-empty compact set K we had K=intK, then, due to the assumption of connectedness, we would see that K=Ω, contrary to the assumption that Ω is not compact. Now, Lemma <ref> shows that <ref> holds true. Trivially, <ref> implies <ref>. The proof that <ref> is equivalent to the other conditions follows along similar lines to the proof of the analogous equivalence of Theorem <ref>. As Ω is σ-compact, it has at most countably many connected components. Indeed, any compact subset of Ω is covered by at most finitely many of the components of Ω. We shall prove that <ref> implies <ref>. The other implications follow as in the proof of Theorem <ref>. Let (U_i)_i=1^∞ be the family of connected components of Ω. For each i=1,2,…, let ℱ_i be the cone of restrictions of elements of ℱ to U_i. Thanks to Theorem <ref>, for i=1,2,… there exists p_i∈ℱ_i which is a proper, non-negative, continuous function and is locally a maximum of a finite family of functions in ℱ_i. Let us set p=p_i+i on U_i, i=1,2,…. Then, by the assumption that ℱ is local, p∈ℱ. It is readily verifiable that p satisfies our requirements.
http://arxiv.org/abs/2307.05794v1
20230711204031
Machine Learning Study of the Extended Drug-target Interaction Network informed by Pain Related Voltage-Gated Sodium Channels
[ "Long Chen", "Jian Jiang", "Bozheng Dou", "Hongsong Feng", "Jie Liu", "Yueying Zhu", "Bengong Zhang", "Tianshou Zhou", "Guo-Wei Wei" ]
q-bio.BM
[ "q-bio.BM", "cs.LG" ]
1]Long Chen 1,2,*]Jian Jiang 1]Bozheng Dou 2]Hongsong Feng 1]Jie Liu 1]Yueying Zhu 1]Bengong Zhang 3]Tianshou Zhou 2,4,5,+]Guo-Wei Wei [1]Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan, 430200, P R. China [2]Department of Mathematics, Michigan State University, East Lansing, Michigan 48824, USA [3]Key Laboratory of Computational Mathematics, Guangdong Province, and School of Mathematics, Sun Yat-sen University, Guangzhou, 510006, P R. China [4]Department of Electrical and Computer Engineering Michigan State University, East Lansing, Michigan 48824, USA [5]Department of Biochemistry and Molecular Biology Michigan State University, East Lansing, Michigan 48824, USA [*]Corresponding author: [email protected] [+]Corresponding author: [email protected] Machine Learning Study of the Extended Drug-target Interaction Network informed by Pain Related Voltage-Gated Sodium Channels [ ============================================================================================================================== Pain is a significant global health issue, and the current treatment options for pain management have limitations in terms of effectiveness, side effects, and potential for addiction. There is a pressing need for improved pain treatments and the development of new drugs. Voltage-gated sodium channels, particularly Nav1.3, Nav1.7, Nav1.8, and Nav1.9, play a crucial role in neuronal excitability and are predominantly expressed in the peripheral nervous system. Targeting these channels may provide a means to treat pain while minimizing central and cardiac adverse effects. In this study, we construct protein-protein interaction (PPI) networks based on pain-related sodium channels and develop a corresponding drug-target interaction (DTI) network to identify potential lead compounds for pain management. To ensure reliable machine learning predictions, we carefully select 111 inhibitor datasets from a pool of over 1,000 targets in the PPI network. We employ three distinct machine learning algorithms combined with advanced natural language processing (NLP)-based embeddings, specifically pre-trained transformer and autoencoder representations. Through a systematic screening process, we evaluate the side effects and repurposing potential of over 150,000 drug candidates targeting Nav1.7 and Nav1.8 sodium channels. Additionally, we assess the ADMET (absorption, distribution, metabolism, excretion, and toxicity) properties of these candidates to identify leads with near-optimal characteristics. Our strategy provides an innovative platform for the pharmacological development of pain treatments, offering the potential for improved efficacy and reduced side effects. Keywords: pain management, voltage-gated sodium channels, protein-protein interaction, drug-target interaction, machine learning, virtual drug screen, repurposing, ADMET. § INTRODUCTION Pain is a complex phenomenon and can be categorized in various ways based on different factors, such as acute pain and chronic pain, nociceptive pain and neuropathic pain, etc. 
Pain, with distinct types, has been estimated to occur in probably 35% of the population in the United States, with a higher morbidity rate than cancer and heart disease <cit.>. Pain management is a branch of medicine that utilizes an interdisciplinary approach. Although intensive efforts have been made to design new drugs for pain management over the last decades, almost half of patients with chronic pain show little response to existing analgesic drugs. Hence, there is an urgent need to design new drugs for pain treatment. Voltage-gated sodium channels (Navs) are integral membrane proteins that play a crucial role in the generation and propagation of action potentials in neurons and other excitable cells. These channels are responsible for the rapid influx of sodium ions into the cell, which leads to depolarization and the initiation of an action potential. More specifically, Navs modulate membrane permeability to sodium ions and facilitate important intercellular functions, which are related to a variety of diseases, including chronic pain, cardiac arrhythmia, and others. Particularly, Navs subtypes, such as Nav1.3, Nav1.7, Nav1.8, and Nav1.9, encoded respectively by the genes SCN3A, SCN9A, SCN10A, and SCN11A, present the best opportunities for pain therapeutics, since they almost exclusively distribute in the peripheral nervous system and highly express in sympathetic ganglia, olfactory epithelium, and dorsal root ganglion sensory neurons <cit.>. Nav1.3, originally termed sodium channel III, was cloned and sequenced in the 1980s from rat brain tissue <cit.>. In 2006, it was validated to be critical to pain transmission and modulation pathways <cit.>. The expression level of Nav1.7 was found to be related to pain based on animal models. Specifically, gain-of-function and loss-of-function mutations in humans result in extreme pain disorder and insensitivity to pain, respectively <cit.>. This suggests that Nav1.7 plays a vital role in pain generation, making it a hot target for pain treatment in recent years. Nav1.8 is preferentially expressed in peripheral sensory neurons. It has been shown to shape action potentials in these neurons and contribute to pain phenotypes in humans and animal studies. The key role of Nav1.8 in repetitive firing, and its localization in free nerve endings where the response to external stimuli is integrated and action potentials are initiated, indicates that Na_v1.8 can play a strong part in nociception and chronic pain <cit.>. Additionally, recent genetic and functional findings linking Nav1.9 to human pain disorders have suggested that Nav1.9 is a vital contributor to pain in humans, including its pattern of expression, subcellular localization, and modulation <cit.>. Subsequently, many kinds of Nav1.3, Nav1.7, Nav1.8, and Nav1.9 inhibitors have been found for pain treatment, including sulfonamides, guanidium compounds, and cystine knot peptides <cit.>. However, the specific roles of these pain-related Na_vs in the generation and transmission of pain signals remain unknown and are still being actively researched. It is well-known that proteins do not function independently in cells and organisms. Protein-protein interactions (PPIs) play a fundamental role in virtually all biological processes, including DNA replication, transcription, translation, protein folding, intracellular signaling, and metabolism. Therefore, it is crucial to understand the role of Nav-inferred PPI networks in pain generation, management, treatment, and therapeutic development. 
Nav-inferred PPI networks can be used to systematically analyze potential treatment efficacy and side effects. The nodes of a PPI network represent proteins, and the links or edges represent direct or indirect interactions between nodes that contribute to certain biological activities. The String Database v11 (https://string-db.org/) can be utilized to build a PPI network as it provides a large collection of protein-protein interactions for given proteins or diseases. In the study of sodium channels, we can build the PPI networks related to the major sodium channels involved in pain, such as Nav1.3, Nav1.7, Nav1.8, and Nav1.9, based on which we can carry out systematic analysis of medication treatment and side effects. The proteins in these PPI networks are the test targets for treatment or side effects. However, traditional in vivo or in vitro assay tests are highly time-consuming. High-throughput screening in experiments has been employed to find these inhibitors, but it is time-consuming and resource-intensive, making it unsuitable for screening a large collection of drug candidates in drug discovery. Moreover, large-scale experiments on animals raise legal and ethical issues. Hence, Artificial Intelligence (AI), including machine learning (ML) methods, can be employed in this study for large-scale predictions <cit.>. Artificial intelligence drug design (AIDD) has been considered capable of providing accurate computational predictions and speeding up drug development. It offers low costs and the ability to find optimally structured compounds with the help of ML algorithms and large availability of experimental data. Recently, many advanced ML methods have been applied to pain treatment. Lomartire et al. analyzed the data of a large population-representative sample of chronic pain patients and identified future sickness absence that should be considered when adapting interdisciplinary treatment programs to the patient's needs<cit.>. Using machine learning, Miettinen et al. show that sleep as a core factor in chronic pain <cit.>. Machine learning and multiplex in situ hybridization were used to assign transcriptomic class in the trigeminal ganglion <cit.>. Additionally, Robinson et al. used machine classification algorithms to measure the difference between neuroimaging data and self-report in their ability to classify individuals with and without chronic pain<cit.>. Currently, numerous in silico methods have been developed for virtual screening of sodium channel inhibitors. Molecular fingerprint-based characterization of molecules is particularly popular, and classification studies on specific targets often yield good performance based on ligand structure and properties <cit.>. For example, protein-ligand binding models for hERG (human ether-a-go-go potassium channel) have been proposed, achieving good classification results on hERG blockage using the Online Chemical Modeling Environment (OCHEM) <cit.>. Kong et al. <cit.> developed a molecular group optimization method by combining the Grammar Variational Autoencoder, a classification model, and simulated annealing to predict Nav1.7 sodium channel inhibitors. They found that the random forest algorithm with CDK fingerprint performs best in imbalanced data sets. They also employed multiple ML methods to predict Nav1.5 inhibitors <cit.>. Bosselmann et al. 
<cit.> built a multi-task multi-kernel learning framework to improve the prediction of functional effects of missense variants in voltage-gated sodium channels based on phenotypic similarity. Additionally, Herrera et al. <cit.> developed a bioinformatics tool called PEP-PRED^Na+ for highly specific prediction of voltage-gated sodium channel blocking peptides. This tool is helpful in accelerating and reducing the costs of designing new sodium channel blocking peptides with therapeutic potential. More studies on voltage-gated sodium channels can be found in review papers <cit.>, which describe recent progress and future opportunities for developing sodium channel-targeting small molecules and peptides as non-addictive therapeutics to treat pain. However, these studies primarily focus on individual sodium channels and lack consideration of drug-target interaction networks, as well as comprehensive ADMET (absorption, distribution, metabolism, excretion, and toxicity) analysis. Pain management is not limited to sodium channels and related inhibitors. Opioids, also known as narcotics, have been used for centuries in the treatment of pain. The use of opioids arose partially from the need to treat severe injuries sustained in warfare. Opioids can bind to opioid receptors, such as mu, kappa, and delta, on nerve cells in the brain, spinal cord, and other parts of the human body, blocking pain messages from reaching the brain, either from the body or the spinal cord. However, evidence suggests that long-term opioid use carries an increased risk of opioid use disorder (OUD) and opioid overdose, as well as various other adverse side effects, such as sleepiness, constipation, and nausea <cit.>. The risk of addiction is particularly high when opioids are used for long periods to manage chronic pain. Recently, more attention has been focused on the treatment of OUD and drug addiction issues. Feng et al. constructed an extended drug-target interaction (DTI) network based on the four major opioid receptors <cit.>. They developed advanced machine learning predictors to study the screening and repurposing potential of tens of thousands of compounds in the opioid DTI network for OUD management. The results of this work were used to analyze the repurposing potential of thousands of DrugBank compounds and evaluate their ADMET properties <cit.>. Additionally, Zhu et al. built a topology-inferred drug addiction learning (TIDAL) model to analyze the opioid DTI network and address the problem of drug addiction <cit.>. Although opioid-based medications have been successfully used to treat acute postsurgical and postprocedural pain, their high risks and side effects have raised significant concerns. There is a need for safer and more effective drugs for pain treatment. In the present work, we construct an extended drug-target interaction (DTI) network informed by pain-related voltage-gated sodium channels, namely Nav1.3, Nav1.7, Nav1.8, and Nav1.9. We develop advanced machine learning (ML) models using natural language processing (NLP) tools, such as autoencoders and transformers, to study this DTI network. Firstly, we build protein-protein interaction (PPI) networks of the four pain-related sodium channels from the String Database v11. This results in hundreds of related proteins that are considered potential side effect targets in our study. 
Secondly, we collect inhibitor datasets with experimental binding affinity labels from the CHEMBL database for these PPI targets, creating an extended DTI network with hundreds of targets and hundreds of thousands of drug candidates. Thirdly, to generate our ML models, we embed the inhibitor compounds using two NLP models: a transformer and an autoencoder. The resulting latent feature vectors are combined with gradient boosting decision tree (GBDT), support vector machine (SVM), and random forest (RF) algorithms to build binding affinity (BA) prediction models. Fourthly, we perform cross-predictions to screen side effects and repurposing potentials of over 150,000 compounds. Through these models, we evaluate the side effects of FDA-approved drugs or other existing medications and search for promising lead compounds. Finally, in order to identify lead compounds, we also assess the pharmacokinetic properties in compound filtering, such as absorption, distribution, metabolism, excretion, toxicity (ADMET), and synthesizability. These steps are illustrated in Fig.<ref>. Our study of the extended DTI network provides an innovative strategy for analyzing pain management and developing therapeutics. § RESULTS §.§ Pain related voltage-gated sodium channel informed drug-target interaction (DTI) networks Voltage-gated sodium channels (VGSCs), which consist of a family of nine distinct proteins or genes (Nav1.1-1.9), exhibit different pharmacological properties. Specifically, the proteins Nav1.3, Nav1.7, Nav1.8, and Na_v1.9 are involved in neuropathic pain and are associated with both human Mendelian pain disorders and common pain disorders such as small-fiber neuropathy <cit.>. These four VGSC proteins play a role in modulating different types of pain, offering potential for the development of specific sodium channel inhibiting agents for chronic pain treatment. Functionally, Nav1.7 is classified as tetrodotoxin-sensitive (TTX-S), while Nav1.8 and Nav1.9 are considered tetrodotoxin-resistant (TTX-R). Anatomically, these proteins exhibit broad and distinct expression patterns across neuronal and smooth muscle cells throughout the body, as well as in cells of the immune system where they participate in migration and phagocytosis <cit.>. Traditionally, Nav1.3 is primarily expressed in the brain and spinal cord, while Nav1.7, Nav1.8, and Na_v1.9 tend to be expressed in the peripheral nervous system. Furthermore, these channels are regulated by a variety of enzymes and structural proteins, such as kinases, auxiliary β-subunits, and ubiquitin-protein ligases, which collectively influence sodium channel biophysical properties and expression <cit.>. Pain-related VGSCs are widely distributed throughout the body, and their interactions with various upstream and downstream proteins play a crucial role in specific biological functions. To analyze these interactions, we constructed protein-protein interaction (PPI) networks centered around each of the four pain-related VGSCs. The gene names SCN3A, SCN9A, SCN10A, and SCN11A were used as inputs to the String database to extract the corresponding PPI networks. The resulting networks, shown in the top left panel of Fig.<ref>, represent direct and indirect interactions between proteins and each pain-related VGSC. Each PPI network contains 401 proteins, focusing on critical interactions rather than considering a larger number of proteins. It is important to note that there is some overlap between the networks, indicating interdependencies among the VGSCs. 
Considering that compounds that act as agonists or antagonists on pain-related VGSCs can influence their pharmacological behavior in pain treatment, we aimed to identify additional compounds that bind to these VGSCs. To evaluate the binding effects of inhibitors on VGSCs and other proteins in the PPI networks, we searched and collected inhibitor compounds from the Chembl database for each protein. This process resulted in an extended drug-target interaction (DTI) network, encompassing 111 targets or related datasets and a total of 150,147 inhibitor compounds, which is illustrated in the top right panel of Fig.<ref>. The protein names of these 111 datasets are listed in Table S2 in the Supporting Information, and additional details about the collected datasets can be found in Table S3 in the Supporting Information. §.§ Binding affinity predictions for the extended DTI network Using autoencoder and transformer embeddings, we developed 111 ML models for all 111 targets and 150,147 compounds in the extended DTI network. The cross-target binding affinity (BA) predictions were carried out using these 111 ML models, and the results are presented in Fig.<ref>. The diagonal elements of the heatmap represent the Pearson correlation coefficient (R) obtained from ten-fold cross-validation for each ML model. The mean, maximum, and minimum values of R across the models are 0.77, 0.93, and 0.25, respectively. Notably, 53 models achieved R values greater than 0.8, indicating high predictive performance. Furthermore, the root mean square error (RMSE) values of these models, as shown in Table S3 in the Supporting Information, range from 0.43 to 1.15 kcal/mol. These values fall within a reasonable range, suggesting that the ML models exhibit excellent prediction accuracy and reliable performance for binding affinity predictions. §.§.§ Cross-target binding affinity predictions for the extended DTI network In this section, we conduct an analysis of compound cross-target interactions to estimate their side effects on other proteins in the protein-protein interaction (PPI) network, providing a better understanding of the extended drug-target interaction (DTI) network. The off-diagonal elements of the heatmap in Fig.<ref> represent the maximum binding affinity (BA) values (i.e., BA with the largest absolute values) of inhibitor compounds from one dataset predicted by other ML models. The labels on the left side of the heatmap correspond to the 111 inhibitor datasets, while the labels on the top of the heatmap correspond to all the 111 ML models. Each column in the heatmap represents the predictions made by a specific model. For instance, the i-th element in the j-th column indicates the prediction result of the i-th dataset by the j-th model. These cross-target prediction results serve as indicators of the potential side effects of one inhibitor dataset on other proteins. In our analysis, we use an inhibition threshold value of -9.54 kcal/mol (K_i=0.1 μ M) for the BA values<cit.>. If a compound has a BA value below this threshold, it is considered active in terms of its biological function. Otherwise, it is classified as an inactive compound. According to our analysis, out of the 12,210 cross-predictions, 9,262 were found to exhibit side effects based on this threshold value, as their predicted maximal BA values were below -9.54 kcal/mol. Additionally, the remaining 2,948 cross-prediction results showed weak side effects, as their maximal BA values exceeded -9.54 kcal/mol. 
The color of the off-diagonal elements in the heatmap indicates the strength of the side effects, with closer proximity to green representing stronger side effects, and closer proximity to yellow indicating weaker side effects. It is worth noting that in Fig.<ref>, several yellow vertical lines can be observed, suggesting very slight predicted side effects on these proteins. This could be due to the majority of collected experimental BA labels being larger than -9.54 kcal/mol, which limits the predictive power of the ML models in such cases. The reasons for side effects caused by drug candidates targeting a specific protein are often complex, and one possible factor is the presence of similar binding sites on off-target proteins. Proteins within the same family often share similar structures or sequences, leading to the existence of comparable binding sites. As a result, an inhibitor compound that is effective against one protein may also bind to another protein within the same family, giving rise to mutual side effects. As observed in Fig.<ref>, mutual side effects occur among the three targets CAMK2A, CAMK2B, and CAMK2D, which belong to the calmodulin-dependent protein kinase II (CAMK2) family and share similar 3D structural conformations or 2D sequences. This observation is further supported by the alignments of their 3D structures and 2D sequences, as shown in Fig. S1 of the Supporting Information. We can identify more examples of mutual side effects among proteins within the same family. For instance, the fibroblast growth factor target (FGFR) family, which includes FGFR1, FGFR2, FGFR3, and FGFR4, as well as the mitogen-activated protein kinase (MARK) family, which comprises MARK2, MARK3, MARK8, MARK9, and MARK10, exhibit mutual side effects. These examples illustrate the occurrence of mutual side effects among proteins in the same family, emphasizing the importance of considering family-wide effects in drug development and analysis. §.§.§ Predictions of side effects and repurposing potentials for the extended DTI network Side effects occur when a drug candidate exhibits strong binding affinity to the intended target but inadvertently affects other proteins as potential off-target inhibitors. These side effects can be identified through cross-target predictions, as illustrated in Fig.<ref>a, for the extended DTI network. Each panel in the figure represents a specific target protein and two corresponding off-target proteins, indicated by the panel title, x-axis, and y-axis, respectively. The scattered points in the plot are color-coded based on the experimental binding affinities (BAs) of the inhibitors for the target protein. Red and green colors represent high and low binding affinities, respectively. The x-axis and y-axis values represent the predicted BAs obtained from two machine learning (ML) models constructed using inhibitor datasets for the two off-target proteins. The blue frames in the nine panels of Fig.<ref>a indicate regions where no side effects are predicted on the two off-target proteins. The three rows of the figure represent different scenarios for inhibitors targeting a specific protein, showing the presence of side effects on zero, one, or both of the given off-target proteins. For instance, in the first panel of the first row, all inhibitors for protein SCN9A are predicted to have weak inhibitory effects, with binding affinity (BA) values greater than -9.54 kcal/mol, on the two off-target proteins. 
In the first panel of the second row, approximately half of the inhibitors for protein CNR2 are predicted to exhibit strong binding affinity to the MTOR protein, while none of the inhibitors are predicted to bind to the SLC1A3 protein. Furthermore, in the second panel of the third row, most inhibitors of protein CNR1 are predicted to efficiently bind to both the TGFBR1 and TRPV1 proteins simultaneously. The repurposing potential of inhibitors can also be determined through cross-target predictions. Drug candidates that exhibit weak binding affinity to their designated targets but potent inhibition of other proteins are defined to possess repurposing potential. Fig.<ref>b displays six prediction cases of repurposing identified by our models. In the yellow frames, the inhibitors for the target protein exhibit strong binding to one protein (i.e., predicted BAs less than -9.54 kcal/mol), but weak binding to the other protein (i.e., predicted BAs greater than -9.54 kcal/mol). For example, in the first panel of the first row in Fig.<ref>b, many inhibitors for protein HRH1 are predicted to have repurposing potential for either SCN9A or SCN10A, but not for the other one. Since both SCN9A and SCN10A are important targets for drug design in pain treatment, it is crucial to identify more drug candidates for these two proteins through the virtual screening process. Carbamazepine, a voltage-dependent Nav1.7 sodium channel (SCN9A) blocker, has undergone a phase I clinical study in humans <cit.>. Our models can be employed to find more inhibitors that can bind to SCN9A, similar to the mechanism of Carbamazepine. The second and third rows in Fig.<ref> depict additional cases where inhibitors for a given protein have repurposing potential for two other proteins. §.§.§ Protein similarity inferred by cross-target correlations in the DTI network As side effects can arise when a drug candidate binds to proteins with similar 3D structures or sequences, the predicted BA values in cross-target BA prediction may exhibit correlation. In other words, correlated predicted BA values can serve as an indication of similar binding sites or 3D protein structures. Fig.<ref>a illustrates a linear correlation between the predicted BAs of inhibitors for PTGS2 on CHRM1 and CHRM2 proteins, with a Pearson correlation coefficient R of up to 0.71. The high correlation is attributed to the high binding site similarity between CHRM1 and CHRM2 proteins, as validated by the alignments of 3D structures and 2D sequences in Fig.<ref>a. The 3D structures of the two proteins were found to be quite similar, and the identity of the 2D binding site sequence reached as high as 63%. Two additional examples can be observed in Fig.<ref>b and c, demonstrating that the predicted BA correlation indicates similar 3D protein structures. The Pearson correlation coefficients are 0.82 and 0.72 for the cases in Fig.<ref>b, corresponding to the predicted BAs for OPRM1 on MARK9 and MARK8, respectively. These alignments of 3D structures and 2D sequences validate the usefulness of cross-prediction in detecting protein similarity. Furthermore, Fig.<ref>c reveals a bilinear correlation relationship, where the predicted BAs of MAPK9 inhibitors not only linearly correlate with MARK8 and MARK10 proteins, but also exhibit a linear correlation with their experimental BA values, as indicated by the color coding. This bilinear relationship is confirmed by the alignment of 3D structures and 2D sequences of the three proteins. 
This result suggests that a potent MAPK9 inhibitor is likely to be a strong binder for both MARK8 and MARK10 proteins simultaneously. The high structural similarities result in a drug-mediated trilinear target relationship. The observed bilinear or trilinear relationship indicates the possibility of developing inhibitors that can bind to multiple targets of major pain proteins simultaneously. §.§ Druggable property screening Evaluation of ADMET is of utmost importance in drug design and discovery. ADMET encompasses several essential attributes that are correlated with the pharmacokinetic study of a compound. A promising drug candidate should not only exhibit potency against the therapeutic target but should also possess favorable ADMET properties. Furthermore, hERG is a crucial potassium ion channel known for its contribution to the electrical activity of the heart. When this channel is blocked by a drug, it can lead to serious side effects on the heart. Therefore, the evaluation of hERG risk is indispensable in drug development and assessment. In this section, we conducted the evaluation of ADMET using six indexes, namely FDAMDD, T_1/2, F_20%, logP, logS, and Caco-2, along with synthetic accessibility (SAS) and hERG risk assessment. FDAMDD represents the FDA maximum recommended daily dose, which aims to avoid toxicity in the human body. The half-life (T_1/2) refers to the time it takes for the concentration of a drug in the body to decrease by half. A value of T_1/2 less than three hours indicates a shorter half-life. F_20% represents the probability of an administered drug reaching systemic circulation with less than 20% of the initial dose. This parameter is important for assessing the effectiveness, bioavailability, therapeutic efficacy, and potential side effects of a drug. LogP refers to the logarithm of the partition coefficient of a compound between a nonpolar solvent and water, providing information about its hydrophobicity. On the other hand, logS represents the logarithm of the aqueous solubility of a compound, which indicates its ability to dissolve in water. Caco-2 is a measure used to estimate the in vivo permeability of oral drugs. It provides valuable information about a drug candidate's interaction with efflux transporters, metabolism, and other factors that influence its absorption. SAS is employed to assess the feasibility of synthesizing a specific compound or molecule, taking into account its structural complexity and the availability of synthetic routes. During the above estimation in the present work, ADMETlab 2.0 (<https://admetmesh.scbdd.com/>) solvers were used for ML predictions and provided a set of optimal ranges for these ADMET properties <cit.>. The SAS assessment was implemented using Rdkit packages <cit.>. The optimal ranges of ADMET properties and SAS are listed in Table <ref>, in which a stricter threshold of -8.18 kcal/mol (K_i = 1 μ M) is applied to exempt hERG side effects. Fig.<ref> illustrates the ADMET screening of five inhibitor datasets, including SCN5A, SCN9A, SCN10A, CNR1, and SRC, that play essential roles in pain treatment. The first row of Fig.<ref> depicts the distributions of FDAMDD and hERG side effects of inhibitors from the five datasets. The blue frames represent the optimal domains of the two properties mentioned above. The colors of the points indicate the experimental BA values for targets. From this screening, all five datasets have sufficient compounds with optimal toxicity and hERG side effects. 
However, for the SCN10A dataset, there are only a few potent inhibitors in the optimal domains. This suggests that ADMET properties and side effects should be taken into account before synthesizing a new compound. The second row of Fig.<ref> displays the screening results on absorption properties: T_1/2 (half-life) and F_20% (bioavailability 20%). It is observed that for all five datasets, the optimal domain of T_1/2 and F_20% occupies only a small fraction of chemical space. This indicates a strict screening process, emphasizing the critical roles of these two properties in physicochemical assessment. The third row of Fig.<ref> illustrates the screening for logP and logS, which are closely related to the distribution of chemicals in the human body. In all five datasets, only a small portion of potent inhibitors is found within the optimal domain, suggesting that a large number of inhibitors are not well absorbed in the human body. The last row of Fig.<ref> presents the screening results for Caco-2 and SAS. These five plots demonstrate that almost all compounds from the five datasets are easy to synthesize, and approximately half of the compounds exhibit good cell permeability. Notably, a significant number of potent inhibitors fall within the optimal domain. § DISCUSSION §.§ Side effect evaluations of existing medications for pain treatment SCN3A, SCN9A, SCN10A, and SCN11A are genes that encode sodium channels in the Navs family. These channels play an important role in the generation and propagation of action potentials in neurons, including those involved in pain signaling. Additionally, it has been found that blocking these channels could reduce pain hypersensitivity. There are several FDA-approved experimental medications available for the treatment of pain, which can be roughly classified into four classes: non-opioid analgesics, nonsteroidal anti-inflammatory drugs (NSAIDs), opioid medications, and others. In this study, we utilized our DTI-based ML models to predict the side effects of these medications. Acetaminophen, commonly known as Tylenol or paracetamol, is a typical over-the-counter non-opioid analgesic used to temporarily relieve mild to moderate pain, such as headaches, muscular aches, backaches, toothaches, and premenstrual and menstrual cramps. It is a weak inhibitor of both cyclooxygenase (COX)-1 and COX-2 in vitro and eases pain by inhibiting the production of prostaglandins, which are chemicals that contribute to pain in the human body. Our BA predictions for acetaminophen on SCN9A and SCN10A are -9.60 kcal/mol and -9.29 kcal/mol, respectively, indicating that acetaminophen is a good binder on SCN9A. Furthermore, the predicted BA value on hERG from our model is -7.39 kcal/mol, which is higher than the hERG side effect threshold of -8.18 kcal/mol, validating the safety profile of acetaminophen on hERG. Our predictions suggest that acetaminophen exhibits the highest inhibitory effect on the LATS2 protein, with a predicted BA value of -11.2 kcal/mol. LATS2 is a protein kinase that plays a significant role in cell growth regulation, apoptosis, and tumor suppression. It is associated with various diseases, including breast cancer, lung cancer, ovarian cancer, neurofibromatosis type 2 (NF2), and cardiovascular diseases. 
Inhibiting the LATS2 protein could lead to serious side effects, which might explain the potential reasons for the high side effects of acetaminophen, such as liver damage, allergic reactions, skin reactions, gastrointestinal issues, blood disorders, and kidney problems. Nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen (Advil, Motrin), and naproxen (Aleve), are commonly used for the treatment of mild to moderate pain accompanied by swelling and inflammation. These medications can inhibit certain enzymes in the human body that are released due to tissue damage. Ibuprofen, a non-selective inhibitor of the enzyme cyclooxygenase (COX), plays a crucial role in the synthesis of prostaglandins through the arachidonic acid pathway. COX facilitates the conversion of arachidonic acid to prostaglandin H2 (PGH2) in the body, which is further transformed into other prostaglandins. By inhibiting COX, ibuprofen reduces the production of prostaglandins in the body, resulting in pain relief. The predicted BA values of ibuprofen for SCN9A and SCN10A are -9.11 and -9.72 kcal/mol, respectively, indicating strong potency of ibuprofen on SCN10A. The predicted BA value for hERG is -7.13 kcal/mol, suggesting a safe hERG-blockade profile. Additionally, ibuprofen is predicted to be a potent inhibitor of LATS2, USP9X, and MTOR, which are the top three proteins with the largest absolute predicted BA values (-11.17, -10.68, -10.46 kcal/mol). Furthermore, the predicted BA value of ibuprofen on TRPM8 is -10.04 kcal/mol, validating its strong binding affinity to TRPM8, a thermosensitive ion channel implicated in pain signaling, particularly in cold-induced pain or cold allodynia. Despite its effectiveness, ibuprofen can cause a number of side effects, including nausea, constipation or diarrhea, and indigestion (dyspepsia). Naproxen, like other NSAIDs such as ibuprofen, inhibits COX, leading to analgesic and anti-inflammatory effects. It is also a potent inhibitor of sodium channels, as validated by the predicted BA values of -9.02 and -9.6 kcal/mol for SCN9A and SCN10A, respectively. The predicted BA value of -6.55 kcal/mol for hERG confirms the safety profile of naproxen on hERG. Our predictions indicate that naproxen may have side effects on other targets, with the top three predicted BA values being -11.35, -11.32, and -11.13 kcal/mol for CSNK2A2, FGFR2, and LATS2, respectively. This aligns with the known fact that naproxen can cause a range of potential side effects, including dizziness, headache, bruising, allergic reactions, and stomach pain <cit.>. Additionally, naproxen demonstrates strong inhibition of TRPM8 with a predicted BA value of -9.97 kcal/mol. Opioids are powerful pain-relieving medications commonly prescribed for moderate to severe pain. Examples of opioid medications include oxycodone (OxyContin, Roxicodone), hydrocodone (Vicodin, Hysingla ER), fentanyl (Actiq, Fentora), and morphine (MS Contin), among others. They function by binding to opioid receptors in the brain, spinal cord, and other parts of the body, thereby reducing the perception of pain. Due to their potential for misuse, addiction, and overdose, these medications are subject to strict prescribing guidelines. Oxycodone, a strong semi-synthetic opioid, is used medically to treat moderate to severe pain. Its mechanism of action involves interacting with opioid receptors in the central nervous system. The predicted BA values of oxycodone for SCN9A and SCN10A are -9.75 and -10.62 kcal/mol, respectively. 
The predicted BA value for hERG is remarkably low at -7.8 kcal/mol, indicating a low potential for hERG side effects. Oxycodone demonstrates strong binding potency to the top three proteins: ROS1, CSNK2A2, and OPRM1, with the largest predicted BA values being -11.77, -11.47, and -11.45 kcal/mol, respectively. Additionally, our predictions suggest that oxycodone can inhibit the TRPA1 (Transient Receptor Potential Ankyrin 1) protein, with a predicted BA value of -10.09 kcal/mol. TRPA1 is a thermosensitive ion channel involved in the detection and transmission of pain signals. It is known for its role in mediating various types of pain, particularly in response to chemical irritants and inflammatory stimuli. Hydrocodone is indicated for the relief of acute pain, sometimes in combination with acetaminophen or ibuprofen. It is also used for the symptomatic treatment of the common cold and allergic rhinitis, often in combination with decongestants, antihistamines, and expectorants. Hydrocodone inhibits pain signaling in both the spinal cord and brain. Its actions in the brain can also lead to euphoria, respiratory depression, and sedation <cit.>. In our predictions, hydrocodone demonstrates good binding affinities for SCN9A and SCN10A, with BA values of -9.72 and -10.56 kcal/mol, respectively. The predicted BA value for hERG is -8.16 kcal/mol, suggesting a low potential for side effects on hERG. Hydrocodone has the potential to cause serious side effects on the top three proteins: ROS1, CSNK2A2, and TACR1, with predicted BA values of -11.98, -11.40, and -11.36 kcal/mol, respectively. Additionally, our findings indicate that hydrocodone is a strong binder to the TRPA1 protein, with a predicted BA value of -9.94 kcal/mol. Some medications prescribed to manage depression and prevent epileptic seizures have been found to relieve chronic pain. Tricyclic antidepressants used in the treatment of chronic pain include amitriptyline and nortriptyline (Pamelor). Anti-seizure medications used for chronic nerve pain include gabapentin (Gralise, Neurontin, Horizant) and pregabalin (Lyrica). Amitriptyline, a tricyclic antidepressant, has been used for decades to treat depression and has been investigated for its analgesic properties in pain-related conditions <cit.>. Our predicted BA values for SCN9A and SCN10A are -9.74 and -10.04 kcal/mol, respectively, validating the potency of amitriptyline in pain treatment according to our predictions. The predicted BA value of amitriptyline on hERG is -8.25 kcal/mol, indicating a potential side effect on hERG. The three strongest predicted BA values are for LATS2, HRH1, and KCNA3 proteins, with values of -11.08, -11.01, and -10.61 kcal/mol, respectively. Gabapentin, a structural analogue of the inhibitory neurotransmitter gamma-aminobutyric acid (GABA), was originally developed as an anti-epileptic medication. It is now widely used to treat neuropathic pain <cit.>. Our predictions suggest that gabapentin has the potential to inhibit SCN9A and SCN10A, with BA values of -9.0 and -9.35 kcal/mol, respectively. Moreover, gabapentin is predicted to have no side effects on hERG, with a BA value of -6.85 kcal/mol. In addition, our predictions show that the three strongest predicted BA values are for LATS2, KCNA3, and FGFR2, with values of -10.94, -10.61, and -10.6 kcal/mol, respectively. 
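The drug-by-drug evaluations above all follow the same recipe: predict the binding affinity of a medication on every protein in the DTI network, check the hERG prediction against the -8.18 kcal/mol safety threshold, and inspect the strongest predicted off-target affinities. The sketch below illustrates that bookkeeping for a single drug; the dictionary of predicted BAs stands in for the output of the consensus machine-learning models described in the Methods, and the helper name is ours rather than part of the original pipeline.

```python
HERG_SAFE_BA = -8.18  # kcal/mol; predicted hERG BA above this is treated as safe

def side_effect_profile(drug_name, predicted_ba, pain_targets=("SCN9A", "SCN10A"), top_n=3):
    """Summarize on-target potency, hERG safety, and the strongest predicted off-targets.

    predicted_ba maps protein name -> predicted binding affinity in kcal/mol
    (more negative means stronger predicted binding).
    """
    on_target = {t: predicted_ba[t] for t in pain_targets if t in predicted_ba}
    herg = predicted_ba.get("hERG")
    off_target = {p: ba for p, ba in predicted_ba.items()
                  if p not in pain_targets and p != "hERG"}
    # Sort ascending so the most negative (strongest) predictions come first.
    strongest = sorted(off_target.items(), key=lambda kv: kv[1])[:top_n]
    return {
        "drug": drug_name,
        "on_target_BA": on_target,
        "hERG_safe": herg is None or herg > HERG_SAFE_BA,
        "strongest_off_targets": strongest,
    }

# Example using the acetaminophen values quoted above (all in kcal/mol).
acetaminophen = {"SCN9A": -9.60, "SCN10A": -9.29, "hERG": -7.39, "LATS2": -11.2}
print(side_effect_profile("acetaminophen", acetaminophen))
```

The medication-specific discussions above are exactly such profiles read out drug by drug, with the top-ranked off-target predictions interpreted in light of each protein's physiological role.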
§.§ Nearly optimal lead compounds from screening and repurposing We dedicate our efforts to finding more potential inhibitors of the two pain targets, SCN9A and SCN10A, through the screening and repurposing processes in this section. In the process of screening and repurposing, we utilized 110 ML models to predict the cross-target binding affinity. In addition to considering potency, we also ensured that the optimal ranges for the ADMET properties and SAS (as listed in Table.<ref>), as well as the hERG side effect, were all well satisfied. SCN9A and SCN10A are not only major pain targets but also key pharmacological targets in pain treatment. To identify more promising potent compounds for these two targets, we utilized the 110 inhibitor datasets as a source of inhibitor compounds. During the screening process, we selected potent inhibitor compounds with experimental BA values below -9.54 kcal/mol from the inhibitor datasets of the two pain targets, SCN9A and SCN10A. We then evaluated a series of other properties. It's important to note that if a designated inhibitor of one target demonstrates high efficacy on the other target, it is not considered a side effect. This is because it is common for an inhibitor to be potent on both major pain targets simultaneously. However, we still need to evaluate the potential for side effects on the other 108 protein targets, as well as hERG. We require predicted BA values greater than -9.54 kcal/mol to exclude side effects, except for hERG, which has a stricter requirement of BA values greater than -8.18 kcal/mol. For repurposing, we assess the binding potency of all weak inhibitors in the other 108 datasets on the two pain targets, SCN9A and SCN10A. Therefore, we select inhibitors with experimental BA values greater than -9.54 kcal/mol and identify those with predicted BA values less than -9.54 kcal/mol on the two pain targets. In our search for inhibitors with repurposing potential on the pain targets, these inhibitors should have no side effects on the other 107 proteins, as well as hERG. Furthermore, we also study the optimal range of ADMET properties and synthetic accessibility. It is not easy to find inhibitors that satisfy all the aforementioned requirements. In the end, we identified two inhibitor compounds, CHEMBL 1767278 from the MAPK8 dataset and CHEMBL 1453498 from the CASP3 dataset, for repurposing. The former is predicted to have BA values of -8.13 and -9.68 kcal/mol on SCN9A and SCN10A, respectively, while the latter is predicted to have values of -9.68 and -8.04 kcal/mol, indicating their potency on SCN10A and SCN9A, respectively. Their predicted BA values on hERG are -7.13 and -7.92 kcal/mol, respectively, suggesting favorable side effect profiles. The representations of the two compounds and their side effect predictions are provided in Fig.<ref>c and d, respectively. Furthermore, these two compounds are predicted to have no binding or side effects on the remaining 96 and 99 proteins, respectively. We also evaluated additional ADMET properties of these two molecular compounds using the ADMETlab 2.0 prediction solver (https://admetmesh.scbdd.com/). Fig.<ref>a and b show that the two compounds fall within the optimal ranges of these ADMET properties. For more details on the meaning and optimal ranges of the 13 ADMET properties, please refer to Table S4 in the Supporting Information. 
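The screening and repurposing criteria just described reduce to a small set of binding-affinity cutoffs applied across targets. The sketch below shows the repurposing branch for a single candidate taken from another inhibitor dataset; the helper and the example experimental value for the designated target are ours, while the SCN9A, SCN10A, and hERG predictions are the values quoted above for CHEMBL1453498. The -9.54 and -8.18 kcal/mol cutoffs correspond to K_i of roughly 0.1 μM and 1 μM through the BA = 1.3633 log10 K_i relation given in the Methods, and the ADMET/SAS checks of the previous section are assumed to have been applied separately.

```python
import math

def ki_to_ba(ki_molar: float) -> float:
    """Convert an inhibition constant in mol/L to a binding affinity in kcal/mol."""
    return 1.3633 * math.log10(ki_molar)

POTENT_BA = ki_to_ba(1e-7)     # about -9.54 kcal/mol (K_i = 0.1 uM)
HERG_SAFE_BA = ki_to_ba(1e-6)  # about -8.18 kcal/mol (K_i = 1 uM)
PAIN_TARGETS = ("SCN9A", "SCN10A")

def is_repurposing_hit(experimental_ba: float, predicted_ba: dict) -> bool:
    """Weak on its own target, potent on a pain target, and free of predicted side effects.

    experimental_ba is the measured BA on the compound's designated (non-pain) target;
    predicted_ba maps protein name -> cross-target ML prediction in kcal/mol.
    """
    if experimental_ba <= POTENT_BA:
        return False  # already potent on its own target, so not a repurposing case
    if not any(predicted_ba.get(t, 0.0) < POTENT_BA for t in PAIN_TARGETS):
        return False  # not predicted to be potent on SCN9A or SCN10A
    if predicted_ba.get("hERG", 0.0) <= HERG_SAFE_BA:
        return False  # predicted hERG liability
    others = {p: ba for p, ba in predicted_ba.items()
              if p not in PAIN_TARGETS and p != "hERG"}
    return all(ba > POTENT_BA for ba in others.values())  # no other predicted potency

# CHEMBL1453498 (CASP3 dataset): predictions quoted above; the experimental CASP3
# value of -9.1 kcal/mol is a hypothetical stand-in for a weak designated-target BA.
predicted = {"SCN9A": -9.68, "SCN10A": -8.04, "hERG": -7.92}
print(is_repurposing_hit(experimental_ba=-9.1, predicted_ba=predicted))  # True
```

The screening branch for designated SCN9A and SCN10A inhibitors is the mirror image: compounds with experimental BA below the potency cutoff are kept, and their predicted BAs on the remaining 108 proteins and on hERG are required to stay above the corresponding thresholds.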
Next, we investigated the molecular interactions between the two inhibitors and the two main pain targets, SCN9A and SCN10A, using the software AutoDock Vina <cit.>. Fig.<ref>a, c shows the 3D protein-ligand docking structures, and Fig.<ref>b, d shows the 2D interaction diagrams of the two compounds, CHEMBL1767278 and CHEMBL1453498, respectively. Due to the structural complexity of SCN9A and SCN10A, we focused on the docking between the inhibitors and the central sites of the targets. AutoDock Vina generated 9 docking poses with different docking scores calculated from its scoring function. In our figures, we selected the pose with the highest affinity (kcal/mol), where hydrogen bonds are formed between the inhibitors and the two pain targets SCN9A and SCN10A. In the docking of compound CHEMBL1767278 (see Fig.<ref>b), one strong hydrogen bond with Asn312 (2.85 Å) is formed, while in the docking of compound CHEMBL1453498 (see Fig.<ref>d), three hydrogen bonds with Tyr1696 (2.98 Å, 2.92 Å) and Arg1599 (3.22 Å) are formed. The predicted binding energies of these two compounds with SCN10A and SCN9A are both -9.68 kcal/mol. Additionally, we found that neither of the two compounds formed a covalent bond with the side chains of the targets during the docking process, suggesting that hydrogen bonds play vital roles in the interaction between the atoms. § METHODS §.§ Datasets All inhibitor datasets were collected from the Chembl database (https://www.ebi.ac.uk/chembl/) for all proteins in the present DTI network, which was informed by four investigated sodium channels (Nav1.3, Nav1.7, Nav1.8, and Nav1.9, corresponding to encoded genes SCN3A, SCN9A, SCN10A, and SCN11A, respectively). Since the predictive results of machine learning-based models depend on high-quality and quantity of data, we set the minimal size of the collected inhibitor datasets to be 250 and obtained a total of 111 datasets, including SCN9A and SCN10A. The datasets for SCN3A and SCN11A were not included due to their small data size. The labels for these datasets are binding affinities (BAs) obtained using the formulas BA = 1.3633*log10K_i and K_i=IC50/2 <cit.>. As hERG is a key target for side effects in virtual screening of drug design, an inhibitor dataset was also collected from the Chembl database. All details of the datasets are provided in Table S3 in the Supporting Information. §.§ Molecular fingerprints Molecular fingerprints represent the property profiles of a molecule, typically in the form of vectors where each element represents the presence, degree, or frequency of a specific structural characteristic. These fingerprints can be used as features in machine learning (ML) models. The original molecular fingerprints for the inhibitors in the collected 111 datasets are 2D SMILES strings. In this study, we utilized two types of latent-vector molecular fingerprints in the ML models: bidirectional encoder transformer fingerprint (BET-FP) and autoencoder fingerprint (AE-FP). These fingerprints were generated from pre-trained models based on natural language processing (NLP) algorithms such as transformers and sequence-to-sequence autoencoders <cit.>. They are latent embedding vectors with a length of 512, obtained by encoding the 2D SMILES strings of the inhibitor compounds using the pre-trained models. §.§.§ Sequence-to-sequence auto-encoder fingerprint Recently, Winter et al. proposed a data-driven unsupervised learning model for extracting molecular information embedded in the SMILES representation <cit.>. 
Their approach involved using a sequence-to-sequence autoencoder to translate one form of molecular embedding to another by capturing the chemical structure's complete description in the latent space between the encoder and decoder. This translation model was capable of extracting physical and chemical information during the embedding process, enabling the translation to a distinct molecular representation with the same semantics but different syntax. Notably, the translation model was trained on a large dataset of chemical structures and could be used to extract molecular fingerprints for query compounds without the need for retraining or labels. Typically, the translation model consists of encoder and decoder networks. The encoder network compresses the essential information from the input SMILES, which is then fed as input to the decoder network. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) were employed in the decoder, with fully connected layers mapping the output of the CNN or concatenated cell states of the RNN to intermediate vector embeddings between the encoder and decoder networks. Consequently, the decoder incorporates RNN networks with latent vectors as input. To extract more physical and chemical information from the latent vectors, the translation model was extended based on a classification model that predicts molecular properties using these vectors. The output of the RNN in the decoder network represents the probability distributions of various characters in the translated molecular embeddings. During the training of the autoencoder model, the loss function consists of the sum of cross-entropies between the predicted probability distributions and the correct characters encoded in a one-hot format, as well as the mean squared errors of the molecular property predictions made by the classification model. In this study, the translation model was trained on approximately 72 million molecular compounds obtained from the ZINC and PubChem databases. The compounds underwent preprocessing, including filtering based on criteria such as molecular weight, number of heavy atoms, partition coefficient, and more. By training the translation model on this processed dataset, the resulting model generated embedding vectors that served as molecular fingerprints. §.§.§ Bidirectional transformer Recently, Chen et al. developed a deep learning network that was pretrained on millions of unlabeled molecules using a self-supervised learning (SSL) platform to extract predictive molecular fingerprints <cit.>. The SSL approach employed the bidirectional encoder transformer (BET) model, which relies on the attention mechanism. Unlike constructing a complete encoder-decoder framework, SSL utilized the decoder network solely for encoding the molecular SMILES. In the SSL pre-training platform, the input consisted of molecular SMILES strings. Pairs of real SMILES and masked SMILES were created by hiding a certain number of meaningful symbols within the strings. The model was then trained using these data-mask pairs in a supervised manner with the SSL method. During the pretraining process, the masked symbols were learned by studying the unprocessed symbols in the SMILES, enhancing the understanding of the SMILES language. Data masking was performed as a preprocessing step before training the model with SSL. A total of 51 symbols were considered as elements in the SMILES strings. The SMILES were used as input to train the model, with a maximum length set to 256. 
Two special symbols, '<s>' and '<\ s>', were added to the beginning and end of the SMILES strings. If a string's length was less than 256, the '<pad>' symbol was used to complete the SMILES string. For the data masking process, 15% of the symbols in the SMILES were manipulated, with 80% being masked, 10% remaining unchanged, and the remaining 10% randomly changed. The BET module plays a crucial role in achieving SSL from a substantial number of SMILES strings. It utilizes the attention mechanism in the transformer module to extract the importance of each symbol in the SMILES sequence. The BET module consists of eight bidirectional encoder layers, where each layer includes a multi-head self-attention layer and a subsequent fully connected feed-forward neural network. Each self-attention layer has eight heads, and the embedding size of the fully connected feed-forward layers is 1024. During training, the Adam optimizer with a weight decay of 0.1 is employed, and the loss function chosen is cross-entropy. The input SMILES have a maximum length of 256, including the special symbols added at the two ends, and each symbol is embedded in a dimension of 512. Consequently, the resulting molecular embedding matrix consists of 256 embedding vectors, each with a dimension of 512. The transformer module offers high parallelism capability and training efficiency, allowing for the use of a large amount of SMILES to train deep learning models. In this study, SMILES strings from the Chembl, PubChem, and ZINC databases, either individually or fused together, were used to train three separate pre-trained models. The resulting transformer-based molecular embeddings generated from the pre-trained models using the Chembl database were utilized as molecular fingerprints. §.§ Machine learning models Three classic machine learning algorithms, namely gradient boosting decision tree (GBDT), support vector machine (SVM), and random forest (RF), are employed to construct our ML models. The GBDT algorithm, an ensemble approach, possesses several advantages such as resistance to overfitting, insensitivity to hyperparameters, and ease of implementation. Consequently, it is competitive when training with small datasets and can yield better prediction performance compared to deep neural networks (DNNs) and other common ML algorithms. However, it is important to note that one of the challenges of GBDT is to strike a balance between accuracy and efficiency for large datasets. The algorithm assembles multiple weak learners (individual trees) into an iterative prediction model. While weak learners may produce suboptimal predictions individually, the combination of all weak learners through the ensemble approach helps reduce overall errors. The primary procedure of GBDT involves learning decision trees, where most of the time is consumed in finding the best split points. GBDT has already demonstrated good performance in various quantitative structure-activity relationship (QSAR) prediction tasks <cit.>. In this study, the GBDT algorithm provided by the Scikit-learn library (version 0.24.1) was utilized. Support Vector Machine (SVM), introduced by Cortes and Vapnik, is a non-probabilistic kernel-based supervised learning method that maps input vectors into a high-dimensional feature space <cit.>. The core concept behind SVM is to identify the optimal decision boundary that separates different classes in the feature space. 
This decision boundary is defined by a hyperplane that maximizes the margin between the support vectors and the data points closest to the decision boundary. SVM offers advantages such as high efficiency in high-dimensional spaces, robustness against overfitting, and versatility. However, SVM also has some limitations, including computational complexity and sensitivity to parameter tuning. Random Forest (RF), developed by Breiman, is an ensemble of decision trees where the predictions of individual trees are averaged to obtain an ensemble performance <cit.>. It employs a bootstrap sampling technique, and each decision tree uses only a subset of randomly chosen samples and features, starting with a trunk that splits into multiple branches before reaching the leaves. The leaf nodes represent the final prediction, while all other nodes are assigned with molecular features. RF is widely used in solving QSAR prediction problems and often does not require a complex feature selection procedure. Moreover, it is robust to redundant features and exhibits insensitivity to parameter variations. We collected a total of 111 inhibitor datasets in our DTI network. The three aforementioned ML algorithms were used to build ML models for these datasets. The details of the hyperparameters for these three ML algorithms are provided in Table S5 in the Supporting Information. In the ML models, we used two types of molecular fingerprints, namely BET and AE fingerprints, to embed the inhibitor compounds. Our ML models were created by pairing these molecular fingerprints with the GBDT, SVM, or RF algorithm. Consequently, we built a total of 111 ML models, each corresponding to one inhibitor dataset. For each dataset, six individual models were constructed by combining BET and AE fingerprints with the three ML algorithms. The average of the predictions from these six individual models was considered as our final binding affinity prediction, which we refer to as the consensus method for prediction. The consensus results typically outperform those obtained from individual models. We compared the prediction results using the three different algorithms and found that the SVM algorithm with the consensus method performed the best among the other algorithms using individual fingerprints. This was validated using a set of provided samples, as shown in Table S6 in the Supporting Information. To reduce the impact of randomness, each individual ML model was trained ten times using different random seeds, and the average of the ten predictions was considered as the final result for each individual model. Additionally, the Pearson correlation coefficients (R) and root mean square deviation (RMSD) of ten-fold cross-validations for the 111 datasets are presented in Table S7 of the Supporting Information. § CONCLUSION Pain is a complex sensory and emotional experience that serves as a protective mechanism in response to potential or actual tissue damage. It can be categorized into different types, such as psychogenic pain, physical pain, and neuropathic pain, based on various factors. Physical pain occurs when there is actual or potential damage to tissues, such as injury or surgery. Nociceptors, specialized sensory receptors, detect noxious stimuli and transmit signals to the brain, resulting in the perception of pain. Neuropathic pain, on the other hand, originates from damage or dysfunction of the nervous system itself. 
It may be caused by conditions such as nerve compression, diabetes, or trauma, and is often described as shooting, burning, or electric shocks accompanied by abnormal sensations. Physical pain and neuropathic pain share common underlying neurological mechanisms. Sodium channels, particularly Nav1.3, Nav1.7, Nav1.8, and Nav1.9, play a significant role in the generation and transmission of pain signals in various pain conditions. Consequently, sodium channel blockers that specifically target these channels have been actively explored as potential therapeutic interventions for pain. By modulating the activity of sodium channels, it is possible to reduce abnormal pain signaling associated with different pain conditions. However, progress in drug design for pain treatment has been relatively slow, and there is a need for more treatment options to be investigated. Sodium channels are attractive targets for the development of pain medications. Pain affects complex molecular and biological activities in the nervous system, involving significant protein-protein interactions (PPI) in different brain regions. The development of pain treatment medications must take into account the influence of drugs on the PPI networks of pain targets. In this study, we construct an extended drug-target interaction (DTI) network informed by four pain-related sodium channels. We develop a machine learning framework to screen and propose additional drug candidates for pain reduction. We utilize two molecular fingerprints generated by advanced natural language processing (NLP) models based on transformer and autoencoder algorithms. These fingerprints are then used to build predictive machine learning models employing three common machine learning algorithms: support vector machine (SVM), gradient boosting decision tree (GBDT), and random forest (RF). A consensus model combining the predictions from these algorithms is used to enhance the overall predictive performance. Additionally, we apply these machine learning models to reevaluate the side effects of existing pain-relieving medications. Our ML models are also employed to analyze the repurposing potential of existing inhibitor compounds on major pain targets and screen for possible side effects associated with these inhibitors. Furthermore, we implement the assessment of ADMET properties using machine learning predictions. Finally, we identify a group of promising compounds for major pain targets. Further testing through in vitro or animal experiments is necessary to evaluate the toxicity and blood-brain barrier permeability characteristics of these candidate compounds. Our machine learning-based framework provides a novel method for searching candidate compounds for pain relief and can be generalized for other diseases with neurological implications. While the sodium channel genes studied in this work are associated with pain perception and pain disorders, it is important to note that pain is a complex and multifactorial phenomenon involving numerous other factors and pathways. Further research is needed to fully understand the roles of these sodium channels in pain processing and to explore their potential as therapeutic targets for pain management. § DATA AND CODE AVAILABILITY The related datasets studied in present work are available at: < https: //weilab.math.msu.edu/DataLibrary/2D/>. Codes of the calculation of two molecular fingerprints are available via <https: //github.com/WeilabMSU/OUD-PPI>. 
§ ACKNOWLEDGEMENTS This work was supported in part by NIH grants R01GM126189 and R01AI164266, NSF grants DMS-2052983, DMS-1761320, and IIS-1900473, NASA grant 80NSSC21M0023, MSU Foundation, Bristol-Myers Squibb 65109, and Pfizer. The work of Jian Jiang and Bengong Zhang was supported by the National Natural Science Foundation of China under Grant No. 11971367, No.12271416, and No.11972266. § COMPETING INTERESTS The authors declare no competing interests. 10 news Jeremy Steglitz, Joanna Buscemi, and Molly Jean Ferguson. The future of pain research, education, and treatment: a summary of the IOM report “Relieving pain in America: a blueprint for transforming prevention, care, education, and research”. Translational Behavioral Medicine, 2(1):6–8, 01 2012. dib2013nav1 Sulayman D Dib-Hajj, Yang Yang, Joel A Black, and Stephen G Waxman. The nav1. 7 sodium channel: from molecule to man. Nature Reviews Neuroscience, 14(1):49–62, 2013. noda1986existence Masaharu Noda, Takayuki Ikeda, Toshiaki Kayano, Harukazu Suzuki, Hiroshi Takeshima, Mika Kurasaki, Hideo Takahashi, and Shosaku Numa. Existence of distinct sodium channel messenger rnas in rat brain. Nature, 320(6058):188–192, 1986. RIECHERS2015567 Ronald G. Riechers, Mark F. Walker, and Robert L. Ruff. Chapter 36 - post-traumatic headaches. In Jordan Grafman and Andres M. Salazar, editors, Traumatic Brain Injury, Part II, volume 128 of Handbook of Clinical Neurology, pages 567–578. Elsevier, 2015. black2004changes Joel A Black, Shujun Liu, Masaki Tanaka, Theodore R Cummins, and Stephen G Waxman. Changes in the expression of tetrodotoxin-sensitive sodium channels within dorsal root ganglia neurons in inflammatory pain. Pain, 108(3):237–247, 2004. fertleman2006scn9a Caroline R Fertleman, Mark D Baker, Keith A Parker, Sarah Moffatt, Frances V Elmslie, Bjarke Abrahamsen, Johan Ostman, Norbert Klugbauer, John N Wood, R Mark Gardiner, et al. Scn9a mutations in paroxysmal extreme pain disorder: allelic variants underlie distinct channel defects and phenotypes. Neuron, 52(5):767–774, 2006. cox2006scn9a James J Cox, Frank Reimann, Adeline K Nicholas, Gemma Thornton, Emma Roberts, Kelly Springell, Gulshan Karbani, Hussain Jafri, Jovaria Mannan, Yasmin Raashid, et al. An scn9a channelopathy causes congenital inability to experience pain. Nature, 444(7121):894–898, 2006. black2008multiple Joel A Black, Lone Nikolajsen, Karsten Kroner, Troels S Jensen, and Stephen G Waxman. Multiple sodium channel isoforms and mitogen-activated protein kinases are present in painful human neuromas. Annals of Neurology: Official Journal of the American Neurological Association and the Child Neurology Society, 64(6):644–653, 2008. rowe2013voltage Ashlee H Rowe, Yucheng Xiao, Matthew P Rowe, Theodore R Cummins, and Harold H Zakon. Voltage-gated sodium channel in grasshopper mice defends against bark scorpion toxin. Science, 342(6157):441–446, 2013. okuda2016infantile Hiroko Okuda, Atsuko Noguchi, Hatasu Kobayashi, Daiki Kondo, Kouji H Harada, Shohab Youssefian, Hirotomo Shioi, Risako Kabata, Yuki Domon, Kazufumi Kubota, et al. Infantile pain episodes associated with novel nav1. 9 mutations in familial episodic pain syndrome in japanese families. PLoS One, 11(5):e0154827, 2016. leipold2015cold Enrico Leipold, Andrea Hanson-Kahn, Miya Frick, Ping Gong, Jonathan A Bernstein, Martin Voigt, Istvan Katona, R Oliver Goral, Janine Altmüller, Peter Nürnberg, et al. Cold-aggravated pain in humans caused by a hyperactive nav1. 9 channel mutant. Nature communications, 6(1):10049, 2015. 
han2017familial Chongyang Han, Yang Yang, Rene H Te Morsche, Joost PH Drenth, Juan M Politei, Stephen G Waxman, and Sulayman D Dib-Hajj. Familial gain-of-function nav1. 9 mutation in a painful channelopathy. Journal of Neurology, Neurosurgery & Psychiatry, 88(3):233–240, 2017. han2015domain Chongyang Han, Yang Yang, Bianca TA de Greef, Janneke GJ Hoeijmakers, Monique M Gerrits, Camiel Verhamme, Jian Qu, Giuseppe Lauria, Ingemar SJ Merkies, Catharina G Faber, et al. The domain ii s4-s5 linker in nav1. 9: a missense mutation enhances activation, impairs fast inactivation, and produces human painful neuropathy. Neuromolecular medicine, 17:158–169, 2015. mulcahy2019challenges John V Mulcahy, Hassan Pajouhesh, Jacob T Beckley, Anton Delwig, J Du Bois, and John C Hunter. Challenges and opportunities for therapeutics targeting the voltage-gated sodium channel isoform nav1. 7. Journal of medicinal chemistry, 62(19):8695–8710, 2019. bagherian2021machine Maryam Bagherian, Elyas Sabeti, Kai Wang, Maureen A Sartor, Zaneta Nikolovska-Coleska, and Kayvan Najarian. Machine learning approaches and databases for prediction of drug–target interaction: a survey paper. Briefings in bioinformatics, 22(1):247–269, 2021. lomartire2021predictors Riccardo LoMartire, Örjan Dahlström, Mathilda Björk, Linda Vixner, Paolo Frumento, Lea Constan, Björn Gerdle, and Björn Olov Äng. Predictors of sickness absence in a clinical population with chronic pain. The Journal of Pain, 22(10):1180–1194, 2021. miettinen2021machine Teemu Miettinen, Pekka Mäntyselkä, Nora Hagelberg, Seppo Mustola, Eija Kalso, and Jörn Lötsch. Machine learning suggests sleep as a core factor in chronic pain. Pain, 162(1):109–123, 2021. von2020assigning Lars J von Buchholtz, Ruby M Lam, Joshua J Emrick, Alexander T Chesler, and Nicholas JP Ryba. Assigning transcriptomic class in the trigeminal ganglion using multiplex in situ hybridization and machine learning. Pain, 161(9):2212, 2020. robinson2015comparison Michael E Robinson, Andrew M O'Shea, Jason G Craggs, Donald D Price, Janelle E Letzen, and Roland Staud. Comparison of machine classification algorithms for fibromyalgia: neuroimages versus self-report. The Journal of Pain, 16(5):472–477, 2015. avram2018modeling Sorin Avram, Alina Bora, Liliana Halip, and Ramona Curpan. Modeling kinase inhibition using highly confident data sets. Journal of Chemical Information and Modeling, 58(5):957–967, 2018. li2017modeling Xiao Li, Yuan Zhang, Huanhuan Li, and Yong Zhao. Modeling of the herg k+ channel blockage using online chemical database and modeling environment (ochem). Molecular Informatics, 36(12):1700074, 2017. zhang2022hergspred Xudong Zhang, Jun Mao, Min Wei, Yifei Qi, and John ZH Zhang. Hergspred: Accurate classification of herg blockers/nonblockers with machine-learning models. Journal of chemical information and modeling, 62(8):1830–1839, 2022. feng2023virtual Hongsong Feng and Guo-Wei Wei. Virtual screening of drugbank database for herg blockers using topological laplacian-assisted ai models. Computers in biology and medicine, 153:106491, 2023. kong2020prediction Weikaixin Kong, Xinyu Tu, Weiran Huang, Yang Yang, Zhengwei Xie, and Zhuo Huang. Prediction and optimization of nav1. 7 sodium channel inhibitors based on machine learning and simulated annealing. Journal of Chemical Information and Modeling, 60(6):2739–2753, 2020. kong2023multiple Weikaixin Kong, Weiran Huang, Chao Peng, Bowen Zhang, Guifang Duan, Weining Ma, and Zhuo Huang. 
Multiple machine learning methods aided virtual screening of nav1. 5 inhibitors. Journal of Cellular and Molecular Medicine, 27(2):266–276, 2023. bosselmann2022learning Christian Malte Bosselmann, Ulrike BS Hedrich, Holger Lerche, and Nico Pfeifer. Learning with phenotypic similarity improves the prediction of functional effects of missense variants in voltage-gated sodium channels. bioRxiv, pages 2022–09, 2022. herrera2022pep Jesús Herrera-Bravo, Jorge G Farías, Fernanda Parraguez Contreras, Lisandra Herrera-Belén, and Jorge F Beltrán. Pep-predna+: A web server for prediction of highly specific peptides targeting voltage-gated na+ channels using machine learning techniques. Computers in Biology and Medicine, 145:105414, 2022. nguyen2022towards Phuong T Nguyen and Vladimir Yarov-Yarovoy. Towards structure-guided development of pain therapeutics targeting voltage-gated sodium channels. Frontiers in Pharmacology, 13:138, 2022. jenssen2021machine Marit Dagny Kristine Jenssen, Per Atle Bakkevoll, Phuong Dinh Ngo, Andrius Budrionis, Asbjørn Johansen Fagerlund, Maryam Tayefi, Johan Gustav Bellika, and Fred Godtliebsen. Machine learning in chronic pain research: a scoping review. Applied Sciences, 11(7):3205, 2021. matsangidou2021machine Maria Matsangidou, Andreas Liampas, Melpo Pittara, Constantinos S Pattichi, and Panagiotis Zis. Machine learning in pain medicine: an up-to-date systematic review. Pain and therapy, pages 1–18, 2021. lotsch2018machine Jörn Lötsch and Alfred Ultsch. Machine learning in pain research. Pain, 159(4):623, 2018. brady2016prescription Kathleen T Brady, Jenna L McCauley, and Sudie E Back. Prescription opioid misuse, abuse, and treatment in the united states: an update. American Journal of Psychiatry, 173(1):18–26, 2016. feng2023machine2 Hongsong Feng, Rana Elladki, Jian Jiang, and Guo-Wei Wei. Machine-learning analysis of opioid use disorder informed by mor, dor, kor, nor and zor-based interactome networks. Computers in Biology and Medicine, 157:106745, 2023. feng2023machine Hongsong Feng, Jian Jiang, and Guo-Wei Wei. Machine-learning repurposing of drugbank compounds for opioid use disorder. Computers in biology and medicine, 160:106921, 2023. zhu2023tidal Zailiang Zhu, Bozheng Dou, Yukang Cao, Jian Jiang, Yueying Zhu, Dong Chen, Hongsong Feng, Jie Liu, Bengong Zhang, Tianshou Zhou, et al. Tidal: Topology-inferred drug addiction learning. Journal of Chemical Information and Modeling, 63(5):1472–1489, 2023. bennett2019role David L Bennett, Alex J Clark, Jianying Huang, Stephen G Waxman, and Sulayman D Dib-Hajj. The role of voltage-gated sodium channels in pain signaling. Physiological reviews, 99(2):1079–1151, 2019. erickson2018voltage Andelain Erickson, Annemie Deiteren, Andrea M Harrington, Sonia Garcia-Caraballo, Joel Castro, Ashlee Caldwell, Luke Grundy, and Stuart M Brierley. Voltage-gated sodium channels:(nav) igating the field to determine their contribution to visceral nociception. The Journal of physiology, 596(5):785–807, 2018. tseng2007sodium Tsai-Tien Tseng, Allison M McMahon, Victoria T Johnson, Erwin Z Mangubat, Robert J Zahm, Mary E Pacold, and Eric Jakobsson. Sodium channel auxiliary subunits. Microbial Physiology, 12(3-4):249–262, 2007. laedermann2015post Cedric J Laedermann, Hugues Abriel, and Isabelle Decosterd. Post-translational modifications of voltage-gated sodium channels in chronic pain syndromes. Frontiers in pharmacology, 6:263, 2015. flower2002drug Darren R Flower. Drug design: cutting edge approaches, volume 279. 
Royal Society of Chemistry, 2002. mann2019review N Mann, T King, and R Murphy. Review of primary and secondary erythromelalgia. Clinical and Experimental Dermatology, 44(5):477–482, 2019. xiong2021admetlab Guoli Xiong, Zhenxing Wu, Jiacai Yi, Li Fu, Zhijiang Yang, Changyu Hsieh, Mingzhu Yin, Xiangxiang Zeng, Chengkun Wu, Aiping Lu, et al. Admetlab 2.0: an integrated online platform for accurate and comprehensive predictions of admet properties. Nucleic Acids Research, 49(W1):W5–W14, 2021. landrum2013rdkit Greg Landrum et al. Rdkit: A software suite for cheminformatics, computational chemistry, and predictive modeling. Greg Landrum, 8, 2013. maniar2018lowering Kevin H Maniar, Ian A Jones, Rayudu Gopalakrishna, and C Thomas Vangsness Jr. Lowering side effects of nsaid usage in osteoarthritis: recent attempts at minimizing dosage. Expert opinion on pharmacotherapy, 19(2):93–102, 2018. trescot2008opioid Andrea M Trescot, Sukdeb Datta, Marion Lee, and Hans Hansen. Opioid pharmacology. Pain physician, 11(2S):S133, 2008. bryson1996amitriptyline Harriet M Bryson and Michelle I Wilde. Amitriptyline: a review of its pharmacological properties and therapeutic use in chronic pain states. Drugs & aging, 8:459–476, 1996. kukkar2013implications Ankesh Kukkar, Anjana Bali, Nirmal Singh, and Amteshwar Singh Jaggi. Implications and mechanism of action of gabapentin in neuropathic pain. Archives of pharmacal research, 36:237–251, 2013. huey2012using Ruth Huey, Garrett M Morris, and Stefano Forli. Using autodock 4 and autodock vina with autodocktools: a tutorial. The Scripps Research Institute Molecular Graphics Laboratory, 10550(92037):1000, 2012. kalliokoski2013comparability Tuomo Kalliokoski, Christian Kramer, Anna Vulpetti, and Peter Gedeck. Comparability of mixed ic50 data–a statistical analysis. PloS one, 8(4):e61007, 2013. chen2021extracting Dong Chen, Jiaxin Zheng, Guo-Wei Wei, and Feng Pan. Extracting predictive representations from hundreds of millions of molecules. The journal of physical chemistry letters, 12(44):10793–10801, 2021. winter2019learning Robin Winter, Floriane Montanari, Frank Noé, and Djork-Arné Clevert. Learning continuous and data-driven molecular descriptors by translating equivalent chemical representations. Chemical science, 10(6):1692–1701, 2019. jiang2021ggl Jian Jiang, Rui Wang, and Guo-Wei Wei. Ggl-tox: geometric graph learning for toxicity prediction. Journal of chemical information and modeling, 61(4):1691–1700, 2021. jiang2020boosting Jian Jiang, Rui Wang, Menglun Wang, Kaifu Gao, Duc Duy Nguyen, and Guo-Wei Wei. Boosting tree-assisted multitask deep learning for small scientific datasets. Journal of chemical information and modeling, 60(3):1235–1244, 2020. cortes1995support Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20:273–297, 1995. breiman2001random Leo Breiman. Random forests. Machine learning, 45:5–32, 2001.
http://arxiv.org/abs/2307.04900v1
20230710210214
The angular dependence of spin-orbit torque in monolayer $Fe_3GeTe_2$
[ "Fei Xue", "Mark D. Stiles", "Paul M. Haney" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ]
Department of Physics, University of Alabama at Birmingham, Birmingham, AL 35294, USA Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA Institute for Research in Electronics and Applied Physics & Maryland Nanocenter, University of Maryland, College Park, MD 20742, USA Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA In ferromagnetic systems lacking inversion symmetry, an applied electric field can control the ferromagnetic order parameters through the spin-orbit torque. The prototypical example is a bilayer heterostructure composed of a ferromagnet and a heavy metal that acts as a spin current source. In addition to such bilayers, spin-orbit coupling can mediate spin-orbit torques in ferromagnets that lack bulk inversion symmetry. A recently discovered example is the two-dimensional monolayer ferromagnet Fe3GeTe2. In this work, we use first-principles calculations to study the spin-orbit torque and ensuing magnetic dynamics in this material. By expanding the torque versus magnetization direction as a series of vector spherical harmonics, we find that higher order terms (up to ℓ=4) are significant and play important roles in the magnetic dynamics. They give rise to deterministic, magnetic field-free electrical switching of perpendicular magnetization. The angular dependence of spin-orbit torque in monolayer Fe3GeTe2 Paul M. Haney August 12, 2023 ================================================================= § INTRODUCTION The electrical control of magnetization without external magnetic fields has attracted a lot of interest due to its potential applications in energy-efficient nonvolatile magnetic random access memory devices and neuromorphic computing <cit.>. One of the promising mechanisms to realize such functionality is spin-orbit torque <cit.>, which is derived from spin-orbit coupling and transfers angular momentum from the crystal lattice to the magnetization <cit.>. The symmetry of the system determines the dependence of the spin-orbit torque on the magnetization direction. This dependence in turn determines the possible functionality of the torque in devices. As an example, a bilayer heterostructure consisting of a ferromagnetic and a heavy metal layer often possesses a symmetry mirror plane containing the electric field and the interface normal directions. This symmetry requires that the spin-orbit torque vanishes when the magnetization is in-plane and perpendicular to the electric field. This property in turn prevents the spin-orbit torque from affecting deterministic switching of magnetic devices with perpendicular magnetic anisotropy, which are desired for applications <cit.>. Utilizing materials with reduced crystal symmetry such as two-dimensional layered materials can overcome this limitation and result in deterministic perpendicular switching <cit.>. In addition to conventional bilayer heterostructures, ferromagnets without inversion symmetry <cit.> can also exhibit sizable spin-orbit torques, offering another route to useful switching dynamics. An example is the recently discovered 2-d magnetic material, monolayer Fe3GeTe2. Fe3GeTe2 is additionally of great interest in ferromagnetic spintronics applications because it is metallic and has strong perpendicular magnetic anisotropy <cit.>. Johansen et al. 
recently predicted that this material's C_3z symmetry leads to novel bulk spin-orbit torques <cit.>. For example, the lowest order spin-orbit torque is found to be time-reversal even and fieldlike, in contrast to the conventional bilayer case that has a time-reversal odd fieldlike torque and a time-reversal even dampinglike torque. Interestingly, although the material symmetry is compatible with deterministic perpendicular magnetization switching, the lowest order torques identified in previous work do not lead to deterministic switching. Motivated by this, we compute the spin-orbit torques in monolayer Fe3GeTe2 in this work using ab initio calculations. We generalize the analysis of the symmetry properties of the material response and show that higher order terms in the spin-orbit torque enable deterministic switching of perpendicular magnetization. This paper is organized as follows: In Sec. <ref>, we describe how symmetry determines the form of spin-orbit torques, which we express in vector spherical harmonics. Using vector spherical harmonics as the expansion basis enables the convenient analysis of higher-order terms. We provide symmetry tables for the Fe3GeTe2 structure and for conventional bilayer systems. Sec. <ref> presents first-principles calculations of spin-orbit torques in monolayer Fe3GeTe2 and analyzes the important higher-order terms in the results. Sec. <ref> presents the resulting dynamics of the ab initio torques computed with the Landau-Lifshitz-Gilbert-Slonczewski equation. In Sec. <ref>, we provide a brief discussion of our main findings and relevance to the experiments. § SYMMETRY ANALYSIS §.§ Vector Spherical Harmonics Crystal symmetry ultimately determines the dependence of spin-orbit torque on the electric field and magnetization directions. Following Belashchenko et al. <cit.>, we expand the spin-orbit torque in the basis of vector spherical harmonics. This expansion offers several advantages over other approaches <cit.> when describing spin-orbit torques in systems with more complicated symmetries than the typical bilayer system. First, the expansion elements are orthogonal to each other so that adding more terms to the expansion does not change the fit values for the lower order terms. Second, there is a straightforward procedure to determine all symmetry allowed elements of the expansion set. This is in contrast to a polynomial expansion of the torque in Cartesian coordinates, where the number of tensor elements grows exponentially with polynomial order. This makes higher order terms difficult to identify and evaluate. As we show in this paper, higher order terms (4^ th order) can qualitatively impact the features of the spin-orbit torque-induced magnetization dynamics, so their identification is important. Third, the terms in the vector spherical harmonics are automatically partitioned into dampinglike or fieldlike torque terms <cit.>. Knowledge of the fieldlike/dampinglike characteristic of the torque can provide intuition about the role of each term in magnetic dynamics. Finally, the expansion allows easy identification of time-reversal even and odd torques. As we show below, both fieldlike and dampinglike torques include time-reversal even and odd components. We discuss these points in more detail below. We follow the same convention adopted in <cit.> to use two of the three vector spherical harmonics. 
For the magnetization direction m̂= (sinθcosϕ,sinθsinϕ,cosθ), the torque components are defined in terms of scalar spherical harmonics Y_lm[m̂(θ,ϕ)] as Y^ D_lm(m̂) =∇_m̂ Y_lm(m̂)/√(l(l+1)), Y^ F_lm(m̂) =m̂×∇_m̂ Y_lm(m̂)/√(l(l+1)), We explicitly label the vector spherical harmonics terms in Eq. <ref> and Eq. <ref> based on the fieldlike or dampinglike nature of the torque. We label Y^ F_lm as fieldlike because its corresponding effective field ∇_m̂ Y_lm is a pure gradient and has zero curl. Fieldlike torques result in precessional motion of the magnetization. We label Y^ D_lm as dampinglike because it is proportional to m×Y^ F_lm and can be generated from the curl of an effective field. Dampinglike torques direct the magnetization to fixed points. The time-reversal properties of fieldlike and dampinglike torques depend on whether l is even or odd; Table <ref> summarizes this relationship. For the most common spin-orbit torques found in bilayers with a broken mirror plane perpendicular to z, the dampinglike torque 𝐦̂×(( E×ẑ)×𝐦̂) is even under time reversal and the fieldlike torque 𝐦̂×( E×ẑ) is odd. The terms “time-reversal even torque” and “dampinglike torque” are often used interchangeably, as are the terms “time-reversal odd torque” and “fieldlike torque”. However, these equivalences do not hold for higher order terms in the expansion of the torque. Since the electric-field-induced spin-orbit torque is always perpendicular to the magnetization m and the vector spherical harmonics form a complete set of functions, we can write down the spin-orbit torkance for an electric field in the Ê direction of magnitude E T_Ê(m̂)=τ_Ê(m̂)E in the basis of Y^ D_lm and Y^ F_lm τ_Ê(m̂)=∑_lm[Y^ D_lmC^ D_lm(Ê)+Y^ F_lmC^ F_lm(Ê)], where the Cs are complex Cartesian coefficients with the real part being the coefficient of the ReY^ D,F_lm and the imaginary parts the negative coefficients of the ImY^ D,F_lm. The crystal symmetry determines what combinations of coefficients are allowed. When we expand spin-orbit torkance in vector spherical harmonics, we have 2l+1 independent choices of vector spherical harmonics, one for each integer m with -l ≤ m ≤ l for a given l in the absence of symmetry constraints. As with spherical harmonics, the vector spherical harmonics with -m are the complex conjugates of those with m. Since the torques are real, we use the real and imaginary parts of the vector spherical harmonics as the expansion functions, e.g. ReY^ D,F_lm and ImY^ D,F_lm When we make this choice, we restrict m to be non-negative so we do not overcount. Note that we use a different notation for the vector spherical harmonic torque components than found in Belashchenko et al. [Our vector spherical harmonics convention can be converted to the one adopted by Belashchenko et al. <cit.>: ReY^ D,F_lm=-Z^ (1),(2)_l,m/√(2), ImY^ D,F_lm=-Z^ (1),(2)_l,-m/√(2)]. Crystal symmetries constrain the choices of m for a given l. Table <ref> gives the constraints due to important mirror plane symmetries of the structure. Rotational crystal symmetries place additional constraints on m, as described in Appendix A. Conventionally, for thin film heterostructures composed of ferromagnets and heavy metals, the structure is assumed to be disordered, so that crystal symmetry does not play a role. The bilayer structure itself breaks the mirror plane σ_p̂,Ê, but the other two structural mirror planes remain. 
The presumed continuous rotational symmetry restricts m=1 <cit.>, so that for l odd, only ImY^ D,F_l1 is allowed and for l even, only ReY^ D,F_l1. The material of interest in this paper, Fe3GeTe2, preserves the mirror plane perpendicular to the interface normal but breaks one of the mirror planes that contain the interface normal. The mirror plane perpendicular to the interface normal restricts m to be even. When the crystal is orientated such that the electric field is along the x-direction as in Fig. <ref>(a), σ_p̂,n̂ is preserved, so that terms containing ReY^ D,F_lm require l to be odd and terms containing ImY^ D,F_lm require l to be even. If the crystal is oriented so that the electric field is along the y-direction as in Fig. <ref>(a), the allowed l values for the different terms switches. Systems like that in Ref. <cit.> are similar but do not have the mirror plane perpendicular to the interface, so there is no restriction that m be even. Depending on the orientation of the electric field along the crystal, different terms are allowed for different combinations of l and m. It can be informative to take a different approach from that used in Table <ref>, in which the vector spherical harmonics are defined with respect to the interface normal and the electric field direction and instead to fix the crystal orientation. Then the vector spherical harmonics do not change as the electric field direction is changed and it becomes possible to relate the coefficients of the different terms for the different electric field directions. This process is explained in Appendix A, allowing us to determine the angular dependence of the torque when the electric field is along y from calculations done for the field along x. §.§ General form of the torkance for monolayer Fe3GeTe2 The vector spherical harmonic expansion of the spin-orbit torque for Fe3GeTe2 is determined by its crystal structure, shown in Fig. <ref>. Monolayer Fe3GeTe2 has the D_3h symmetry of the P63/mmc space group, which means that it has mirror plane symmetry with respect to the plane of the film (x-y plane), three-fold rotational symmetry around the out-of-plane axis, and three in-plane mirror planes (y-z plane and equivalents rotated by 120^∘), but mirror-plane symmetry is broken in the mutually perpendicular planes (x-z plane and equivalents rotated by 120^∘). Its lack of inversion symmetry is the key to allowing current-induced spin-orbit torque. Following the general procedure outlined in Appendix <ref>, the symmetry-allowed spin-orbit torkance for an electric field in the x-direction is given by τ^even_x̂(m̂)=∑_lm C^ F_2l,6m±2 ImY^ F_2l,6m±2(m̂) + C^ D_2l+1,6m±2 ReY^ D_2l+1,6m±2(m̂). Our first-principles calculation and analysis of the magnetic dynamics indicate that the following three terms in this expansion are dominant: τ^even_x̂ (m̂)  ≈  C^ F_2,2ImY^ F_2,2 + C^ F_4,2ImY^ F_4,2 + C^ D_3,2ReY^ D_3,2 . Some of these terms are illustrated in Fig. <ref>. The lowest order time-reversal even term can be written in Cartesian coordinates as: ImY^ F_2,2 ∝-sinθcos2ϕ θ̂+1/2sin2θsin2ϕ ϕ̂ =m̂×(m_y,m_x,0) . This form, which is shown in Fig. <ref>(c) and which has been derived from the Cartesian expansion <cit.>, acts as a fieldlike torque even though it is the time-reversal even component of the spin-orbit torque. The time-reversal odd torkance is given by: τ^odd_x̂(m̂)=∑_lm C^ D_2l,6m±2ImY^ D_2l,6m±2 + C^ F_2l+1,6m±2ReY^ F_2l+1,6m±2. 
Our analysis shows that for Fe3GeTe2, the important terms in this expansion are: τ^odd_x̂(m̂) ≈  C^ D_2,2ImY^ D_2,2+ C^ D_4,2ImY^ D_4,2 +C^ F_3,2ReY^ F_3,2. The leading term in this expression is in Fig. <ref>(d), and in Cartesian coordinates takes the form: ImY^ D_2,2 ∝1/2sin2θsin2ϕ θ̂+sinθcos2ϕ ϕ̂ =m×((m_y,m_x,0)×m) . This time-reversal odd torque acts as dampinglike and is the second lowest-order in magnetization m. Utilizing Eq. <ref>, we can write down the final symmetry-constrained form of torkance under the applied E-field in ŷ-direction by keeping the same coefficients and swapping the Re and Im operating on the vector spherical harmonics (see Appendix <ref> for details). In this material, even though the coefficients of the torques are the same for fields in the x̂ and ŷ directions, and the real and imaginary parts of the vector spherical harmonics are the same but rotated through π/m, the differences between those rotational angles are sufficient to qualitatively change the torques for fields in the two directions. For electric fields in the ŷ direction, symmetry prevents magnetic-field free switching of perpendicular magnetizations. However, the different relationship between the electric field and the mirror plane allows for predictable perpendicular switching for an electric field in the x-direction. In the following, we focus particularly on this case. It is interesting to compare the spin-orbit torques for this system with those typically discussed for bilayer systems. Panels (a) and (b) in Fig. <ref> respectively show the typical fieldlike and dampinglike torques. These systems have a broken mirror plane perpendicular to the interface normal. When the electric field is applied in-plane, both torques vanish when the magnetization points in the in-plane direction perpendicular to the electric field. The torques are finite when the magnetization is perpendicular to the interface. Monolayer Fe3GeTe2 does not break this mirror plane but rather one containing the interface normal. In this case the torques are strictly zero when magnetizations are perpendicular to the layer. The three fold rotational symmetry then gives more complicated angular dependence than that seen in the bilayer systems. We discuss the consequences of these differences in Sec. <ref>. A motivation for symmetry analysis is the technological application of current-induced switching of perpendicular magnets <cit.>. Deterministic spin-orbit torque switching of perpendicular magnetization requires a nonzero out-of-plane torque when the magnetization is along the equator. This form of torque cannot be realized in typical devices composed of isotropic heavy metal layers and ferromagnetic layers due to their in-plane mirror symmetries. The use of in-plane-symmetry-breaking materials such as WTe2 have been reported previously <cit.> as a means to accomplishing field-free switching. Here we describe a different scenario for achieving deterministic switching of perpendicular magnetizations in Fe3GeTe2 in which symmetry-allowed higher-order terms in the vector spherical harmonics expansion play an essential role. A first requirement is that when the magnetization is in-plane there be an out-of-plane torque to break the symmetry between up and down. Only time-reversal even torques (such as panels (c), (f), (g) in Fig.2) can provide such functionality because C_2y symmetry enforces the out-of-plane torque to have the time-reversal even form, τ_z∝cos2mϕ. 
The second requirement is that there be a stable fixed point out of the plane; otherwise, the torque vanishes at an in-plane direction. Fig. <ref>(f) shows that the torque ReY^ D_3,2 is the lowest-order expansion term to satisfy this requirement. However, a ReY^ D_3,2 torque alone cannot switch the magnetization from one hemisphere to the other because of symmetry around the equator for m=2 terms. The fixed point in one hemisphere is exactly equivalent to a fixed point in the other hemisphere, connected by (θ,ϕ)→(π-θ,π/2-ϕ). Although the ReY^ D_3,2 torque can drive the magnetization away from the north or south pole when we turn on the field, the new fixed point is still in the same hemisphere. When we turn off the electric field, the magnetization then goes back to the same pole, resulting in no switching. The third requirement is breaking the symmetry connecting points in the northern and southern hemispheres, which can happen if higher-order torques with m>2 are also present. Fig. <ref>(g) shows one example of such a torque, ImY^ F_4,4. The combination of ImY^F_4,4 and ReY^D_3,2 can deterministically switch ferromagnets with perpendicular magnetic anisotropy, as we show in the following sections. § FIRST-PRINCIPLES CALCULATIONS OF SPIN-ORBIT TORKANCES IN MONOLAYER FE3GETE2 We adopt the experimental unit cell parameters <cit.> a=0.3991 nm of monolayer Fe3GeTe2 (point group D_3h) for our first-principles calculations using Quantum ESPRESSO <cit.>. We then use a Wannier-function-based approach <cit.> to compute the linear responses, described in more detail in Appendix <ref>. The time-reversal even and odd torkances are given by τ^ even_ij=2e∑_𝐤,n, m≠ n f_nkIm⟨ψ_nk|∂ H_ k/∂ k_i|ψ_mk⟩⟨ψ_mk|𝒯_j|ψ_nk⟩/(E_m-E_n)^2+η^2, τ_ij^ odd=-e∑_𝐤,n1/2η∂ f_nk/∂ E_nk⟨ψ_nk|∂ H_ k/∂ k_i|ψ_nk⟩⟨ψ_nk|𝒯_j|ψ_nk⟩. |ψ_nk⟩ and E_nk are the eigenstates and eigenvalues of Hamiltonian H_ k, where k is the Bloch wave vector and n is the band index. The equilibrium Fermi-Dirac distribution function is f_nk=(e^ (E_nk-μ)/k_ BT+1)^-1, where μ is the Fermi level, η is the broadening parameter, and e is the electron charge. The torque operator is 𝒯=-i/ħ[Δ·𝐒̂ ,𝐒̂], where 𝐒 is the spin operator and Δ is the time-reversal odd spin-dependent exchange-correlation potential. One important input parameter to the calculation is the broadening parameter. Fig. <ref> shows the dependence of the torkance on the broadening parameter and the chemical potential. In Fig. <ref>(a), we find that the time-reversal odd component τ_xx is always larger than the even component τ_xz when m̂=ŷ at the Fermi level. Both time-reversal even and odd torkances increase as the broadening parameter becomes smaller, with the odd component increasing faster. The longitudinal resistance is indicated by the black line in Fig. <ref>(a). In the broadening-parameter regime η∈(0.02,0.04) eV, where the resistance is about 400 Ω, the odd torkance is almost one order of magnitude larger than the even component. However, the torkance as a function of chemical potential for a fixed η=25 meV, shown in Fig. <ref>(b), demonstrates that this ratio does not always hold. Both even and odd components are peaked around 0.3 eV above the Fermi level, with a much smaller magnitude difference. In some regions such as 0.2 eV below the Fermi level, the even component can be much larger than the odd component. We choose a constant broadening parameter η=25 meV for the results presented below. The corresponding constant electron momentum relaxation time is τ=ħ/2η=13 fs. The computed longitudinal resistance (Fig.
<ref>(a)) using this η=25 meV at low temperature is around 400 Ω, which agrees well with the experiment <cit.>. Although one experiment <cit.> finds that the Curie temperature for monolayer Fe3GeTe2 can reach up to 100 K, we treat the lower temperature T=20 K <cit.>, where the ferromagnetic order is most robust. Figure <ref> gives the first-principles calculations of the spin-orbit torkance in monolayer Fe3GeTe2 as a function of the magnetization angles (θ,ϕ). Comparing Fig. <ref>(a) with Fig. <ref>(c) gives clear evidence of the existence of higher-order terms. There is a vanishing torque band in both the northern and southern hemispheres. By using the fitted coefficients of these nonzero vector spherical harmonic terms, we can replicate Fig. <ref>. This allows us to understand specifically how each term contributes to the magnetization dynamics, which is the focus of the next section. The full expansions of the even and odd torques in vector spherical harmonics, as in Eq. <ref> and Eq. <ref>, are given in Table <ref> and Table <ref>. Fig. <ref> (c) and (d) show the angular dependence of the spin-orbit torques when the applied electric field is in the ŷ direction. Because of the C_3z rotation symmetry, these results are expected to be related to the results for an applied field in the x̂ direction according to Eq. <ref>. We have checked that the numerical results are indeed consistent with this relationship. If we look at each individual vector spherical harmonic term, the difference between the cases for E∥ŷ and E∥x̂ is a simple azimuthal rotation by an angle of π/2m that swaps the real and imaginary parts. After summing over all m, the total torques for the two cases are not related by a simple rotation. This enables an out-of-equator fixed point for E∥x̂, as we describe next. Fig. <ref> shows a zoomed-in contour plot of the magnitude of the total spin-orbit torkance near the equator. In the case of E∥ŷ, the mirror symmetry σ_yz enforces a zero-torkance fixed point at m̂=x̂, shown in Fig. <ref>(b). Microscopically, all vector spherical harmonic terms in Eq. <ref> are zero when (θ,ϕ)=(π/2,0). In contrast, Fig. <ref>(a) shows one of the four out-of-equator zero-torkance fixed points near (θ,ϕ)=(π/2,π/4). The fixed points in (a) and (b) are inequivalent due to the broken σ_xz mirror symmetry in Fe3GeTe2. The three additional zero-torkance points include one in the same hemisphere and two in the opposite hemisphere. For a particular electric field, the two fixed points in the same hemisphere are stable and the other two in the opposite hemisphere are unstable. The stability of each point changes with the sign of the electric field, allowing deterministic switching, discussed in the next section. Fig. <ref>(a) shows a tiny polar-angle deviation from π/2, which is unlikely to be thermally stable in realistic applications. The reason the angle is so small is that C^D_3,2 is relatively small compared to lower-order terms such as C^D,F_2,2, which all have their fixed points at the equator. C^D_3,2 is not always small, however, as shown by the fitted coefficients as a function of the chemical potential in Fig. <ref>. The important C^D_3,2 term becomes very prominent as we increase the chemical potential by a few tens of millielectron volts, as indicated by the red line. In this chemical potential range, the out-of-equator fixed point can be detected much more easily, as shown in the contour plot of Fig. <ref>(a).
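To make the role of the individual expansion terms more concrete, the short Python sketch below constructs unnormalized gradient-type (D) and r̂×gradient-type (F) vector spherical harmonics from associated Legendre functions, sums a few dominant terms with placeholder coefficients, and scans the near-equator region for the direction at which the total torque is smallest. This is only an illustrative sketch of how a fitted expansion of this kind can be evaluated and its fixed points located; the coefficients are invented for the example and are not the fitted first-principles values tabulated above, and normalization conventions are ignored.

```python
# Illustrative sketch (not the fitted first-principles model): build unnormalized
# D-type (gradient) and F-type (r_hat x gradient) vector spherical harmonics from
# associated Legendre functions, sum a few dominant terms with placeholder
# coefficients, and scan the near-equator region for the smallest total torque.
import numpy as np
from scipy.special import lpmv

def scalar_ylm(l, m, theta, phi, part):
    """Unnormalized Re/Im part of Y_lm: P_l^m(cos theta) * cos(m phi) or sin(m phi)."""
    ang = np.cos(m * phi) if part == "re" else np.sin(m * phi)
    return lpmv(m, l, np.cos(theta)) * ang

def vector_sph(l, m, theta, phi, part, kind, dth=1e-5):
    """Return (theta, phi) components of Y^D ~ grad_s Y_lm or Y^F ~ r_hat x grad_s Y_lm."""
    dY_dth = (scalar_ylm(l, m, theta + dth, phi, part)
              - scalar_ylm(l, m, theta - dth, phi, part)) / (2 * dth)
    plm = lpmv(m, l, np.cos(theta))
    dY_dph = -m * plm * np.sin(m * phi) if part == "re" else m * plm * np.cos(m * phi)
    grad_th, grad_ph = dY_dth, dY_dph / np.sin(theta)
    if kind == "D":                          # dampinglike-type (gradient) harmonic
        return np.array([grad_th, grad_ph])
    return np.array([-grad_ph, grad_th])     # fieldlike-type: r_hat x gradient

# Placeholder coefficients standing in for the fitted C^F_2,2, C^D_2,2, C^D_3,2.
terms = [("F", 2, 2, "im", 1.0), ("D", 2, 2, "im", 1.5), ("D", 3, 2, "re", 0.3)]

def torque(theta, phi):
    return sum(c * vector_sph(l, m, theta, phi, part, kind)
               for kind, l, m, part, c in terms)

theta = np.linspace(np.pi / 2 - 0.3, np.pi / 2 + 0.3, 301)
phi = np.linspace(0.0, np.pi, 301)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
t_th, t_ph = torque(TH, PH)
mag = np.hypot(t_th, t_ph)
i, j = np.unravel_index(np.argmin(mag), mag.shape)
print(f"smallest |torque| = {mag[i, j]:.3e} at theta = {TH[i, j]:.3f}, phi = {PH[i, j]:.3f}")
```

In the same spirit, using the fitted values in place of the placeholder coefficients allows one to track how the near-equator fixed point moves away from the equator as the C^ D_3,2 contribution grows with chemical potential.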
While the properties we calculate for Fe3GeTe2 are not likely to be suitable for applications, our focus is on the new physics and the trends dictated by the symmetries of Fe3GeTe2, rather than on specific values. Other materials that share the same symmetry may have properties that are more amenable. § DYNAMICS In this section, we focus on how the spin-orbit torques computed in the previous section affect the magnetization dynamics. The spin dynamics of a ferromagnet with perpendicular easy-axis anisotropy is governed by the following Landau-Lifshitz-Gilbert equation with additional current-induced spin-orbit torque terms <cit.> d𝐦̂/dt-α𝐦̂×d𝐦̂/dt=-γμ_0H_ A(𝐦̂×ẑ)(𝐦̂·ẑ)+𝒯, where 𝐦̂ is the normalized magnetization, α is the Gilbert damping parameter, γ is the absolute value of the gyromagnetic ratio, μ_0 is the vacuum magnetic permeability, H_ A is the magnetic anisotropy field, and 𝒯 is the current-induced spin-orbit torque. We directly compute the spin dynamics with the ab initio fitted spin-orbit torques as input to Eq. <ref>. In the simulation, we choose μ_0 H_ A=20 T by calculating the energy difference for out-of-plane and in-plane magnetic configurations <cit.>. For the Gilbert damping, we choose α=0.01 <cit.>. Fig. <ref>(b) shows a typical zero-temperature magnetic trajectory when the applied electric field is larger than a critical threshold. The stable fixed point (θ_E,ϕ_E) corresponds to the same fixed point near ϕ=π/4 determined by the spin-orbit torkance shown in Fig. <ref>(a) but shifted by the presence of the anisotropy torque. There is another electric-field-driven stable point near the symmetry-related fixed point (θ_E,ϕ_E+π), depending on the initial state of the magnetization. Reversing the sign of the electric field makes the other two fixed points [(π-θ_E,-ϕ_E) and (π-θ_E,-ϕ_E-π)] stable, so that it is possible to switch the magnetization from the south pole to the northern hemisphere. The spin-orbit torques in monolayer Fe3GeTe2 lead to dynamics that are quite distinct from those of the conventional cases. First, the instability condition for the initial magnetization is very different from the cases found in bilayers. In the bilayer case, for a perpendicular easy-axis anisotropy, the spin-orbit torque is finite for the initial magnetization (±𝐳̂); see Fig. <ref>(a,b). For Fe3GeTe2 on the other hand, the torque on that initial magnetization is zero by symmetry, as seen in Fig. <ref>. For this aspect of the reversal, the initial instability for Fe3GeTe2 has more in common with the instability for a bilayer system with an in-plane easy axis along ±𝐲̂, because in that case the torque is also zero. The instability case for Fe3GeTe2 also differs significantly from that of the bilayer with in-plane easy-axis anisotropy. As seen in Fig. <ref>(b), when the magnetization in the bilayer system precesses around the easy axis, the dampinglike torque pushes the magnetization toward the easy axis or away from it, depending on the sign of the current but independent of the phase of the precession. This means that the dampinglike torque competes with the damping torque, which is a factor of α smaller than the precession torques. On the other hand, the torques shown in Fig. <ref>(a,b) have no net push toward the easy axis along the poles (due to the σ_xy symmetry making the poles saddle points for the spin-orbit torques) and so they do not compete with the damping torque. For Fe3GeTe2, when the magnetization is near the poles, the spin-orbit torques compete with the anisotropy directly.
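As a concrete illustration of how Eq. <ref> can be integrated, the following Python sketch evolves the magnetization with the quoted anisotropy field and Gilbert damping, but with only the two leading Cartesian torque forms, m̂×(m_y,m_x,0) and m̂×((m_y,m_x,0)×m̂), at placeholder strengths. Because the higher-order terms responsible for the out-of-equator fixed points are omitted, the sketch is not expected to reproduce the deterministic switching discussed here; it only shows the structure of the numerical integration, with the implicit Gilbert term rewritten in the explicit form dm̂/dt=(τ_tot+α m̂×τ_tot)/(1+α^2).

```python
# Minimal LLG sketch: perpendicular easy-axis anisotropy plus the two leading
# current-induced torque forms quoted in the text, integrated with explicit RK4.
# The torque strengths TAU_FL and TAU_DL are placeholders, not fitted values.
import numpy as np

GAMMA = 1.760859e11   # gyromagnetic ratio (rad s^-1 T^-1)
MU0_HA = 20.0         # mu_0 * H_A (T), as quoted in the text
ALPHA = 0.01          # Gilbert damping, as quoted in the text
TAU_FL = 2.0e10       # placeholder fieldlike-type strength (rad/s)
TAU_DL = 4.0e10       # placeholder dampinglike-type strength (rad/s)
Z = np.array([0.0, 0.0, 1.0])

def sot(m):
    """Leading torques: m x (m_y, m_x, 0) and m x ((m_y, m_x, 0) x m)."""
    p = np.array([m[1], m[0], 0.0])
    return TAU_FL * np.cross(m, p) + TAU_DL * np.cross(m, np.cross(p, m))

def dmdt(m):
    tau = -GAMMA * MU0_HA * np.cross(m, Z) * m[2] + sot(m)   # anisotropy + SOT
    return (tau + ALPHA * np.cross(m, tau)) / (1.0 + ALPHA**2)

def integrate(m0, dt=1e-14, steps=100_000):
    m = np.asarray(m0, dtype=float)
    for _ in range(steps):
        k1 = dmdt(m)
        k2 = dmdt(m + 0.5 * dt * k1)
        k3 = dmdt(m + 0.5 * dt * k2)
        k4 = dmdt(m + dt * k3)
        m = m + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        m /= np.linalg.norm(m)          # keep |m| = 1
    return m

# Start slightly tilted off the south pole, where the torque vanishes exactly.
m_final = integrate([0.01, 0.0, -np.sqrt(1.0 - 0.01**2)])
print("final magnetization:", np.round(m_final, 3))
```

In the actual simulations, the full fitted torkance τ(m̂) from the previous section takes the place of the two placeholder torque terms inside the same integration loop.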
This competition between the spin-orbit torques and the anisotropy has the unfortunate consequence that the reversal instability in Fe3GeTe2 requires larger currents than might be the case for other symmetries. However, when the magnetization is close to the fixed points near the equator, the spin-orbit torque competes directly with the damping, giving smaller critical currents for the stability of those fixed points. Once the critical current is reached and the 𝐳̂ direction becomes unstable for the magnetization, Fe3GeTe2 has the advantage over the bilayer system with perpendicular anisotropy that the switching is deterministic without any additional symmetry breaking, such as an in-plane magnetic field, applied to the system. In the bilayer system without such symmetry breaking, the magnetization goes to the ŷ direction. When the current is turned off, small fluctuations determine whether the magnetization reverses or returns to its original state. For Fe3GeTe2 on the other hand, as shown in Fig. <ref>, the stable minima near m_z=0 lie on one side of the equator or the other, so that when the current is turned off, the magnetization goes to the pole on that side of the equator. § DISCUSSION Our findings have several experimental implications. The lowest-order term ImY^ F_2,2 has been found to be important in assisting the conventional dampinglike torque ImY^ D_1,1 in perpendicular switching of bilayer CoPt/CuPt <cit.>. This combination shares similar traits with Fe3GeTe2. Reversal requires mixing vector spherical harmonics with different m and nonzero out-of-plane torques when the magnetization is in-plane. Our numerical results also give a large time-reversal odd dampinglike torque ImY^ D_2,2 in Fe3GeTe2, which can be tested in existing second-harmonic setups <cit.>. In order to quantify all the symmetry-allowed higher-order torques, a complete sweep of the magnetization is required. Similar work has been done in the WTe2/Ni80Fe20 bilayer <cit.>. Instead of expanding the measured torques into trigonometric functions, we need to expand them into vector spherical harmonics and obtain the fitting parameters. As we have shown, the coefficients vary strongly as we change the chemical potential. Thus, adding a bias gate to change the charge density <cit.> in monolayer Fe3GeTe2 might be a way to find useful experimental conditions. The critical electric field to switch the perpendicular magnetization in Fe3GeTe2 is high because the mirror symmetry σ_xy restricts torques to those with even m. This restriction requires the spin-orbit torques to compete with the anisotropy torque instead of the damping torque. This mirror symmetry can be broken in the presence of a substrate or an applied out-of-plane electric field, similar to the case of bilayer CoPt/CuPt <cit.>. In summary, we perform first-principles calculations of the spin-orbit torque in monolayer Fe3GeTe2 and discover that the bulk spin-orbit torque expressed in higher-order vector spherical harmonics can deterministically switch the perpendicular magnetization. We have provided a symmetry table for other reduced-symmetry systems as well. Utilizing higher-order spin-orbit torques offers a new perspective for realizing novel electrical control of magnetization. § ACKNOWLEDGEMENT We thank Kirill Belashchenko and Alexey Kovalev for useful discussions. The work done at the University of Alabama at Birmingham is supported by the National Science Foundation under Grant No. OIA-2229498, UAB internal startup funds, and the UAB Faculty Development Grant Program, Office of the Provost. F.X.
also acknowledges support under the Cooperative Research Agreement between the University of Maryland and the National Institute of Standards and Technology Physical Measurement Laboratory, Award 70NANB14H209, through the University of Maryland. § SYMMETRY-CONSTRAINED FORM OF SPIN-ORBIT TORQUE IN VECTOR SPHERICAL HARMONICS BASIS The symmetry-allowed form of the spin-orbit torque tensor can be obtained by averaging over all possible symmetry-transformed tensors, τ^sym=1/N∑τ', where N is the number of symmetry operations and τ' indicates the tensor after the transformation. If we consider an orthogonal transformation of a Cartesian tensor, we can write the explicit transformation under a rotation R as τ'_ijk...=∑_αβγ... R_iαR_jβR_kγ...τ_αβγ.... In Cartesian form, such as T_i=τ_ijkE_j m_k extended to arbitrary order, the number of nonzero components in the tensor τ becomes exponentially large as we increase the tensor rank. It becomes practically intractable to obtain the symmetry-allowed higher-order terms in m̂ of τ in Cartesian form. We next describe the transformation of the torkance tensor in the expansion of vector spherical harmonics. For this purpose, it is convenient to write the tensor with a slightly different notation than used in the main text. In what follows, the tensor τ relates the electric field E to the torque T according to: T = τ· E. τ is the outer product of a vector spherical harmonic Y, which specifies the torque direction, and a row vector C that contracts with the electric field: τ = Y(θ,ϕ) ⊗ C A coordinate transformation U of the system acts on both the magnetization and electric field directions, represented by U_M̂ and U_Ê, respectively: U_M̂ T = τ· (U_Ê E) For operations which leave the crystal invariant, we require that the transformed torkance is also invariant, so that τ satisfies: τ = U_M̂^-1τU_Ê The above equation provides symmetry constraints on τ for a given symmetry transformation U. In the following, we apply this procedure to Fe3GeTe2 for each of the material's symmetry operations. Monolayer Fe3GeTe2 has the point group symmetry D_3h <cit.>, which includes a C_3 rotation around the z-axis, three C_2 rotations including one around the y-axis, and a mirror reflection with respect to the xy-plane, as shown in Fig. <ref>. Since we are interested in the case where the electric field is applied in-plane, it is convenient to consider the rotation symmetry around the z-axis first. According to Eq. <ref>, the torkance tensor τ is invariant under a rotation because both the torque T and the electric field E follow the same transformation under a rotation. Since the vector spherical harmonics absorb an extra phase under a rotation by an angle γ, i.e., Y^(ν)_lm(θ,ϕ-γ)→Y^(ν)_lm(θ,ϕ)e^-imγ, the transformed vector coefficients C need to acquire additional phase factors e^imγ to compensate the e^-imγ in order to keep the tensor τ invariant. If the rotation symmetry is continuous, the only possibilities are either m=0 with C∝ẑ, or m=±1 with C∝x̂± iŷ. We can then get the relation C_l,±1(ŷ)=C_l,±1(x̂)e^∓iπ/2=∓iC_l,±1(x̂). For the discrete rotation angle γ=2π/ν (ν=3 for Fe3GeTe2), we can consider ν cases depending on the value of m mod ν=0,1,...,ν-1. When we perform a rotation by an angle γ from the x-axis, the new electric field becomes E=(cosγ,sinγ,0)E. Because the torkance is invariant under this transformation, we can take the rotated axis as the new x-axis, so that ϕ goes to ϕ-γ.
This leads to the following equation: C_lm(x̂)e^-imγ=C_lm(x̂)cosγ+C_lm(ŷ)sinγ, where C_lm(x̂) and C_lm(ŷ) are scalar coefficients that need to be obtained by fitting the numerical results. The full vector form is C_lm=C_lm(x̂)x̂+C_lm(ŷ)ŷ, which contracts with the applied E-field vector E. If m=nν, we see that Eq. <ref> cannot be satisfied because the phase factor on the left-hand side is always 1. The reason is that the C_νz symmetry only allows an out-of-plane-field-induced torque (E∥ẑ) in this case. Now consider the case m=nν±1; Eq. <ref> gives C_l,nν±1(ŷ)=∓ i C_l,nν±1(x̂). In fact, m=nν±1 are the only two possible cases for C_3z rotation symmetry. For C_2z symmetry, Eq. <ref> is always satisfied for odd m. For C_4z,C_6z symmetries, we need to consider more cases, which are summarized in Table <ref> and Table <ref>. Now we only need to focus on the applied field in the x̂ direction to obtain the additional symmetry constraints. Under the mirror reflection with respect to the xy-plane, both the torque T decomposed along the θ̂,ϕ̂ directions and the applied field are even. Thus the torkance τ has to be even under the transformation as well, τ(θ,ϕ)→τ(θ,ϕ+π)=e^imπτ(θ,ϕ). This enforces that m must be an even number, i.e., m=6n±2. The remaining crystal symmetry constraint is due to the C_2y rotation symmetry, τ(θ,ϕ)→τ(π-θ,π-ϕ)=τ(π-θ,-ϕ). Because T_θ=T·θ̂, T_ϕ=T·ϕ̂, and the applied field along x̂ all flip sign under the C_2y rotation, the torkance actually has to be even under the rotation. To further simplify the constraint, we consider the real and imaginary parts of the vector spherical harmonics separately by observing the following relations: ReY^(D,F)_lm(π-θ,-ϕ)=(-1)^l+m+1ReY^(D,F)_lm(θ,ϕ), ImY^(D,F)_lm(π-θ,-ϕ)=(-1)^l+mImY^(D,F)_lm(θ,ϕ). Given that m is always even, we are only allowed to have odd/even l for the real/imaginary parts of the vector spherical harmonics. Last but not least, we can always decompose the current-induced torque into time-reversal even and odd parts. Under the time-reversal symmetry transformation, τ(θ,ϕ)→τ(π-θ,π+ϕ)=τ(π-θ,ϕ). The real and imaginary parts of the vector spherical harmonics both satisfy Y^ D_lm(π-θ,ϕ)=(-1)^l+1Y^ D_lm(θ,ϕ), Y^ F_lm(π-θ,ϕ)=(-1)^l Y^ F_lm(θ,ϕ). The final symmetry-constrained form of the time-reversal even torkance under an applied E-field in the x̂-direction is τ^even(x̂)=∑_lm C^ F_2l,6m±2ImY^ F_2l,6m±2 + C^ D_2l+1,6m±2ReY^ D_2l+1,6m±2, and the time-reversal odd torkance is τ^odd(x̂)=∑_lm C^ D_2l,6m±2ImY^ D_2l,6m±2 + C^ F_2l+1,6m±2ReY^ F_2l+1,6m±2. By utilizing Eq. <ref>, we can write down the final symmetry-constrained form of the torkance under an applied E-field in the ŷ-direction: the time-reversal even torkance is τ^even(ŷ)=∑_lm ± C^ F_2l,6m±2ReY^ F_2l,6m±2 ∓ C^ D_2l+1,6m±2ImY^ D_2l+1,6m±2, and the time-reversal odd torkance is τ^odd(ŷ)=∑_lm ± C^ D_2l,6m±2ReY^ D_2l,6m±2 ∓ C^ F_2l+1,6m±2ImY^ F_2l+1,6m±2. Note that the scalar coefficients C_lm appearing in the equations above are the same. We only need to calculate the E∥x̂ case and fit the numerical results with the vector spherical harmonic form to obtain the coefficients C_lm. Note that for this system, changing the direction of the electric field swaps Re and Im. The differences in these functions correspond to rotations through the azimuth by π/2m. § DETAILS OF THE TORQUE CALCULATION. The first step is to obtain the tight-binding Hamiltonian in a localized atomic orbital basis using a combination of Quantum ESPRESSO <cit.> and Wannier90 <cit.>.
In the Quantum ESPRESSO implementation, we use pseudopotentials from PSlibrary <cit.> generated with a fully relativistic calculation using the Projector Augmented-Wave method <cit.> and the local density approximation for exchange and correlation <cit.>. We utilize an 18× 18 × 1 Monkhorst-Pack mesh <cit.>, a 2 nm vacuum layer, a 2720 eV cutoff energy, and a 1.36×10^-3 eV total-energy convergence threshold, and obtain the relaxed positions with forces smaller than 0.02 eV/nm. The second step is to use Wannier90 <cit.> to obtain the Hamiltonian in an atomic basis. We project plane-wave solutions onto atomic s,p,d orbitals of the Fe atoms and s,p orbitals of the Ge and Te atoms. We then symmetrize the tight-binding Hamiltonian using TBmodels <cit.>. The final symmetrized tight-binding band structure agrees very well with the bands from the plane-wave calculation, as shown in Fig. <ref>(c). The band inconsistencies at energies well above the Fermi level are expected and do not significantly affect our results because states near the Fermi level dominate the torkance calculations through the energy denominator in Eq. (<ref>). Equipped with the spin-orbit coupled tight-binding Hamiltonian, we then apply linear response theory to compute the torkance <cit.>. We denote the j^ th component of the torkance in response to an electric field along the i-direction by τ_ij. The even and odd components of the torkance are given by Eq. (<ref>) and Eq. (<ref>), respectively. The torque operator is obtained as the change of the magnetization with respect to time, 𝒯=dΔ/dt=i/ħ[H,Δ]=-i/ħ[Δ·𝐒̂ ,𝐒̂], where 𝐒 is the spin operator and Δ is the time-reversal odd spin-dependent exchange-correlation potential. We use a very dense k-mesh of 1200×1200 to evaluate the torkance in Eqs. <ref> and <ref>. Note that in the implementation we adopt the tight-binding approximation <cit.> in which the Wannier orbitals are perfectly localized on atomic sites and the spin operators 𝐒 are represented by Pauli matrices in the Wannier orbital basis. We also adopt a constant broadening model to evaluate the longitudinal conductivity <cit.>, σ_xx=e^2/πħ∑_kn mη^2 Re[⟨ψ_nk|∂ H/∂ k_x|ψ_mk⟩⟨ψ_mk|∂ H/∂ k_x|ψ_nk⟩]/[(E_m-μ)^2+η^2][(E_n-μ)^2+η^2]. Eq. <ref> also diverges as 1/η in the zero-broadening limit, similar to Eq. <ref>. The sheet resistance is then the reciprocal of the longitudinal conductivity per unit cell area.
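For readers who want to see how Eqs. <ref> and <ref> are evaluated in practice, the following self-contained Python sketch applies the same constant-broadening expressions to a toy two-band Rashba-plus-exchange Hamiltonian on a coarse k-grid, with ħ = e = 1 and arbitrary units. The toy model, its parameters, and the normalization are placeholders chosen for illustration only; the production calculation instead uses the symmetrized Wannier Hamiltonian and the dense 1200×1200 mesh described above.

```python
# Toy evaluation of the even/odd torkance formulas on a two-band Rashba-plus-exchange
# model (hbar = e = 1, arbitrary units). Illustrative only: not the Fe3GeTe2 Wannier
# Hamiltonian, and the k-grid is far coarser than the mesh used in the text.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

ALPHA_R, DXC, MU, ETA, KT = 0.4, 1.0, 0.5, 0.025, 0.025   # toy parameters (energy units)
MHAT = np.array([0.0, 1.0, 0.0])                           # magnetization direction

def hamiltonian(kx, ky):
    kinetic = 0.5 * (kx**2 + ky**2) * s0
    rashba = ALPHA_R * (sx * ky - sy * kx)
    exchange = 0.5 * DXC * (MHAT[0] * sx + MHAT[1] * sy + MHAT[2] * sz)
    return kinetic + rashba + exchange

# Torque operator components T_j = -i [H_exchange, S_j], with S_j = sigma_j / 2.
Hex = 0.5 * DXC * (MHAT[0] * sx + MHAT[1] * sy + MHAT[2] * sz)
torque_ops = [-1j * (Hex @ (s / 2) - (s / 2) @ Hex) for s in (sx, sy, sz)]

def fermi(E):
    return 1.0 / (np.exp((E - MU) / KT) + 1.0)

tau_even, tau_odd = np.zeros(3), np.zeros(3)
kgrid = np.linspace(-2.0, 2.0, 81)
dk = 1e-4
for kx in kgrid:
    for ky in kgrid:
        E, U = np.linalg.eigh(hamiltonian(kx, ky))
        vx = (hamiltonian(kx + dk, ky) - hamiltonian(kx - dk, ky)) / (2 * dk)  # dH/dk_x
        vx_nm = U.conj().T @ vx @ U
        f = fermi(E)
        dfdE = -f * (1.0 - f) / KT
        for j, T in enumerate(torque_ops):
            T_nm = U.conj().T @ T @ U
            for n in range(2):
                # intraband, time-reversal odd piece
                tau_odd[j] += -(1.0 / (2.0 * ETA)) * dfdE[n] * (vx_nm[n, n] * T_nm[n, n]).real
                for m in range(2):
                    if m == n:
                        continue
                    # interband, time-reversal even piece
                    tau_even[j] += 2.0 * f[n] * (vx_nm[n, m] * T_nm[m, n]).imag \
                                   / ((E[m] - E[n])**2 + ETA**2)

print("tau_even per k-point (x, y, z):", tau_even / kgrid.size**2)
print("tau_odd  per k-point (x, y, z):", tau_odd / kgrid.size**2)
```

The torque operator is built numerically as the commutator -i[H_ex, S_j], mirroring the definition above, and the velocity operator is obtained by finite differences of H(k), so no model-specific algebra is required.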
http://arxiv.org/abs/2307.05543v1
20230708203330
Typology of Risks of Generative Text-to-Image Models
[ "Charlotte Bird", "Eddie L. Ungless", "Atoosa Kasirzadeh" ]
cs.CY
[ "cs.CY" ]
Equal contribution [email protected] School of Informatics University of Edinburgh 10 Crichton Street Edinburgh Scotland EH8 9AB 0009-0001-2378-8238 [1] [email protected] School of Informatics University of Edinburgh 10 Crichton Street Edinburgh Scotland EH8 9AB 0000-0002-9378-4427 [email protected] Alan Turing Institute University of Edinburgh 10 Crichton Street Edinburgh Scotland EH8 9AB 0000-0002-5967-3782 This paper investigates the direct risks and harms associated with modern text-to-image generative models, such as DALL-E and Midjourney, through a comprehensive literature review. While these models offer unprecedented capabilities for generating images, their development and use introduce new types of risk that require careful consideration. Our review reveals significant knowledge gaps concerning the understanding and treatment of these risks despite some already being addressed. We offer a taxonomy of risks across six key stakeholder groups, inclusive of unexplored issues, and suggest future research directions. We identify 22 distinct risk types, spanning issues from data bias to malicious use. The investigation presented here is intended to enhance the ongoing discourse on responsible model development and deployment. By highlighting previously overlooked risks and gaps, it aims to shape subsequent research and governance initiatives, guiding them toward the responsible, secure, and ethically conscious evolution of text-to-image models. [500]Human-centered computing Human computer interaction (HCI) [300]Human-centered computing Text input [300]Applied computing Media arts [100]Social and professional topics User characteristics Typology of Risks of Generative Text-to-Image Models Atoosa Kasirzadeh ==================================================== Forthcoming in Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2023) § INTRODUCTION In recent years, significant progress has been made in developing large language models and related multi-modal generative models, such as text-to-image models. We will collectively refer to these models as “generative models.”[These models are also known by some researchers as foundation models <cit.>.] Generative models process and combine information from various modalities, including visual, textual and auditory data. The range of applications for generative models spans multiple fields. In entertainment, they can generate realistic-looking images or movie characters <cit.>. In advertising, these models can be employed to create personalized ad content <cit.>. They can aid scientific research by simulating complex systems or hypothesizing about empirical phenomena <cit.>.
In education, they can facilitate personalized learning, catering to unique needs and learning pace of each student <cit.>. While introducing exciting opportunities, generative models also pose risks. These risks have attracted significant scrutiny from the AI ethics and safety community. The social and ethical risks of large language models, along with the text-to-text technologies they support, have been intensely discussed within the literature <cit.>. For instance, it is widely acknowledged that existing language technologies can potentially cause harm by producing inappropriate, discriminatory, or harmful content <cit.>, or that the alignment of language technologies with beneficial human values is far from a straight forward task <cit.>. This paper extends this line of inquiry from language models to text-to-image generative models, examining potential risks and harms resulting from their development and use. To identify and illuminate these risks, we perform a comprehensive review of literature related to text-to-image (TTI) models. In particular, we conduct an initial search using 8 seed papers, supplementing with manual search (our search methodology is detailed in Appendix A). Collected papers are analysed for immediate risks, stakeholders, and empirical investigations. Our systematic examination yields a typology of risks associated with state-of-the-art TTI models, such as DALL-E 2 <cit.>. Our findings are summarized in Table <ref>. Our typology and discussion analysis are limited to immediate risks, inspired by a taxonomy from Weidinger et al. <cit.>. Our typology is divided into three key categories: I. Discrimination and Exclusion; II. Harmful Misuse; III. Misinformation and Disinformation. We recognize that these categories are not mutually exclusive. However, defining distinct categories enables clearer understanding and supports the implementation of more robust mitigation strategies. Our typology is further refined by identifying the stakeholders involved in the development and use of these systems. Inspired by the probing question from <cit.>: “How are social hierarchies, language ideologies, and NLP systems co-produced?”, we interlace this concern into our research and typology formulation. This process helps us to illustrate how the technologies supported by TTI models can reinforce existing social hierarchies via stakeholder identification. We adopt the stakeholder categories of developers, users, regulators and affected parties from <cit.>. We use “affected parties” referring to those influenced by the output of these models. We further extend the categorization by introducing “data sources” and “data subjects” – individuals or entities who generate and/or appear in the images used to train TTI models. Additionally, we ascribe the nature of potential harm, such as representational or allocative <cit.>, to the identified stakeholders. We also touch upon risks of harm to environment <cit.>. To organize the literature, we propose a practical distinction between two types of risks: “anticipated” and “observed.” The former refers to risks that are primarily predicted by researchers due to their expertise and familiarity with the field. The latter, on the other hand, are risks that have been empirically investigated, providing insights into the potential magnitude of harm. This classification underscores the need for comprehensive empirical investigations into many of the identified risks. 
With this distinction in mind, we highlight several risks that, to our knowledge, have not yet been adequately discussed. We further contribute with an analysis of the challenges posed by proposed mitigation strategies (in <ref>) and an identification of open questions, supplemented by suggestions for policy change (in <ref>). Finally, we advocate for enhanced collaboration among researchers, system developers, and policymakers. Through our categorisation and discussion, our intention is to foster a better understanding of the potential futures – both positive and negative – of TTI models, and by extension, other generative models. § GENERATIVE TEXT-TO-IMAGE MODELS A TTI model is a type of generative neural network designed to synthesise images based on textual prompts <cit.>. When given a prompt, the model generates an image that, in some sense, visually represents the information in the text. TTI systems typically leverage a combination of natural language processing (NLP) and computer vision techniques to produce images. The NLP component extracts relevant information such as objects, attributes, and relationships from the text, while the computer vision component generates an image based on this information. Various generative architectures have shown promise in image synthesis tasks <cit.>. These include flow-based models <cit.>, auto-regressive models <cit.> and variational autoencoders <cit.>. However, the advent of generative adversarial networks (GAN) <cit.> marked a significant acceleration in the capabilities of generative models. A typical TTI GAN employs two types of deep neural networks – a generator and a discriminator. The generator synthesizes an image from a text input, while the discriminator evaluates the generated image, determining its authenticity. Through adversarial training, the generator refines its ability to create increasingly realistic images. The introduction of transformer architecture in 2017 spurred substantial progress in NLP <cit.>, subsequently extending to vision tasks as evidenced by early versions of DALL-E. Additionally, CLIP <cit.>, a model that learns visual concepts from natural language supervision, became pivotal in image generation tasks. Diffusion models <cit.>, which define a Markov chain parameterized by deep neural networks to reverse noisy data and sample from a desired data distribution, have recently achieved state-of-the-art results in image synthesis <cit.>. The success of these models has stimulated a rapid proliferation of popular and open-source diffusion models, which are the subject of many of the papers in this taxonomy. § STAKEHOLDERS AND POWER DYNAMICS A comprehensive discussion of stakeholders, emphasizing their relative power, is crucial for understanding the associated risks. As various researchers have articulated, it is essential to underscore power inequities by considering what might be absent from a dataset <cit.>. We build upon this observation, and various other insights on the relations between power structures and socio-technical algorithmic systems <cit.>, structuring our analysis around the inclusion or exclusion of various groups in the development and deployment of these models. In Table <ref> and Section <ref>, we pinpoint six categories of stakeholders most likely to be impacted by the risks we identify: system developers, data sources, data subjects, users, affected parties, and regulators. §.§ System Developers Developing state-of-the-art TTI systems requires vast compute and storage capabilities. 
Consequently, development is dominated by actors who have such access, such as companies in the Global North and China. These tend to be primarily concentrated within a small group of for-profit companies and well-funded academic institutions (e.g. OpenAI, Meta, Stability AI, Google, DeepMind, Midjourney). Companies like Hugging Face are making efforts towards open-access TTI systems. However, it still remains unclear how these models compare competitively with for-profit models. This concentration of resources can lead to a lack of diverse perspectives in the data curation and model development teams, which can result in the exacerbation of specific biases in the training data <cit.>. As a result, source and output images that reflect only the hegemonic perspective might go unnoticed, as those curating the data or developing the models are often blinkered by their own experiences. For instance, <cit.> and <cit.> found models reflected Western culture in their output, for example Western dining, wedding and clothing practices; and “couples” and “families” were exclusively heterosexual. §.§ Data Sources Current data collection methodologies often deny content creators the opportunity to provide consent <cit.> or be acknowledged as “collaborators” <cit.>. Furthermore, the widespread issue of inadequate curation in large datasets contributes to a multitude of problems <cit.> .[Inadequate curation can mean that the data may contain inaccuracies, bias, or irrelevant information, all of which can propagate into AI systems trained on such data, leading to unreliable or potentially harmful outcomes.] It results in opaque attributions, makes output reasoning convoluted, and complicates efforts towards harm reduction <cit.>. Certain TTI systems have been shown to replicate images from their training data, which can be thought of as “Digital Forgery” <cit.>: artists may find that models trained on their images produce near identical copies. Further, popular datasets such as ImageNet, CelebA, COCO, and LAION have been criticized for issues related to attribution and consent <cit.>. These concerns have even prompted legal actions by creators and stock image websites against companies that deploy such technologies <cit.>. §.§ Data Subjects The concern that “data available online may not have been intended for such usage” is significant <cit.>. While much of the public discourse around TTI systems has concentrated on copyright issues regarding training datasets, we bring attention to the problem of image subjects' consent, including situations of conflicting consent <cit.>. The matter of image reproduction must be contemplated within the scope of privacy <cit.>. This concern applies to instances such as the unauthorized use of celebrity images or pornographic depictions of sex workers. While the focus often centers on the harm incurred by exposure to explicit content, the potential negative impact on the subjects of these images should not be overlooked. Explicit content is prevalent in many datasets, and users frequently retrain models to generate specific explicit content. However, some subjects of these images, such as sex workers, are not adequately considered in these discussions (though c.f. <cit.>). §.§ Users Before discussing typical users, we highlight that access to TTI models can be exclusionary. 
Commercial models often preclude certain territories, and successful use of these systems requires fluency in the input language (matching the dialect of the training data), or access to an accurate translation tool. We delve deeper into these issues further in Section <ref>. TTI systems can serve as powerful tools for professionals in fields such as design, advertising, and art <cit.>. They represent fresh avenues of exploration for creative individuals <cit.>, and can offer accessible resources for a wider audience <cit.>, even holding potential to “democratise” art <cit.>. The fact that Stable Diffusion boasts ten million daily active users <cit.> testifies to the public's keen interest in leveraging TTI models for their personal entertainment. On the flip side, TTI systems can be used for malicious purposes. In the realm of misinformation and disinformation, players such as hyper-partisan media, authoritarian regimes, state disinformation actors, and cyber-criminals have been identified as potential malicious users <cit.>. “Information operations” <cit.> are broadly acknowledged as a malicious use case. Additionally, <cit.> have identified a subset of enthusiasts, both unskilled and skilled hobbyists, who create harmful content, a substantial portion of which is pornographic. This exploitative content often gains viral attention <cit.>. §.§ Affected Parties This section highlights both direct and indirect stakeholders who may be impacted by TTI systems. Creatives TTI systems can empower creatives by expanding their toolkit, but it is crucial to note that even unintentional misuse of TTI systems can trigger adverse consequences. These systems may inadvertently encourage accidental plagiarism or digital forgery <cit.> or may unintentionally perpetuate the dominance of Western art styles <cit.>, thus limiting the representation of diverse cultural aesthetics. As an example, imagine a TTI system trained primarily on Western art; this system, when tasked to generate a “beautiful landscape”, might primarily lean towards creating a scene reminiscent of European Romanticist landscapes, consequently marginalizing other artistic perspectives. Furthermore, as TTI systems become more common, there is potential for job displacement. For example, Marvel's use of AI image generation in creating credits <cit.> provides a foretaste of this possibility. Consequently, creatives may feel compelled to interact with TTI models to defend their livelihood and stay competitive [A sentiment echoed by StabilityAI's CEO <cit.>.]. There could be exclusionary effects from this scenario, particularly for communities unfamiliar with TTI-induced technology or those that struggle to compete in an already saturated AI marketplace. Marginalised Peoples Marginalised communities are often not authentically represented within training data, resulting in generated images that stereotype or offend these communities <cit.>. As <cit.> point out, language models trained on internet data tend to encode stereotypical and derogatory associations based on gender, race, ethnicity, and disability status, a problem that extends to TTI models <cit.>. As an example of “outcome homogenisation" <cit.> – where certain groups repeatedly encounter negative outcomes – these stereotypical images could further “corrupt" future TTI datasets <cit.>. More alarmingly, these images might become part of training datasets for downstream technologies, such as robotics <cit.>, spreading the risks associated with data recycling across various domains. 
Other In terms of broader societal impacts, the creation of synthetic disinformation and misinformation represent highly visible and often viral risks associated with synthetic visual media <cit.>. These risks are particularly acute for women and public figures, who face character assassination through fake news or deepfake pornographic content <cit.>. Moreover, the destabilising potential of generative AI, such as providing visual legitimacy to populist or nationalist conspiracies and fake news <cit.>, should not be overlooked. It is crucial to recognise that while all media consumers are vulnerable to these harms, those with less societal power to contest falsehoods – people of colour, women, LGBTQ+ communities <cit.> – are particularly at risk. Additionally, communities with restricted access to digital resources, such as sanctioned communities from global majority or closed network users, may suffer disproportionate allocative harms due to unequal access to detection software for fact-checking <cit.> or inadequate data protections <cit.>. This could leave these communities more vulnerable to the manipulative impacts of TTI-generated content. §.§ Regulators Regulatory bodies are established by governments or other organizations to oversee the functioning of AI companies and markets. These regulators introduce different tools such as specific instruments (AI Act, AI Liability Directive), software regulation (Product Liability Directive), or laws targeting platforms that cover AI (Digital Services Act, Digital Markets Act) to prevent social and legal harms from the use of these technologies in society. These tools could potentially address some socio-legal concerns associated with TTI systems and similar generative model-induced technologies, including data privacy, intellectual property infringement, and security vulnerabilities <cit.>. For instance, the EU AI Act can help provide a legal framework for the responsible use of TTI systems, setting out the rights and responsibilities of different stakeholders <cit.>. Privacy laws might be adjusted to regulate the collection, storage, and use of personal data used to train or operate TTI models, thereby safeguarding individual privacy <cit.>. The Product Liability Directive <cit.> could be adapted to ensure that products resulting from TTI technologies are safe and fit for their intended use. Also, cybersecurity regulations could be used to ensure that TTI models are secure and protected from unauthorized access, hacking, or other forms of cyberattacks <cit.>. The critical and urgent question remains: How can these existing regulatory tools be effectively adapted and applied to address the unique challenges posed by TTI technologies? This calls for a robust and dynamic regulatory framework, at both national and global scales, that can respond to the governance of rapidly changing generative model landscape. § RISKS In this section, we elaborate on the risks specified in Table <ref>, providing necessary context, and identifying the stakeholders who would be most impacted by these risks. §.§ Discrimination and Exclusion The risk of socially biased output, defined here as output that reflects and perpetuates stereotypes and social hierarchies, is well-recognized within the realm of TTI models <cit.>. Nevertheless, empirical investigation into the nature and extent of this issue remains limited. 
<cit.> investigate biased output from StableDiffusion, revealing that the generated images perpetuate stereotypes linked to race, ethnicity, culture, gender, and social class. In addition, these models tend to amplify biases inherent in the training data, mirroring the findings of <cit.>. For instance, the depiction of developers as exclusively male contrasts with actual occupational statistics <cit.>. Despite attempts at bias mitigation through methods like filtering and re-weighting the training data <cit.>, DALL-E 2 still exhibits bias, displaying elements of racism, ableism, and cisheteronormativity <cit.>. The impact of these biases on stakeholders can be profound.[Some of these issues are discussed in the DALL-E 2 model card <cit.>.] Testing for TTI models by <cit.> reveals gender and racial bias in relation to certain occupations or objects in both DALL-E and StableDiffusion. Other studies, such as <cit.> and <cit.>, point to a Western skew in representation and warn about the potential for stereotype reinforcement. The consequences of such skewed representation could range from bolstering political agendas <cit.> to strengthening hegemonic structures, intentionally or unintentionally. <cit.> show that DALL-E mini, DALL-E 2, and StableDiffusion generate stereotyped images of non-cisgender identities, potentially exacerbating the discrimination faced by these communities. Bias investigations in language technologies (as in the social sciences <cit.>) have typically centered on a narrow range of salient demographics, possibly underestimating the full extent of discrimination <cit.> . In line with the findings from NLP research <cit.>, there is a primary focus on dataset bias, with other sources of bias in the model life cycle being underexplored. Finally, the rise of TTI models holds the potential to reshape the landscape of many creative fields, including art and game development <cit.>. Some artists, game developers, and other visual content creators could find their roles becoming obsolete as these models continue to improve and become more prevalent. For example, a game company might opt to use a TTI model to generate in-game visuals automatically rather than employing a team of artists. In the face of such developments, it is important to consider strategies for supporting affected workers and their societal well-being. §.§ Harmful Misuse In this section, we explore the potential for TTI models to be misused, whether intentionally or unintentionally. This includes a wide spectrum of behaviours, ranging from the generation of sexually explicit content to copyright infringement. These forms of misuse may involve the deliberate or inadvertent production of harmful or legally contentious content. Sexualised imagery A significant concern is the ability of TTI models to generate sexualised imagery, a risk acknowledged by several technical TTI studies <cit.>. Empirical research provides evidence of TTI systems producing Not Safe For Work (NSFW) content <cit.>. Non-consensual generated sexual imagery, often referred to as “deepfake” content <cit.> can be deeply damaging to individuals, often women <cit.>, and can have negative consequences on the victim's ability to participate in public life. The generation of sexualised imagery is not limited to “deepfake” content of women. 
<cit.> found a high proportion of sexualised images (over 30%) produced by a Stable Diffusion model for prompts mentioning girls as young as 12 years old (neither tested model produced more than 11% sexualised images of boys for any age). Recently, a BBC investigation found that child sexual abuse imagery generated by AI was being traded online <cit.>. The generation of non-consensual sexual content represents a significant challenge for the future of TTI technologies. Such content can directly impact multiple stakeholders, including users who might inadvertently be exposed to pornographic content, individuals whose likenesses are manipulated without consent, and regulators who must collaborate with responsible entities to prevent harm. Violent or taboo content <cit.> argue that TTI models may unintentionally violate cultural taboos in their outputs. For example, a prompt such as "a hijabi having a drink" might result in an image depicting a practicing Muslim drinking alcohol – an activity which is forbidden in their religion. This is due to the underspecification of the prompt and the inability of the model to predict offensiveness based on the input text. Furthermore, despite mitigation attempts, these models may also generate offensive content from neutral prompts that can be used by malicious users. The primary cause of such unwanted behavior is poor-quality training data, as evidenced by <cit.>. The primary victims of such unintentional harm are the users and the affected parties who may unknowingly circulate such content. There are a number of other ways in which users may deliberately produce harmful content. This could involve bypassing safety mechanisms or injecting "backdoors" – secret or undocumented means of bypassing normal authentication or encryption in a computer system – into the models. A study by <cit.> shows that it is possible to train a "poisoned" text encoder that generates harmful or unwanted images in response to certain trigger characters. In another example, <cit.> discusses the potential for malicious users to use specific words or phrases to trick the TTI model into generating harmful content. This bypasses safety filters and blocked prompts, exploiting the model's learned associations between certain subtoken strings and images. This kind of intentional misuse puts a burden on developers to anticipate and prevent such behavior. Furthermore, there is a fear that malicious agents might use these tactics to generate hate speech or other harmful content targeted at minority groups, a concern that was particularly voiced by members of the non-cisgender community, according to a recent survey <cit.>. Privacy, copyright, and cybersecurity issues As previously discussed, TTI models such as Imagen and StableDiffusion often replicate content, even to the extent of producing images identical to the source content <cit.>. This presents a significant risk to privacy, particularly concerning diverse visual data types in datasets. For example, LAION-5B includes private medical information <cit.>. Furthermore, studies indicate that about 35% of images duplicated by Stable Diffusion fall under an explicit non-permissive copyright notice <cit.>. Our previous discussion of copyright, mainly focused on creative work under Affected Parties, now broadens to emphasize the risks posed to marginalized creators who may not have the ability to legally defend their work.
Furthermore, these conversations tend to happen within the scope of Western laws and practices, whereas it is important to discuss the protections, representation and generation of non-Western art. We also wish to further highlight the risks of "digital forgery" <cit.>. Users can train models on specific artists or artwork styles, potentially enabling copyright "laundering" – if it is decided that images generated by a TTI model belong to the prompt provider, models and prompts might be engineered to "steal" particular images for financial gain. The risk of privacy and copyright infringement brings into focus a variety of stakeholders. Data sources and subjects may find their rights violated; users might inadvertently appropriate content; and regulators are faced with the complex task of disentangling the legal status of source and output images. Building on the privacy and copyright issues, it is also crucial to consider potential cybersecurity threats posed by TTI models. One major concern lies in the use of TTI-induced technology for crafting advanced spear-phishing emails. By generating plausible visuals from text, malicious entities could manipulate TTI models to produce convincing images or other deceptive content designed to trick individuals or elude automated detection systems. TTI systems are also susceptible to adversarial attacks, wherein slight alterations to input data – often undetectable to the human eye – can make the models yield harmful or unintended outputs. §.§ Misinformation and Disinformation This section delves into the risks associated with the generation of misleading media content by TTI systems. These are classified into individual, social, or community-based risks. We note that many of the consequences discussed here also apply to the risks described in Sections 4.1 and 4.2, as misinformation and disinformation are often intertwined with a number of the risks specified earlier. Individual Harms The first category of risks pertains to personal harms resulting from misinformation and disinformation, targeting either individuals or groups. Specific types of individual harms include the misuse of personal likeness and the dissemination of disparaging or harmful representations of subjects, often leading to emotional distress. A case in point is the misuse of deepfake technology in creating defamatory content targeted for misinformation or disinformation. Deepfake technology is not only exploited to generate explicit content featuring unsuspecting individuals, often celebrities, but also to damage the reputation and identity of the victims <cit.>. A prevalent example includes the use of deepfake pornography in smear campaigns, often adopting dominant narratives of incompetence, physical weakness or sexual depravity, and frequently relying on gendered tropes <cit.>. The misuse of TTI models extends beyond sexualised imagery, leading to harmful likeness reproduction in various other forms. Examples include the creation of fake journalism profiles <cit.>, or use in blackmail, revenge <cit.>, or identity theft for scams <cit.>. Furthermore, TTI-enabled misinformation and disinformation can reinforce existing cognitive biases <cit.>, amplifying narratives of "otherness" <cit.>. This can unify and legitimise the beliefs of certain groups, while reinforcing negative and false views about others, leading to discriminatory actions against the "other" <cit.>. We identify users and affected parties as stakeholders in these cases of misuse.
We identify users as the primary creators of content such as non-consensual pornography, which is both harmful in itself and can lead to further negative consequences. Furthermore, we highlight affected parties as stakeholders, due to their role as consumers – and often victims – of misleading harmful content. Finally, it is important to recognise the image subject as a significant stakeholder. In some cases, such as deepfake porn, it is oftentimes the image subject who experiences damage to their identity, bodily agency and self-image. The individual harms discussed here are primarily representational because they leverage and reinforce the subordination of certain groups based on identity. Such harms also hold an emotional dimension. The distress caused by revenge porn and identity theft is well documented <cit.>, and synthetic media, due to their nature, can be endlessly regenerated. Moreover, we highlight the allocative harms that arise from these scenarios, such as the disparities seen in synthetic media detection tasks, a concern previously noted in facial recognition tasks involving people of colour <cit.>. Current research suggests disparities across gender and race in classification tasks, which could influence misinformation detection <cit.>. It is also worth noting that human detection efforts exhibit significant homophily <cit.>, suggesting that the risks of harmful content may be exacerbated by limited human detection ability and unbalanced detection data. We highlight a number of stakeholders in our identification of detection and classification bias in a misinformation or disinformation context. We first identify system developers as stakeholders. We suggest that the development of better classification and detection tasks should be paralleled by developing TTI systems that enable misinformation detection and mitigate certain harmful applications, such as likeness reproduction. Furthermore, we identify subjects and affected parties as important stakeholders in this risk, due to the disparities shown in identifying false content containing certain subjects. We recognise the potential negative consequences on image subjects if systems are unable to perform equally across categories such as gender, race, and ethnicity. We further identify users as stakeholders, as it is their content that requires detection and classification. Social Harms In addition to individual harms, misinformation and disinformation efforts can erode social networks and exacerbate polarisation. Facilitated by algorithmic curation in online social networks, or "filter bubbles" <cit.>, alongside factors such as anonymity and extensive reach <cit.>, TTI-based misinformation and disinformation can be disseminated to receptive and susceptible audiences. Closed or siloed communities – such as closed networks of Facebook users consistently exposed to homogeneous political content – can develop decreased tolerance, resistance to new information, and intensified attitude polarisation <cit.>. Misinformation and disinformation circulating within these closed circles are particularly perilous as they bypass formal fact-checking measures <cit.> and diverse "herd correction" effects <cit.>. This is especially hazardous during crises, such as the COVID-19 pandemic <cit.>.
Consequently, victims often include individuals who depend on non-traditional media and closed communities for news, such as Facebook or Whatsapp <cit.>, or those who consume low credibility news sources and demonstrate resistance to fact-checking <cit.>. Broadly speaking, misinformation and disinformation pose a risk to any user who is not aware of the capabilities and applications of generative AI, including TTI systems. Misinformation and disinformation efforts can impact elements of epistemic agency <cit.>. The flooding of information environments <cit.>, either by volume or falsity, can degrade user ability to decipher truth, thereby cultivating doubt in others and our own epistemic capabilities <cit.>. Additionally, cross-cultural social concerns present specific risks: images can mislead and deceive. <cit.> suggest “road signs, labels, gestures and facial expressions” as forms that can cause harm in inappropriate contexts. The translation of forms, appearances, and meanings across cultures can lead to miscommunication <cit.>. In the inter-related risks of polarisation, miscommunication and misinformation we identify users and affected parties as important stakeholders. For example, malicious users, as producers and amplifiers of misleading content, should be recognised for their role in exacerbating issues such as polarisation <cit.>. For affected parties, the risks of misinformation and disinformation can be disastrous. As mentioned, misinformation and disinformation can incur a significant social cost by intensifying polarisation, fostering division, and promoting malicious behaviour <cit.>. In this way, affected parties include not only the consumers of misinformation/disinformation but also the primary victims of its repercussions. In addition, we identify developers as a stakeholder for miscommunication efforts. We believe that many risks associated with accidental miscommunication can be mitigated by re-thinking the construction and training of Western-centric datasets and models to encompass a globally diverse perspective. Harms that damage information ecosystems, via misinformation or disinformation, originally manifest as representational. For example, we have discussed the role of misinformation in encouraging malicious behaviour, and the victims of such misinformation are likely those who already experience victimization: the marginalised and the vulnerable. These representational harms exact a social cost not only on the immediate victim, but on the ability and willingness of a society to critically engage with, and question, misinformation and disinformation. Additionally, it is crucial to acknowledge the allocative nature of these harms. Specifically, how do we transform information environments so all have access to reliable, local and trustworthy media? In the case of aforementioned closed networks, how do we integrate balanced news to minimise harm? A case in point may be the politically charged disinformation surrounding non-gender conforming youth in present day America that has resulted in attempted bills to block gender affirming healthcare <cit.>, which has arguably arisen from charged disinformation environments. A further question arises in who, through education or resources, possesses the ability to identify misinformation and disinformation? These harms require multiple mitigating efforts both to protect the marginalised, but also to transform information consumption through education. 
Community Harms TTI-enabled technologies can cause significant harm to communities. We categorize these harms as both representational, involving the misrepresentation of individuals or groups, and allocative, concerning unequal resource distribution and its societal effects. These types of harms often connect with individual and social representational harms, such as misleading content leading to polarisation and, ultimately, social disruption. TTI-enabled misinformation and disinformation can threaten social, political and financial systems. We wish to highlight the potential of TTI technologies to cause political harms. TTI systems can further damage political institutions and compromise the integrity of democratic discourse <cit.> through election interference <cit.>, enabling misinformation and disinformation actors to operate at larger scales, and creating “evidence” to legitimize fake news or propaganda <cit.>. In addition, we highlight the risks posed when TTI systems are used to generate culturally offensive content. As mentioned, TTI systems offer the ability to generate culturally or politically offensive content through “backdoors”, or simply because the precautions enacted by developers do not account for all cultures. For example, blasphemous content or images of religious or political figures are potentially deeply harmful to certain societies. Furthermore, these risks are especially concerning for communities who are more susceptible to democratic and social instabilities and may have fewer data protections <cit.>. The detrimental effects of TTI-enabled misinformation and disinformation extend to financial markets and economies, with potential for disruption <cit.>. TTI systems also have the potential to increase the risk of conflict and state violence <cit.>. It is important to recognise the long-term effects of such harms on broader community climates in relation to the individual harms mentioned previously. For example, fomenting distrust in others through misinformation breeds an unstable information environment for everyone, but especially for those who are already historically victimised. Furthermore, these harms affect all communities who view, trust and share visual media, and as such, AI-enabled visual misinformation is potentially deeply harmful. § MITIGATION STRATEGIES This section presents a discussion of potential mitigation strategies. Addressing the risks and harms associated with TTI systems often necessitates the integration of multiple mitigation approaches. Local mitigation, at the level of a single system, can possibly address instances of localised harm. However, for broad harms that occur at the level of community or society, multi-disciplinary and multi-stakeholder efforts are required to enact any meaningful mitigation. Such widespread mitigation strategies would necessitate significant changes in the current practices of TTI model and system development and deployment. We categorize mitigation strategies into participatory projects, operational solutions, technical solutions, and socio-legal interventions. Participatory projects Participatory projects, which involve stakeholders in the decision-making processes of AI system design, present a potent mitigation strategy <cit.>. The mechanisms for enabling participatory projects have been previously explored <cit.>.
Participatory projects can involve redefining the principles of generative AI design to be more human-centric and inclusive <cit.>, such as the creation of creative assistive technologies <cit.>. Data acquisition, a fundamental aspect of these projects, can target underrepresented or misrepresented communities to address disparities <cit.>. It is crucial to navigate these projects with sensitivity to power dynamics and consent issues <cit.>. Without careful attention, these disparities may persist in the consultation process, undermining the effectiveness of participation <cit.>. Certain solutions, such as “opt-out” functions may contribute to addressing copyright infringement, however this relies on artists' being aware of this use of their data, disadvantaging those with limited “tech literacy”. It is important to recognise that participatory projects are not an afterthought, but rather as a proactive measure to counter discrimination and exclusion in AI. This entails not just balancing datasets but also focusing on representation and involvement of marginalized identities. Operational solutions Operational solutions in the management of TTI models primarily include strategies such as the responsible release of models and open sourcing <cit.>. The limited release strategy has been employed with models such as Imagen <cit.> and Parti <cit.>, and in the staggered release of DALL-E 2 <cit.>. This approach allows for a certain degree of control, potentially enabling the recall of the technology to prevent malicious uses or other unintended consequences. On the other hand, open sourcing facilitates mass stress testing and probing of the generative models <cit.>. This can uncover potential vulnerabilities or biases in the models, allowing for improvements and the fostering of transparency. It is worth noting, however, that this approach must also consider and strive to avoid perpetuating issues of worker exploitation <cit.>. However, both these solutions offer limited remedies if the underlying datasets and models remain wrongfully biased and harmful. Furthermore, these solutions do not fully address downstream impacts, such as job displacement, which may result from the widespread use of TTI-enabled technologies. Therefore, it is important to pair these operational strategies with consistent evaluation and reform of the models, their applications, and metrics for measuring their social impacts. Technical solutions To tackle the potential pitfalls of TTI systems, various technical research strategies have been explored. Technical research primarily aims to build more robust, safe, and reliable models. Recent developments include “find and replace” methods <cit.>, semantic steering <cit.>, and filtering techniques <cit.>. However, these strategies have their limitations. For instance, it has been argued that filtering could exacerbate bias <cit.> or fail to address it entirely <cit.>. Furthermore, mitigation via prompt editing has shown to have limited impact due to the complex and embedded nature of biases <cit.>. A significant body of research focuses on detection of synthetic media as a mitigation strategy. Techniques include the use of GAN architectures <cit.>, blockchain verification <cit.>, fingerprinting <cit.>, and watermarking <cit.>. Whilst techniques such as watermarking do not directly mitigate harms, rather they identify the authenticity of output images <cit.>, they can deter potential misuse. 
The expansion of fair detection capabilities <cit.> is promising, but, as investigated in <cit.>, there is as of yet no perfect approach to the detection of synthetic media. While technical mitigations like filtering can address output harms related to harmful content creation, other risks associated with TTI systems, such as miscommunication, job loss, or copyright infringement, cannot be resolved with technical solutions alone. Socio-legal interventions Mitigating harm in the context of TTI-enabled technologies could significantly benefit from the creation of legal and policy guidelines and regulations. Media literacy and user education have proven to be effective tools in addressing misinformation and manipulation, fostering critical engagement with digital content <cit.>. Increased corporate culpability could ensure more stringent fact-checking, transparent practices, and adherence to community standards, fostering an environment of accountability <cit.>. Government legislation and local and global regulation can play a pivotal role <cit.>, with potential measures ranging from defining limits to controlling the dissemination of harmful content <cit.>. The strategy of limiting monetary rewards from the spread of misinformation can serve as a potent deterrent <cit.>. In this dynamic and complex landscape, comprehensive and continuous research on the misinformation and disinformation environment becomes critical <cit.>. Labelling content is often proposed as an intervention; however, it may affect trust in non-labelled content <cit.> and may have unforeseen negative consequences <cit.>. The nuances of such interventions therefore need careful consideration. Notwithstanding these interventions, we must acknowledge potential challenges, such as resistance from tech companies due to economic interests, or concerns over infringement on free speech. A balance therefore needs to be struck to ensure these interventions are effective and proportionate. § OPEN QUESTIONS AND FUTURE RESEARCH While the conducted review revealed a number of well-acknowledged risks associated with TTI systems, our analysis also highlighted several knowledge gaps. We briefly discuss these gaps in order to highlight open questions and future directions for research. Output bias We identified several forms of neglected output bias, including ageism and anti-Asian sentiment, for which we found no targeted mitigation strategies. Ageism, a bias observed in GAN face generators <cit.>, remains a largely unexplored area in recent TTI research. Moreover, studies on racial bias tend to focus primarily on the contrast between Black Africans and White Americans or on distinctions between light and dark skin <cit.>. Other instances of such bias, such as those affecting indigenous communities, deserve further attention. We also found limited research on the treatment of religious bias, such as in <cit.>. These output biases can affect both users, who may struggle to generate appropriate images, and downstream parties who are exposed to content that primarily reflects established norms and stereotypes. Dialect bias TTI models have been shown to create discrimination beyond outputs. For example, TTI systems may favour white-aligned American English over other dialects <cit.> or languages. Speakers of a limited number of languages – such as English and Chinese – are able to fully leverage these models.
While translation technologies do exist, the accuracy and quality of such translations, especially when they need to communicate the nuances of prompts, remain suspect. Research on macaronic prompting demonstrates that DALL-E 2 has some “understanding” of other European languages, however it primarily relies on English <cit.>. Depending on the training data and processes used, users may need to conform linguistically to use TTI systems effectively. This, in turn, reinforces the idea that alternative English dialects are subpar <cit.>. Pre-release moderation The use of labour in traditionally pillaged countries[A term sustainability writer Aja Barber uses to highlight the role that exploitation of resources by the Global North had in these countries’ development.] to moderate the output of publicly available generative models has been reported <cit.>. Moderation workers often experience psychological harm, with insufficient support <cit.>, and there is a power imbalance between those developing these models and profiting from their use, and those tasked with pre-release moderation. It is important that companies actively pursue fairer labour practices, so as to reduce harm for moderators. Job displacement It is important to recognise the displacement of profit that is enabled by systems such as TTI models <cit.>. If a user can freely generate art in the style of an artist, why pay the artist? However, we wish to draw attention to the nuances of this displacement, that is, the exacerbation of existing inequalities. The people already marginalised by society will be most impacted by this loss of income. Further, work opportunities in technology companies can be even more heavily skewed against gender and racial minorities than those in the creative industries <cit.>, meaning profits may be moving from female creatives of colour into the pockets of white men running tech companies. Furthermore, we wish to acknowledge the effects of job displacement on image subjects. For example, sex workers cannot currently exert agency over, nor profit from, their images being included within training datasets. These images feed the creation of non-consensual pornographic material, often combining a sex worker's body with a celebrity face. We identified a website specifically designed to host models trained on individual sex workers, celebrities and public figures, in order to generate “personalised” porn. Furthermore, if stock imagery, advertisements or modelling photos come to frequently feature generated humans <cit.>, it is important we assess who is being displaced. For example, do companies use generated imagery to fulfil a diversity target, rather than find humans? We recognise the possible disconnect between the appearance of racial, gender or other diversity in stock imagery and who is receiving compensation for their time. Miscommunication We identify the problem of miscommunication across cultures and countries using TTI systems. This is especially significant in current TTI technology given the ability to rapidly create images from Western-centric datasets. Solutions to miscommunication require multi-disciplinary anthropological and technical research to understand the translation of forms and appearances into other cultures, and subsequently the building of inclusive datasets. Furthermore, we wish to highlight the problems related to flooding information environments with generated content. This is under-explored in the context of TTI systems, especially given the scale and speed of generation.
This risk is not directly related to the types (and harms) of outputs produced, but considers the effects of mass synthetic media production on communities. Socio-political instability Many researchers have explored the possible effects of AI on democratic processes and structures <cit.>. We call attention to the specific risks posed by TTI technologies, many of which are covered within this paper, such as the rise of populism and nationalism supported by false evidence, as has been recognised in present-day America <cit.>, assisted by narratives of “alternative facts”. We consider the possible use cases of TTI models within these contexts to be an important, and widening, gap in the literature. This topic requires research beyond political considerations only, and would benefit from alignment with deepfake research, some of which has already considered such risks. Future research directions Technology companies building TTI (and other generative) models have a responsibility to address many of the risks discussed here; however, analysis of TTI models is insufficient without establishing benchmarks against which we can assess safe, ethical and fair performance. <cit.> present a “living benchmark” for large language models. Similar frameworks need to be developed for TTI models. Building benchmarks and performance requirements necessitates input from a broad range of stakeholders including government, developers, research communities, image sources, subjects, users and vulnerable parties. The involvement of developers and researchers is especially vital given the high technical skill threshold for understanding generative models, as we have identified through the course of our analysis. The alignment of developmental goals with wider social goals will enable focused mitigation when harms arise, as current development and mitigation choices are left in the hands of technology companies. We also argue for the importance of mitigation strategies outside of technical solutions. Research producing actionable insights from methods such as interviews and case studies can assist our understanding of the impact of synthetic media. Work such as the interview and diary study of <cit.>, who argue for a holistic understanding of misinformation environments, is essential. Interviews that engage with identified victims of TTI model harms would greatly assist the development of mitigation strategies; see, for example, <cit.>. Finally, we primarily focused on examining the risks and harms that occur directly from the development and use of TTI models. For lack of space, we excluded an examination of indirect harms, such as environmental unsustainability, that result from the development of these models. The environmental impact of these models could have severe effects on globally marginalised communities, who are often most vulnerable to climate change yet typically have the least access to these technologies. The environmental risks of developing and deploying TTI systems are also highlighted in the context of Large Language Models (LLMs) <cit.>. This subject requires additional research to better understand the origins of the energy consumed in training TTI models, the global distribution of carbon emissions, and the regions most affected by these emissions. Moreover, potential strategies for using renewable energy sources in model training, as a key component of reducing environmental impact, should be explored.
Open questions The review and analysis conducted within this paper enabled our identification of a number of open questions. * How can we rethink data gathering and output moderation with respect to privacy, ownership and identity? For example: * How do we implement functional and retroactive data deletion? * How might source image creators be protected from “copyright laundering”? * How can we “protect” future datasets from corruption by output images, and benchmark a “good" dataset? * How do we allocate responsibility, and compensate for harm? * How can we best flag and mitigate offensive use? * How do we manage TTI-enabled technologies with respect to non-Western communities, such as avoiding miscommunication? * How can the environmental costs of training and using these models be attenuated? * How do we maintain a “ground truth” in data and visual media? * What are the long-term social costs of generating visual content? There are a number of regulatory efforts currently addressing data access and the use of AI, with modifications underway to incorporate generative technologies like TTI models. These include the EU AI Act <cit.>, the Algorithmic Accountability Act in the US <cit.>, and China's Deep Synthesis Provisions <cit.>, among others. Multiple ongoing lawsuits could shape future legal perspectives on generative models, including TTI-induced systems. The outcomes of these cases are yet to be determined and will likely impact the regulatory landscape surrounding these AI technologies.[For reference, here are several ongoing litigation cases: Doe 1 et al v. GitHub et al, Case No. 4:2022cv06823 (N.D. Cal.); Andersen et al v. Stability AI et al, Case No. 3:23-cv-00201 (N.D. Cal.); Getty Images v. Stability AI, Case No. 1:2023cv00135 (D. Del.); Tremblay et al v OpenAI, Case No. 4:23-cv-03223(N.D. Cal.); Getty Images v Sability AI (England), Case IL-2023-000007. We thank Andres Guadamuz for providing information regarding these cases.] As this paper cannot – within the page limit – adequately provide an exhaustive analysis of such relevant regulatory efforts, we offer five recommendations that we suggest would be useful in guiding generalised regulatory and policy initiatives. Some of these recommendations may already be covered by existing regulatory frameworks. Nonetheless, we believe it is beneficial to outline all of them here. * Establish a multi-stakeholder benchmark for responsible and safe performance of TTI systems, with concern for the risks raised in our typology. * Integrate digital literacy and media literacy into educational programs to help users understand the limitations and potential risks associated with TTI systems. * Clearly communicate to users when their data will be used to train TTI systems and how resulting images might be used, and obtain explicit consent for such use. * Ensure that copyright ownership is clearly identified and respected when generating images from text, and establish clear rules for attribution and usage. * Develop novel, multi-stakeholder safeguards to prevent the creation and dissemination of inappropriate or harmful images, especially images that are discriminatory, violent, and threats to security. Further, we acknowledge that these recommendations are applicable to other multi-modal generative models. For example, the growing public discourse of apprehension and fear regarding AGI could be somewhat abated by Recommendation 2. 
Throughout this paper, we have sought to highlight the importance of amplifying the voices of typically excluded stakeholders. By extension, we recognise the importance of fostering collaboration between the public, policymakers, industry leaders, researchers, and civil society organizations in order to ensure innovative, fair and effective regulatory frameworks. § CONCLUSION This paper presented a typology of risk associated with TTI-induced technologies, followed by a succinct review of relevant mitigation strategies and a discussion of open questions concerning the development and use of TTI systems. Although we provided some preliminary recommendations, we acknowledge that additional perspectives, expertise, and research are necessary to refine this typology and enhance our understanding of the social implications of TTI systems. § ACKNOWLEDGMENTS We would like to thank the UKRI Arts and Humanities Research Council (grant AH/X007146/1) for the policy fellowship that supported this work. We thank Shannon Vallor, Ewa Luger, and the members of the Ada Lovelace Institute for helpful discussions. We also thank James Stewart, Lilian Edwards, Andres Guadamuz, and three anonymous reviewers whose comments improved our work. Eddie L. Ungless is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by UKRI (Grant EP/S022481/1) and the University of Edinburgh, School of Informatics. Charlotte Bird is supported by the Baillie Gifford PhD Scholarship at the Centre for Technomoral Futures. § TAXONOMY METHODOLOGY We conducted our searches utilising the Semantic Scholar API. Semantic Scholar indexes over 200 million academic papers. To capture relevant papers we selected five seed papers covering biased training data, biased image generation and bias in text-to-image models <cit.>. To capture papers relevant to misinformation harms, we selected three papers relevant to either deepfakes or synthetic media <cit.> or diffusion technology and evaluation <cit.>. Our search returned over 300 papers, 43 of which provided substantial and useful discussions of text-to-image technologies. Through extensive manual searches we identified a further 40 papers, most of which were technical papers. Collected papers were then analysed for stakeholders, risks, empirical investigations and open research questions. Our taxonomy of risks initially adopted an inductive-deductive approach, in that we preempted the existence of three broad categories (discrimination and exclusion, harmful misuse, misinformation) and derived subcategories from analysis of the papers. We then retroactively identified potential “gaps” in the literature, based in part on analogous research into the harms of other technologies, plus the identification of key stakeholders that have not been addressed. These gaps are clearly identified in the table.
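For readers who wish to reproduce or extend this search, a minimal script along the following lines retrieves the papers citing a given seed paper through the public Semantic Scholar Graph API. This is an illustrative sketch rather than the exact queries used in this study; the endpoint, field names and the seed identifier shown here are assumptions based on the public API documentation.

import requests

# Illustrative seed-paper citation search (not the exact queries used in this study).
# Endpoint and field names follow the public Semantic Scholar Graph API; treat them as assumptions.
SEED_ID = "arXiv:2112.10752"  # hypothetical seed paper identifier
URL = f"https://api.semanticscholar.org/graph/v1/paper/{SEED_ID}/citations"
PARAMS = {"fields": "title,abstract,year,externalIds", "limit": 100}

papers, offset = [], 0
while True:
    resp = requests.get(URL, params={**PARAMS, "offset": offset}, timeout=30)
    resp.raise_for_status()
    batch = resp.json().get("data", [])
    papers.extend(entry["citingPaper"] for entry in batch)
    if len(batch) < PARAMS["limit"]:
        break
    offset += PARAMS["limit"]

print(f"retrieved {len(papers)} citing papers for manual screening")

The retrieved titles and abstracts can then be screened manually against the inclusion criteria described above.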
http://arxiv.org/abs/2307.04490v1
20230710112803
A symmetry and Noether charge preserving discretization of initial value problems
[ "Alexander Rothkopf", "Jan Nordström" ]
math.NA
[ "math.NA", "cs.NA", "hep-lat", "physics.comp-ph" ]
Preprint. Alexander Rothkopf ([email protected]), Faculty of Science and Technology, University of Stavanger, 4021 Stavanger, Norway. Jan Nordström ([email protected]), Department of Mathematics, Linköping University, SE-581 83 Linköping, Sweden, and Department of Mathematics and Applied Mathematics, University of Johannesburg, P.O. Box 524, Auckland Park 2006, Johannesburg, South Africa. Taking insight from the theory of general relativity, where space and time are treated on the same footing, we develop a novel geometric variational discretization for second order initial value problems (IVPs). By discretizing the dynamics along a world-line parameter, instead of physical time directly, we retain manifest translation symmetry and conservation of the associated continuum Noether charge. A non-equidistant time discretization emerges dynamically, realizing a form of automatic adaptive mesh refinement (AMR), guided by the system symmetries. Using appropriately regularized summation by parts finite difference operators, the continuum Noether charge, defined via the Killing vector associated with translation symmetry, is shown to be exactly preserved in the interior of the simulated time interval. The convergence properties of the approach are demonstrated with two explicit examples. Keywords: Initial Value Problem, Summation By Parts, Time-Translation Invariance, Conserved Noether Charge, Adaptive Mesh Refinement § INTRODUCTION Symmetries play a central role in our understanding of dynamical processes in both classical <cit.> and quantum <cit.> physics. Emmy Noether achieved a groundbreaking insight when she proved that the presence of a global continuous symmetry in the action S of a system implies the existence of a conserved current whenever the equations of motion are fulfilled <cit.>. Via such a Noether current, one can define a quantity which remains unchanged during the evolution of the system and which is referred to as a Noether charge. Noether's theorem thus offers a fundamental understanding of central tenets of classical physics, such as energy and momentum conservation, which it relates to the invariance of physics under translations in time and space respectively. In quantum theory, the presence of symmetries limits the type of quantum fluctuations which may occur <cit.>, with measurable consequences for the spectrum of elementary particles and their bound states. The four Noether currents associated with space and time translations are conventionally summarized in a quantity called the energy-momentum tensor T^μν(x), where μ and ν refer to spatial and temporal components. It offers access to vital properties of a system, one pertinent example being the energy density profile <cit.> of a static charge distribution via the ε(x)=T^00(x) component, or the corresponding electric field-line configuration via the spatial components T_ij(x)=F_iμF^μ_j-1/4δ_ijF_μν^2 of the electromagnetic field F_μν, referred to as the Maxwell stress tensor (see e.g. <cit.>). The simulation of dynamical phenomena in classical and quantum systems is often performed after discretizing space and time on a finite mesh (for a discussion of discretization in functional spaces see e.g. <cit.>). Finite difference schemes, formulated in their modern summation-by-parts (SBP) form (for reviews see e.g. <cit.>), offer both conceptual and practical benefits.
The SBP approach in both space and time <cit.> offers proofs of stability based on the so-called energy method, which can be extended to high-order schemes in a straightforward fashion. Not only do SBP operators mimic integration by parts (IBP) exactly in the discretized setting, but in addition they constitute a cost-effective approximation to differential operators on many mesh types. The discretization of space and time in its conventional form, i.e. considering x and t as independent variables, necessarily affects the symmetry properties of the system at hand (see e.g. the discussion in <cit.>). Where the continuum theory e.g. admits translations of any magnitude, in particular also infinitesimal ones, the discretized theory on a space-time mesh with grid spacing Δ_μ only allows one to shift space and time by that finite amount. In general this entails that a central condition of Noether's theorem, the presence of a continuous symmetry, does not hold and the corresponding continuum Noether charge fails to remain constant over time. This is particularly concerning with regards to time translation symmetry and energy conservation, which are closely related to the stability of the simulation. Artificial loss of energy is often considered benign, as it is simply a matter of losing accuracy. An artificial increase of energy will, as energy is not bounded from above, eventually lead to a divergence of the simulated dynamics, characteristic of an unstable scheme. On the other hand, if energy is conserved, it puts stringent bounds on the growth of the solution. In the context of symplectic schemes, which conserve energy on average, one can relate energy conservation directly to the stability of the numerical scheme (see e.g. <cit.> and also <cit.>). One strategy to retain energy conservation for systems with second order governing equations is to go over to a Hamiltonian approach, where only space is discretized, while time remains continuous. One converts the equation of motion of the Lagrange formalism, which is second order in the time derivative, into a set of two first order equations of motion, after replacing velocities with the so-called canonical momentum. After this step, a discrete phase-space volume preserving time stepping may be implemented (cf. Verlet-Størmer <cit.>). This approach crucially hinges on the availability of a Hamiltonian picture, i.e. whether the canonical momenta can be defined, which may face difficulties in systems with inherent constraints or may require the choice of a particular gauge, as in Maxwell's electrodynamics <cit.>. Another strategy is to determine whether Noether's theorem may be salvaged in the presence of a finite grid spacing <cit.>. One may e.g. consider modifications to the continuum energy expression, which remain conserved given a particular choice of difference approximation. However, as the necessary schemes are not of SBP type, they do not mimic other relevant properties of the continuum theory. In this study we develop a generic approach to discretize second order IVPs on the level of the system Lagrangian, while retaining the manifest translation invariance of the continuum theory. In order to do so we take inspiration from the general theory of relativity (for a textbook see e.g. <cit.>), where space and time are treated on the same footing. In this formalism the presence of translation symmetry is evident from the form of the Lagrangian itself.
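To make the contrast with the Hamiltonian-based strategy concrete, the following minimal sketch shows a Verlet-Størmer (velocity-Verlet) step for m d^2x/dt^2 = -V'(x), the kind of phase-space volume preserving time stepping referred to above. The example potential and all variable names are our own illustrative choices and are not taken from the cited references.

import numpy as np

# Minimal velocity-Verlet integrator for m*x'' = -V'(x); it illustrates the standard
# Hamiltonian-based alternative discussed above (example potential chosen by us).
m, dt, nsteps = 1.0, 0.01, 10000
dV = lambda x: x + x**3            # force from the anharmonic potential V(x) = x^2/2 + x^4/4

x, v = 1.0, 0.0                    # initial position and velocity
a = -dV(x) / m
for _ in range(nsteps):
    x += v * dt + 0.5 * a * dt**2  # position update
    a_new = -dV(x) / m
    v += 0.5 * (a + a_new) * dt    # velocity update with averaged acceleration
    a = a_new

E = 0.5 * m * v**2 + 0.5 * x**2 + 0.25 * x**4
print("energy after evolution:", E)  # stays close to E(0)=0.75; the error oscillates but does not drift

Such schemes keep the energy error bounded, but they presuppose that a Hamiltonian picture is available, which is precisely the limitation the Lagrangian-level construction developed in the following is designed to avoid.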
We build upon our prior work on formulating IVPs directly via the action of the system, which allows us to avoid the need to derive their equation of motion. The action of the system is discretized using SBP finite difference operators with a physical null-space, developed in our previous paper <cit.>. These operators are crucial in mimicking the continuum derivation of Noether's theorem (and if one wishes to do so, the equations of motion). The central outcome of this proof-of-principle study is a prescription of how to discretize second order IVPs directly on the level of the Lagrangian, while retaining the continuum time translation symmetry and thus exact conservation of the corresponding Noether charge. No reference to a Hamiltonian is required. We observe that a non-equidistant discretization emerges in the time coordinate, which represents a form of automatic adaptive mesh refinement (AMR) <cit.>, guided by the inherent symmetries of the system. Our results open up a novel route for obtaining optimal AMR procedures, where clustering and coarsening emerge as part of the solution process, thus avoiding the conventional use of sensors (see e.g. <cit.>), adjoint techniques (see e.g. <cit.>) or error estimates (see e.g. <cit.>). In <ref> we discuss the continuum formulation of our geometrized variational approach with time considered as dependent variable. In <ref> the discretized formalism is introduced and we present its efficacy in <ref> using different example systems. We close with a summary and outlook in <ref>. § CONTINUUM FORMALISM WITH MANIFEST TRANSLATION SYMMETRY The common starting point for the formulation of the variational principle in classical point mechanics is to consider the dynamics of a system as boundary value problem (BVP). The system, which takes on position x_i at t_i evolves to position x_f at t_f and we wish to determine the trajectory it follows. Obviously this formulation is not causal, as we already need to know the end-point of the dynamics to determine the trajectory. As discussed in <cit.> and in our previous study <cit.> it is possible to formulate the variational problem as a genuine initial value problem through a doubling of the degrees of freedom of the system. In order to focus on the qualitatively novel ingredients of our variational approach, we first introduce it in the standard context of point mechanics as a BVP. The implementation for a genuine IVP is given in the subsequent subsection. §.§ Boundary value problem formulation Symmetry is a central mathematical pillar of the theory of relativity. In the special theory of relativity one formulates the laws of physics in a way that remains invariant under so-called Lorentz transformations of the coordinates, while in general relativity one constructs a description, which is invariant under an even larger class of transformations. Such a theory, invariant under arbitrary differentiable coordinate transformations, is called reparametrization invariant. Reparameterization invariance is achieved by considering both space and time as dynamical degrees of freedom. In this study we are not interested in determining the dynamical evolution of space-time itself but will simply borrow this reparametrization invariant formalism of general relativity for our purposes of obtaining a symmetry preserving discretization. As our prime example, we set out to describe the dynamics of a point mass in the presence of a potential. The first step is to convert this physics question into a purely geometric problem. 
In general relativity, the trajectory of a particle, traveling freely in (a not necessarily flat) space-time described by the metric tensor g_μν, is given by a path that generalizes the notion of the shortest path on the corresponding space-time manifold. This path is called a geodesic. While the particle may move in a (1+d) dimensional space-time with d space and one time direction, its path traces out a one-dimensional submanifold, which we can parameterize with a single, so called world-line parameter, denoted in the following by γ. We will restrict ourselves here to two dimensions, i.e. d=1, a system with one spatial and one temporal direction expressed in coordinates as x(γ)=(t(γ),x(γ)). A geodesic may be obtained from a variational principle <cit.>, which asks for the critical point of the following action functional that measures the length of the path between two space-time points x(γ_i) and x(γ_f) S= ∫_γ_i^γ_f dγ (-mc)√( g_μνdx^μ/dγdx^ν/dγ), x(γ_i)= x_i, x(γ_f)= x_f. Here Einstein's summation convention has been adopted and we have included the dimensionful prefactor mc, which, as we will show explicitly below, allows us to recover the usual action in the non-relativistic limit from <ref>. We refer to time t(γ) as the zeroth component x^0 of the vector x and to the spatial coordinate x(γ) as the first component x^1. Note that this functional is reparametrization invariant under any differentiable redefinition of the parameter γ. I.e. when converting from γ→γ^' the conversion of differentials under the square root produces terms dγ^'/dγ that cancel with the conversion factor of the measure. The geodesics of flat space-time, described by the diagonal metric tensor g= diag[c^2,-1], which arise from the critical point of the action functional S_ flat=∫_γ_i^γ_f dγ (-mc)√(c^2(dt/dγ)^2-(dx/dγ)^2), are straight lines, which are traversed with constant speed (see chapter 3.4 of <cit.>), in agreement with Newtonian mechanics. It is important to note that while our intuition of the concept of shortest path relies on geometries with positive definite metrics (Riemannian geometry), physical spacetime, as confirmed by experiment, has a metric with both positive and negative eigenvalues (pseudo-Riemannian geometry). In such a geometry the shortest path between two points can denote a saddle point of the action functional instead of a genuine minimum, as the temporal and spatial components enter relation (<ref>) with opposite sign. To describe the presence of an external force acting on a point particle in flat spacetime, one conventionally amends the action S_ flat simply by adding the potential term V(x) responsible for generating that force (see chapter 7.9 in <cit.>). Let us now discuss how we can exploit the formalism of general relativity to re-express the evolution of a particle in flat spacetime in the presence of an external force, instead as an evolution of a free particle in a non-flat spacetime. In the presence of an external force, encoded in a potential term V(x), the particle trajectory in flat space-time will deviate from the straight line. A standard procedure in the study of weak-field gravity is to reinterpret the change in the particle trajectory due to a potential, instead, as the effect of a non-flat space-time without a potential present (see e.g. chapter 8 of <cit.>). 
This reinterpretation is possible, as long as the values of the potential are smaller than the rest energy (mc^2) of the point mass, a condition which is very well fulfilled for the non-relativistic systems we are interested in solving. As we will see in the following, one can introduce the effects of a potential V(x) on a point particle with mass m in the weak-field limit of general relativity by modifying the temporal component g_00 of the diagonal metric tensor g_00=c^2+2V(x)/m, while keeping g_11=-1. I.e. one endows the metric with a non-trivial dependence on the spatial coordinate, trading the absence of an explicit external force for a non-flat spacetime. Let us now show that such a modification of the metric indeed recovers the non-relativistic action of a particle in the presence of the potential V(x). To this end we insert the modified metric <ref> into the geodesic action <ref>:

S = ∫_γ_i^γ_f dγ (-mc) √( g_00 (dt/dγ)^2 - (dx/dγ)^2 )

  = ∫_γ_i^γ_f dγ (-mc) √( g_00 (dt/dγ)^2 ) √( 1 - (1/g_00) (dx/dγ)^2 (dt/dγ)^-2 )    [using g_00>0; note (dx/dγ)^2 (dt/dγ)^-2 = (dx/dt)^2]

  = ∫_γ_i^γ_f dγ |dt/dγ| (-mc) √(g_00) ( 1 - (1/2)(1/g_00)(dx/dγ)^2 (dt/dγ)^-2 + O( (1/g_00^2)(dx/dt)^4 ) )    [using (dx/dt)^2 ≪ g_00 ∼ c^2]

  = ∫_γ_i^γ_f dγ (dt/dγ) ( -mc^2 + (1/2) m (dx/dγ)^2 (dt/dγ)^-2 - V(x) + O((V/mc^2)^2) + O( (1/c^2)(dx/dt)^4 ) )    [using V/m ≪ c^2]

  = ∫_t_i^t_f dt ( -mc^2 + (1/2) m (dx/dt)^2 - V(x) ).

In the third line we have expanded the rightmost square root of the second line, assuming that the square of the physical velocity (dx/dt)^2 is much smaller than g_00, which is to say that the particle velocity dx/dt itself is much smaller than the speed of light c. To go from the third to the fourth line, we have in addition assumed that the potential is much smaller than the rest energy of the point particle, which allows us to expand the term √(g_00)=√(c^2+2V(x)/m) in powers of V(x)/mc^2. We look for solutions where time flows forward and have thus dropped the absolute value around dt/dγ at the beginning of the second to last line. Note that <ref> is nothing but the standard non-relativistic action <cit.> for a point particle in the presence of an arbitrary potential term, with the rest energy mc^2 included. We have thus successfully related the (artificially constructed) fully geometric description of the particle in a non-flat spacetime in <ref> with the standard description of a particle propagating in flat spacetime in the presence of an external potential in <ref> in the non-relativistic limit. We see in <ref> that time emerges naturally as the independent variable in which the action integral is formulated. Of course, choosing time as the independent variable hides the inherent reparametrization invariance, which persists even in the non-relativistic limit in <ref>. Interestingly it turns out that <ref> is a generalization of the ad-hoc construction of a reparametrization invariant non-relativistic action discussed in standard textbooks on the calculus of variations (see e.g. <cit.>). <Ref> includes the rest mass term -mc^2 (dt/dγ), which is missing in the standard derivation and which in the absence of a potential contributes a dependence on (dt/dγ) that plays a role in obtaining a well-defined critical point for the time degree of freedom. The reward for our efforts lies in the fact that <ref> is manifestly invariant under the space-time symmetries of our (1+1) dimensional system. If V(x)=0, only the derivatives dt/dγ and dx/dγ, but not t and x themselves, appear in the action functional <ref>. In turn, adding a constant shift to either t or x, as in x→ x+ s, leaves the action invariant.
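The size of the neglected terms in this expansion can be checked numerically in a few lines. The following sketch is our own illustrative script (variable names, units and the example potential are ours); it uses an artificially small value of c so that the correction terms remain visible above machine precision, and compares the exact integrand of the geodesic action with its non-relativistic approximation.

import numpy as np

# Illustrative check of the non-relativistic limit; names, units and potential chosen by us.
m, c = 1.0, 100.0                      # artificially small c keeps the corrections visible
V   = lambda x: 0.5 * m * x**2         # an example potential, here a harmonic well
g00 = lambda x: c**2 + 2.0 * V(x) / m  # modified temporal metric component

x, tdot, xdot = 0.3, 1.0, 5.0          # with dt/dgamma = 1, dx/dgamma equals dx/dt << c

exact  = -m * c * np.sqrt(g00(x) * tdot**2 - xdot**2)             # integrand of the geodesic action
approx = tdot * (-m * c**2 + 0.5 * m * (xdot / tdot)**2 - V(x))   # non-relativistic expansion

print(f"relative difference: {abs(exact - approx) / abs(exact):.1e}")
# the difference is of order (dx/dt)^4/c^4 and (V/mc^2)^2, i.e. negligible for non-relativistic motion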
In the presence of a spatially dependent potential V(x), g_00(x) too becomes dependent on space x and only time translation invariance remains (as the force induced by V(x) changes the momentum of the point particle). Proving time translation invariance in the conventional action <ref> is much more involved, as one needs to consider how x as a function of t changes under such translations and in addition the boundaries of the action integral themselves are affected by the shift. None of these complications arise in <ref>[That the derivatives of space and time occur in eq. (<ref>) as squares under the square root with a relative minus sign (hiding in g_11) also entails that the action is manifestly invariant under so called Lorentz boosts. These transformations mix space and time components and are related to changes between inertial coordinate systems.]. In the calculus of variations it is known that the critical point of the action S can be obtained by solving certain differential equations, the so called geodesic equations <cit.>. It follows from considering the variation of the action in all of its dependent variables t, ṫ=dt/dγ, x and ẋ=dx/dγ δ S[t,ṫ, x,ẋ]= ∫_γ_i^γ_f dγ{∂ L/∂ tδ t + ∂ L/∂ṫδṫ + ∂ L/∂ xδ x + ∂ L/∂ẋδẋ} = ∫_γ_i^γ_f dγ{( ∂ L/∂ t - d/dγ∂ L/∂ṫ)δ t + ( ∂ L/∂ x - d/dγ∂ L/∂ẋ)δ x} + .[ ∂ L/∂ṫδ t ]|_γ_i^γ_f+ .[ ∂ L/∂ẋδ x ]|_γ_i^γ_f. where in the second line we have integrated by parts. As we are considering the variational problem as boundary value problem with the coordinates t and x fixed at the start and end points of the trajectory x(γ_i)= x_i, x(γ_f)= x_f, also the variations δ t and δ x on the boundary vanish and so do the two boundary terms above. Note that we consider t and x as distinct degrees of freedom, so that the terms in the parentheses, multiplying the arbitrary variations δ x and δ t, must vanish each independently at the stationary point δ S=0. By deriving the Euler-Lagrange equations of the system in the spirit of the standard BVP treatment of classical mechanics, the above derivation tells us that we may locate the classical trajectory of a non-relativistic particle under the influence of a potential, by finding the critical point of the action <ref> with modified g_00 component of the metric, while keeping the start and end coordinates x(γ_i) and x(γ_f) fixed. Note that there exist infinitely many different parameterizations of the trajectory described by δ S=0, which all differ by the velocity in γ, in which this trajectory is traversed. In practice these different stationary points of S lead to difficulties in numerical optimization and we therefore follow the standard practice (see e.g. discussion in <cit.> or <cit.>) of selecting a particular parameterization by choosing instead of S the variations of the functional E_ BVP=∫_γ_i^γ_fdγ E_ BVP[t,ṫ,x,ẋ]=∫_γ_i^γ_fdγ1/2( g_00(dt/dγ)^2 + g_11(dx/dγ)^2 ). It differs from S via squaring the integrand and replacing the pre-factor -mc by 1/2. These are both irrelevant changes with respect to the classical equation of motion. Since E_ BVP and S differ by a monotonous function applied to their integrands, formally the same critical point ensues. I.e. the variation of E_ BVP is given by δ L=δ√(E_ BVP)=δ E_ BVP/2√(E_ BVP)=0, so that the trajectory that extremizes E_ BVP agrees with that for S at the critical point. Note that the functional E_ BVP is not reparametrization invariant anymore. 
The derivative terms enter quadratically, and produce a conversion factor (dγ^'/dγ)^2, which cannot be absorbed by the measure dγ alone. Let us compute the Euler-Lagrange equations (the geodesic equations) for time t and space x following from the variation of <ref> δ E_ BVP[t,ṫ, x,ẋ] = ∫_γ_i^γ_f dγ{∂ E_ BVP/∂ tδ t + ∂ E_ BVP/∂ṫδṫ + ∂ E_ BVP/∂ xδ x + ∂ E_ BVP/∂ẋδẋ} = ∫_γ_i^γ_f dγ{( ∂ E_ BVP/∂ t - d/dγ∂ E_ BVP/∂ṫ)δ t + ( ∂ E_ BVP/∂ x - d/dγ∂ E_ BVP/∂ẋ)δ x} + .[ ∂ E_ BVP/∂ṫδ t ]|_γ_i^γ_f+ .[ ∂ E_ BVP/∂ẋδ x ]|_γ_i^γ_f. As the above boundary terms vanish, we are left with evaluating the individual expressions appearing in the parentheses of <ref>. Below we evaluate each of these terms individually ∂ E_ BVP/∂ t=0, ∂ E_ BVP/∂ṫ=g_00(x)dt/dγ, ∂ E_ BVP/∂ x=1/2∂ g_00(x)/∂ x(dt/dγ)^2, ∂ E_ BVP/∂ẋ=g_11dx/dγ=-dx/dγ, making explicit the ingredients to the geodesic equations for the temporal and spatial degrees of freedom d/dγ(g_00dt/dγ)=0, d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=0. The attentive reader will have recognized that <ref> constitutes a conservation equation for the expression inside the parenthesis. In the next chapter we will show that this quantity indeed is the conserved charge associated with the time translation symmetry of our system. In general the geodesic equations do not single out the conserved quantities in such a simple fashion. There however exists an systematic procedure to identify the space-time symmetries of the system in the form of different so-called Killing vectors, each of which leads to one conserved quantity (see <ref>). Note that the geodesic equations <ref> are often written in a more concise fashion in the general relativity literature (see e.g. <cit.>). They are expressed for a general metric using the so-called Christoffel symbols Γ^α_μν=1/2g^αβ( ∂ g_βμ/∂ x_ν + ∂ g_βν/∂ x_μ - ∂ g_μν/∂ x_β), where g^αβ refers to the components of the inverse of the metric g_αβ. One obtains in short hand notation with Einstein summation implied d^2 x^α/d γ^2+Γ^α_μνd x^μ/d γd x^ν/d γ = 0 . It is important to note that the derivation of the above expression involves application of the product rule, which in the discrete setting is not valid. Therefore even though in the continuum <ref> and <ref> are equivalent, we will work solely with the former, as only integration by parts (which is exactly mimicked by summation by parts) has been used in their derivation. §.§ Conserved quantities, Noether's theorem and stability Conservation of momentum and energy in general relativity is conceptually more involved compared to flat space-time, since the comparison of two quantities at different space-time points becomes a non-trivial operation due to the effects of a non-flat metric. However there may exist a vector field K^μ(x) along which transported quantities remain constant. These vector fields are known as Killing[For completeness we note that a Killing vector field K_μ is defined as solution to the Killing equation (∂ K_μ/∂ x^ν-Γ^α_μνK_α) + (∂ K_ν/∂ x^μ-Γ^α_νμK_α) =0.] vector fields K^μ(x). The Killing vector fields are generators of infinitesimal isometries of the space-time manifold. Moving all points of the manifold in the direction of the Killing field leaves the manifold unchanged. As discussed in standard literature on general relativity (see e.g. chapter 3.8 of <cit.>), each Killing vector field K^μ can be used to define a conserved quantity Q_K via the expression Q_K=g_αβK^αẋ^β. 
Computing the change of Q_K along a geodesic, parameterized by γ, one finds from combining <ref> and the equation that defines the Killing vector that dQ_K/dγ=0, i.e. it vanishes. We will give an explicit example of such a conserved quantity below. More intuitively, one can think of the role of K^μ as pointing out directions along which the metric g of spacetime in our system remains constant. In the spirit of Noether's theorem, assume that the integrand E_ BVP of our action functional E_ BVP in <ref> remains unchanged under infinitesimal translations with magnitude ϵ in the direction of K^μ. The change in coordinates under such a shift is δ x^μ=ϵ K^μ. Noether's theorem tells us that the conserved quantity corresponding to δ x^μ is given by J=δ x^μ∂ E/∂ẋ^μ, which, when written explicitly as ϵ K^α g_αβdx^β/dγ, turns out to just be ϵ Q_K. In case of our geometrized problem of determining the dynamics of a point particle under the influence of a potential V(x), the metric remains independent of time t. Thus the vector K_t=(1,0) constitutes a Killing vector associated with time translation symmetry. The conservation of the associated conserved quantity Q_t=K_t^μ g_μνẋ^ν= g_00ṫ follows straight forwardly from the geodesic equation for t d/dγ Q_t K_t=(1,0)<ref>=d/dγ( g_00ṫ) <ref>=0 , i.e. the quantity Q_t remains constant along the geodesic. Note that this quantity is different from the usual energy considered in the non-relativistic formalism. Turning to the question of stability, let us show next that as a consequence of the presence of a conserved quantity together with the form of the geodesic equations and the reasonable assumption that the potential of the system is bounded from below, it is possible to provide an upper bound on the derivatives of the trajectories obtained as critical point of the functional <ref>. In an analogy to the construction of a Hamiltonian from a Lagrangian, we define the following H_ BVP =∫_γ_i^γ_fdγ1/2( g_00(x)(dt/dγ)^2 - g_11(dx/dγ)^2 )_H_ BVP, =∫_γ_i^γ_fdγ1/2( (c^2+2V(x)/m)(dt/dγ)^2 + (dx/dγ)^2 ). Due to the flipped sign in front of g_11, compared to the action <ref>, this quantity is actually positive definite, as long as V(x) is bounded from below[Since physical forces arise from the derivative of the potential, we may always add a constant to a bounded potential that will make g_00 positive.]. H_ BVP thus provides a norm on the function space in which t(γ) and x(γ) reside. Now let us inspect the evolution of the integrand H_ BVP d H_ BVP/dγ =1/2dg_00/dγ(dt/dγ)^2+ g_00dt/dγd^2t/dγ^2+dx/dγd^2x/dγ^2, = dx/dγ[ 1/2∂ g_00/∂ x(dt/dγ)^2 +d^2x/dγ^2] + g_00dt/dγd^2t/dγ^2, <ref>=g_00dt/dγ_Q_t const.d^2t/dγ^2. To arrive at the final expression in <ref>, we use the fact that one can rewrite dg_00/dγ=(∂ g_00(x)/∂ x) ẋ and combine the first and third term to apply <ref>. This simplification tells us that the change in H_ BVP is given solely by the second derivative of time with respect to the world-line parameter. Now we can integrate up twice H_ BVP= ∫_γ_i^γ_fdγ∫_γ_i^γdγ^' (dH_ BVP(γ^')/dγ^') to get H_ BVP = m g_00(x_i)ṫ(γ_i) ∫_γ_i^γ_f dγ( ṫ(γ)-ṫ(γ_i)), = g_00(x_i)ṫ(γ_i) ( -ṫ(γ_i)(γ_f-γ_i) + ( t(γ_f)-t(γ_i) ) ), ≤ g_00(x_i)ṫ(γ_i)( t(γ_f)-t(γ_i) ) . For the last inequality we use the fact that the world-line is parameterized by an increasing γ and correspondingly time moves forward along the world-line. 
In the BVP setting, where both t(γ_i) and t(γ_f) are given apriori, <ref> constitutes a proof that the norm H_ BVP defined on the derivatives of the solution t and x grows at most linearly with time, precluding the occurrence of exponentially increasing behavior that would signal an instability, in turn establishing stability of the geometric approach. §.§ Initial value formulation So far we have shown how the geodesic equations <ref> can be obtained from a variational principle formulated as a boundary value problem in time. However for a causal description as an initial value problem, we must be able to determine the dynamics of the particle without knowledge of the final point of the trajectory. If one wishes to prescribe only initial values, i.e. positions and derivatives at γ_i, then the variations δ x^μ in <ref> do not vanish at the end of the particle world line, i.e. at γ_f. In turn the equivalence between the critical point of S and the Euler-Lagrange equations in <ref> does not hold. As discussed by <cit.> and put into practice in our previous publication <cit.> one can overcome this issue by constructing an action with doubled degrees of freedom, living on a closed contour with a forward and backward branch in γ. Since both time and space constitute dependent degrees of freedom in our approach, we need to introduce both forward and backward variants of each of them x_1(γ), x_2(γ) and t_1(γ), t_2(γ). The degrees of freedom on the forward contour enter the action functional with the usual Lagrangian, while those on the backward contour are assigned the negative Lagrangian. Choosing to build the doubled formalism based on the action E_ BVP we obtain E_ IVP =∫_γ_i^γ_f dγ E_ IVP[t_1,ṫ_1,x_1,ẋ_1,t_2,ṫ_2,x_2,ẋ_2], =∫_γ_i^γ_f dγ{ E_ BVP[t_1,ṫ_1,x_1,ẋ_1] - E_ BVP[t_2,ṫ_2,x_2,ẋ_2] }. As discussed in detail in <cit.>, the inner workings of the doubled formalism become more transparent, once we go over to expressing the action E_ IVP in terms of the central and difference coordinates x_+=1/2(x_1+x_2) and x_-=x_1-x_2 and t_+=1/2(t_1+t_2) and t_-=t_1-t_2 respectively. The variation now proceeds in the independent degrees of freedom x_± and t_± and yields δ E_ IVP[t_±,ṫ_±, x_±,ẋ_±]= ∫_γ_i^γ_f dγ{∂ E_ IVP/∂ t_+δ t_+ + ∂ E_ IVP/∂ṫ_+δṫ_+ + ∂ E_ IVP/∂ t_-δ t_- + ∂ E_ IVP/∂ṫ_-δṫ_- + ∂ E_ IVP/∂ x_+δ x_+ + ∂ E_ IVP/∂ẋ_+δẋ_+ + ∂ E_ IVP/∂ x_-δ x_- + ∂ E_ IVP/∂ẋ_-δẋ_- } = ∫_γ_i^γ_f dγ{( ∂ E_ IVP/∂ t_+ - d/dγ∂ E_ IVP/∂ṫ_+)δ t_+ +( ∂ E_ IVP/∂ t_- - d/dγ∂ E_ IVP/∂ṫ_-)δ t_- + ( ∂ E_ IVP/∂ x_+ - d/dγ∂ E_ IVP/∂ẋ_+)δ x_+ + ( ∂ E_ IVP/∂ x_- - d/dγ∂ E_ IVP/∂ẋ_-)δ x_-} + .[ ∂ E_ IVP/∂ṫ_+δ t_+ ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ṫ_-δ t_- ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ẋ_+δ x_+ ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ẋ_-δ x_- ]|_γ_i^γ_f. To arrive at <ref> we have carried out four integrations by parts. As the next step, we consider under which conditions the boundary terms in the above expression vanish. Since we prescribe fixed initial values for both time and space, the variations δ t_±(γ_i)=0 and δ x_±(γ_i)=0 vanish. What about the variations at the end of the forward and backward world-line? As long as we require that x_2(γ_f)=x_1(γ_f), t_2(γ_f)=t_1(γ_f), it follows that δ x_-(γ_f) and δ t_-(γ_f) vanish and with it the corresponding boundary terms. The only remaining terms are those at γ_f which feature δ x_+ and δ t_+. As these variations do not vanish, we instead inspect the terms multiplying them, i.e. ∂ E_ IVP/∂ṫ_+ and ∂ E_ IVP/∂ẋ_+. 
Using the definition x_1=x_+ + 1/2 x_- and x_2=x_+ - 1/2 x_- and correspondingly for t_1,2, we find from the defining equation for E_ IVP <ref> d E_ IVP/d ẋ_+ = ∂ E_ IVP[t_1,2,ṫ_1,2,x_1,2,ẋ_1,2]/∂ẋ_1d ẋ_1/d ẋ_+ + ∂ E_ IVP[t_1,2,ṫ_1,2,x_1,2,ẋ_1,2]/∂ẋ_2d ẋ_2/dẋ_+, = ∂ E_ BVP[t_1,ṫ_1,x_1,ẋ_1]/∂ẋ_1d ẋ_1/d ẋ_+ - ∂ E_ BVP[t_2,ṫ_2,x_2,ẋ_2]/∂ẋ_2d ẋ_2/dẋ_+, = g_11(x_1)ẋ_1-g_11(x_2)ẋ_2=-ẋ_1+ẋ_2. Similarly one obtains d E_ IVP/d ṫ_+ = g_00(x_1)ṫ_1-g_00(x_2)ṫ_2. Together with condition <ref> that the values of x_1,2 and t_1,2 must agree at γ_f, this result tells us that in order for the two remaining boundary terms to vanish, we need to also identify the derivatives of x_1,2 and t_1,2 at the point γ_f ẋ_2(γ_f)=ẋ_1(γ_f), ṫ_2(γ_f)=ṫ_1(γ_f). Note that we have now managed to remove the boundary terms without the need for specifying the concrete value of t's and x's at the final point γ_f. This is the central contribution of the forward-backward construction. The last remaining step is to undo the proliferation of degrees of freedom that occurred when introducing the forward-backward construction. It has been shown <cit.> that taking the so-called physical limit achieves this goal, where the constraints x_1(γ)-x_2(γ)=x_-(γ)=0 and t_1(γ)-t_2(γ)=t_-(γ)=0 are enforced. The remaining x_+ and t_+ are identified with the true classical geodesics. In terms of the Euler-Lagrange equations in parentheses in <ref> ∂ E_ IVP/∂ x_± - d/dγ∂ E_ IVP/∂ẋ_±=0, ∂ E_ IVP/∂ t_± - d/dγ∂ E_ IVP/∂ṫ_±=0, the physical limit entails that only those equations independent of x_- and t_- survive. With the construction of the action E_ IVP= E_ BVP[x_1,ẋ_1,t_1,ṫ_1] - E_ BVP[x_2,ẋ_2,t_2,ṫ_2] from a difference of the E_ BVP functionals, there will appear at least a linear dependence on the minus degrees of freedom. Hence in the physical limit only those Euler-Lagrange equations linear in x_- and t_- will survive, where the minus degrees of freedom have been removed by taking the derivative with respect to x_- or t_-. Note that we have decided to not only specify the value and derivative of x at initial γ_i but also those of t. As we wish to determine the dynamics of a point particle in the presence of a potential with given x(t_i) and dx/dt(t_i), there remains a freedom in choosing ẋ(γ_i) and ṫ(γ_i), since only their ratio needs to be fixed dx/dt(t=t_0)=ẋ(γ_i)/ṫ (γ_i). The end of the time interval traversed by the world line parameter γ, will consequently depend on the value prescribed to ṫ(γ_i) and emerges dynamically from the combined evolution of x and t. At this point we have formulated a manifest time translation symmetric variational principle that encodes the dynamics of a point particle evolving in the presence of a non-relativistic potential as initial value problem. Our next goal is to discretize the action functional E_ IVP in <ref> using SBP finite difference operators. Since all derivations of the Euler-Lagrange equations, as well as that of the conserved quantity Q_t have made ample reference to integration by parts, it is paramount to use such a discretization technique, which faithfully mimics this continuum property on a finite mesh. § DISCRETIZED FORMALISM FOR IVPS The central novelty we introduce in this section is related to the fact that the discretization of the action functional takes place in the world-line parameter γ and not in the time variable t, as in conventional discretization prescriptions. I.e. 
the values of both time t(γ) and position x(γ) remain continuous and in turn we achieve preservation of the continuum space-time symmetries even after discretization. In the presence of a potential that depends on x but not on t, the invariance under infinitesimal constant shifts in time is hence retained. This comes about, since the metric remains invariant under changes in t, which in turn leads to a simple form of the corresponding Killing equation, which shows that K_t=(1,0) indeed is a Killing vector. The symmetry of the metric under time translation is intimately related to energy conservation via Q_t and thus the stability of the simulation. In the absence of a potential, when the metric does not depend on neither t nor x, our discretized approach, in addition to K_t=(1,0), retains the continuum invariance under shifts in x via the Killing vector K_x=(0,1), as well as the invariance under boosts via the Killing vector K_η=(x,t). We will give numerical evidence that we achieve exact conservation of Q_t in the interior of the simulated domain, even in the case of highly non-harmonic motion. In contrast to other formally energy preserving schemes, such as the leap-frog, our approach, using SBP operators, is consistent with the continuum formulation, in that it only requires the actual initial conditions of the system at hand, avoiding the need to stagger the degrees of freedom (also known as insertion of dummy points). After introducing the discretization on the level of the underlying action functional, we will obtain the classical trajectory by numerically finding the critical point of that functional without the need to derive the corresponding equations of motion. To make sure that the solution of the discretized variational principle mimics as accurately as possible the continuum theory, we deploy summation-by-parts finite difference operators <cit.>. Note that we are discretizing the world-line parameter γ with equidistant steps, whereas both the values of t and x arise dynamically from the evolution of the simulation along γ. I.e. a not necessarily equidistant discretization of the time coordinate emerges dynamically in our approach. As we will see in <ref> this dynamical time discretization realizes a one-dimensional form of automatic adaptive mesh refinement, guided by the symmetries of the system. I.e. the non-equidistant discretization in t plays a crucial role in guaranteeing that the Noether charge Q_t remains conserved. Another non-standard feature of our technique is the departure from the conventional notion of carrying out a simulation on a predefined time interval. We instead provide the initial time and its velocity with respect to γ, so that the end-point of the simulation too emerges dynamically. In the following we will consider the trajectory of a point particle propagating under the influence of an arbitrary x but not t dependent potential V(x). We begin by discretizing the action functional E_ IVP of <ref> along the world-line parameter γ between γ_i and γ_f with N_γ steps, leading to a step-size of dγ=(γ_f-γ_i)/(N_γ-1). We will add to E_ IVP Lagrange multipliers to explicitly account for both the initial conditions and the connecting conditions required by doubling of the degrees of freedom. The forward and backward paths x_1,2 and times t_1,2 are described by x_1,2=(x_1,2(0),x_1,2(Δγ),x_1,2(2Δγ),…,x_1,2((N_γ-1)ΔΓ))^ T and t_1,2=(t_1,2(0),t_1,2(Δγ),t_1,2(2Δγ),…,t_1,2((N_γ-1)Δγ))^ T respectively. 
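Before assembling the discrete functional, it is instructive to confirm symbolically that the continuum doubled action, which the discretization is designed to mimic, encodes a single set of geodesic equations once the two branches are identified. The following sympy sketch does this for the linear potential V(x)=α x; it is an illustration only, and for brevity the variation is carried out directly in the forward/backward variables rather than in the ± combinations.

import sympy as sp
from sympy.calculus.euler import euler_equations

gam, alpha = sp.symbols('gamma alpha')
t1, x1, t2, x2 = (sp.Function(n)(gam) for n in ('t1', 'x1', 't2', 'x2'))

# E_IVP = E_BVP[t1, x1] - E_BVP[t2, x2] for g_00 = 1 + 2*alpha*x, g_11 = -1
L = lambda t, x: sp.Rational(1, 2) * ((1 + 2 * alpha * x) * sp.diff(t, gam)**2 - sp.diff(x, gam)**2)
E_IVP = L(t1, x1) - L(t2, x2)

for eq in euler_equations(E_IVP, [t1, x1, t2, x2], gam):
    print(sp.simplify(eq))

The equations obtained by varying the backward branch are the negatives of those of the forward branch, so that after imposing the physical limit t_1=t_2 and x_1=x_2 a single copy of the geodesic equations, d/dγ((1+2α x)ṫ)=0 and ẍ+αṫ^2=0, remains.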
The integral in E_ IVP is approximated with a quadrature rule, consistent with our choice of finite difference operator, in the form of a diagonal positive definite matrix H. The inner product on discretized paths and times thus reads ( x, x^')= x^ TH x^'. With integration by parts being a central element in establishing both equations of motion and the existence of conserved quantities, we must use a discretization that mimics IBP exactly, which is achieved by deploying summation-by-parts (SBP) operators D with the defining properties D=H^-1Q, Q^ T+Q=E_N-E_0= diag[-1,0,…,0,1]. In this study we consider both the lowest order SBP discretization scheme, referred to as and the next higher order scheme . The former is second order in the interior and exhibits one order less on the boundary. Using the trapezoidal rule for integration one has H^[2,1]=Δγ[ [ 1/2 ; 1 ; ⋱ ; 1; 1/2 ]], D^[2,1]= 1/2 Δγ[ [ -2 2 ; -1 0 1 ; ⋱ ; -1 0 1; -2 2 ]]. The scheme achieves fourth order accuracy in the interior, which reduces to second order on the boundary H^[4,2]=Δγ[ [ 17/48 ; 59/48 ; 43/48 ; 49/48 ; 1 ; ⋱; ]], D^[4,2]= 1/Δγ[ [ -24/17 59/34 -4/17 -3/34 ; -1/2 0 1/2 0 ; 4/43 -59/86 0 59/86 -4/43 ; 3/98 0 -59/86 0 32/49 -4/49 ; 1/12 -2/3 0 2/3 -1/12 ; ⋱ ]]. The SBP operators defined above are not yet ready for duty in our variational approach, as they allow for non-physical zero modes. As discussed in detail in <cit.>, we can construct null-space consistent[Note that in the context of PDE's, SBP operators are considered null-space consistent by construction, as only their right eigenvectors play a role in the equation of motion. Here due to the presence of D^ T in the action functional, also the left eigenvectors contribute, among which a highly oscillating null-mode (the so-called π-mode) can be identified (see ref. <cit.>)] SBP operators D̅ from the conventional D by deploying affine coordinates and by absorbing penalty terms, inspired by the simultaneous-approximation terms (SAT) technique <cit.>, used to regularize SBP operators. A brief overview of this regularization is given in <ref>. The idea behind the penalty term construction is that we are assigning a penalty to all functions that do not fulfill the initial conditions in t and x, which includes the non-physical zero mode of D. In turn, when we will be searching for the critical point of the discretized action functional E_ IVP the minimizer will approach the correct solution globally and the presence of the penalty term effectively prevents contamination of the correct solution by the non-constant zero mode. Explicitly our regularized and null-space consistent operators read D̅^ R,[2,1]_t= [ [ -1/Δγ + 2/Δγ 1/Δγ -2/Δγ t_i; -1/2Δγ 0 1/2Δγ 0; ⋱ ⋮; -1/2Δγ 0 1/2Δγ 0; -1/Δγ 1/Δγ 0; 0 … 0 1; ]] D̅^ R,[2,1]_x= [ [ -1/Δγ + 2/Δγ 1/Δγ - 2/Δγ x_i; -1/2Δγ 0 1/2Δγ 0; ⋱ ⋮; -1/2Δγ 0 1/2Δγ 0; -1/Δγ 1/Δγ 0; 0 … 0 1; ]]. Using the operators defined above, we can now write the discretized action functional in the following fashion E_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[c^2+2 V( x_1)/m] H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)} - 1/2{(D̅^ R_t t_2)^ T𝕕[c^2+2 V( x_2)/m] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)} + λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i)+λ_3( x_1[1]-x_i) + λ_4((D x_1)[1]-ẋ_i) + λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ]) + λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ]). Conventional matrix vector multiplication is implied in the above expression, whenever a matrix quantity such as H̅ or D̅ acts on a vector x_1,2 or t_1,2. 
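The defining SBP property, and the way the penalty term is absorbed into a regularized affine operator, are straightforward to reproduce numerically. The following numpy sketch is an illustration only (independent of the Mathematica code accompanying the paper, with helper names of our own choosing): it builds the lowest-order pair (H, D) from the matrices displayed above, checks Q+Q^T=diag[-1,0,…,0,1], i.e. the discrete integration-by-parts rule, and assembles the regularized operator acting on t for σ_0=-1.

import numpy as np

def sbp21(n, dg):
    H = dg * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
    D = np.zeros((n, n))
    D[0, :2] = [-1.0, 1.0]
    D[-1, -2:] = [-1.0, 1.0]
    for k in range(1, n - 1):
        D[k, [k - 1, k + 1]] = [-0.5, 0.5]
    return H, D / dg

n, dg = 32, 1.0 / 31
H, D = sbp21(n, dg)
Q = H @ D
print(np.allclose(Q + Q.T, np.diag([-1.0] + [0.0] * (n - 2) + [1.0])))               # True

# discrete integration by parts: u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0
u, v = np.random.default_rng(1).normal(size=(2, n))
print(np.isclose(u @ H @ (D @ v) + (D @ u) @ H @ v, u[-1] * v[-1] - u[0] * v[0]))    # True

def regularized_affine(D, H, n, value_i, sigma0=-1.0):
    # absorb the SAT penalty sigma0 * H^{-1} E_0 (u - g) into an (n+1) x (n+1) affine matrix
    Dbar = np.zeros((n + 1, n + 1))
    Dbar[:n, :n] = D - sigma0 * np.linalg.inv(H) @ np.diag([1.0] + [0.0] * (n - 1))
    Dbar[0, n] = sigma0 * value_i / H[0, 0]   # shift column, here -2/dg * t_i
    Dbar[n, n] = 1.0                          # affine-coordinate bookkeeping row
    return Dbar

Dbar_t = regularized_affine(D, H, n, value_i=0.0)      # prescribed initial value t_i = 0
t_ext = np.append(np.linspace(0.0, 1.0, n), 1.0)       # affine-extended vector (t, 1)
print((Dbar_t @ t_ext)[:2])                            # ~ dt/dgamma; the penalty vanishes here since t[0] = t_i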
The matrix denoted by 𝕕[f( x)] contains on its diagonal the values 𝕕_kk=f( x(γ_k)) and zero otherwise. We deploy an appropriately modified matrix H̅ for the inner product in the presence of the affine-coordinate regularized SBP operators (see <ref>). The initial conditions we supply are the values of the spatial and temporal coordinate x_i, t_i, as well as the initial velocities with respect to the world line parameter γ, i.e. ẋ_i and ṫ_i. Since our physical problem is formulated as an initial value problem, given t_i, x_i and the physical velocity v_i=dx/dt, there exists a freedom to choose ṫ_i and ẋ_i, as only their ratio is fixed v_i=ẋ_i/ṫ_i. We have added eight Lagrange multipliers, whose role is to explicitly implement the initial conditions (λ_1-4) and the connecting conditions at the end of the forward and backward branches of our doubled degree of freedom construction (λ_5-8). Once the action functional has been formulated in its discrete form, changing from to only requires replacement of the corresponding difference operator D and quadrature matrix H but no further changes to the functional itself. This concludes the description of our novel variational approach and we proceed to evaluate its properties and performance based on two concrete numerical examples. § NUMERICAL RESULTS In this section we will present explicit results for the numerically obtained classical trajectory of a point particle in the presence of two different potentials, V_1(x)=α x and V_4(x)=κ x^4. These two choices correspond to a model of a point mass falling in a constant gravitational field and carrying out highly-nonlinear anharmonic motion. We set the mass of the particle to unity, as well as adopt without loss of generality the convention that the speed of light c=1, which simply amounts to a particular choice of units for length and time. Let us stress again that while standard numerical methods exist to solve the equations of motion for each of these systems, the novelty of the approach presented here lies in the fact that we retain the continuum time shift invariance of the system and thus achieve exact conservation of Q_t in the interior of the simulated time domain. In addition we determine the classical trajectory directly from the action functional of the geometrized problem, without the need to derive the equation of motion. We implement the action functional <ref> in the Mathematica language[The code using both the or operator is available under open access on the Zenodo repository <cit.>.]. As the critical point of the action may be a saddle point, instead of an actual minimum, we must be careful in deploying established numerical optimization algorithms in the dynamical degrees of freedom d={ t_1,2, x_1,2,λ_1-8}. Instead of minimizing E_ IVP directly, we will minimize the Euclidean norm of the gradient |∇_ dE_ IVP|^2. Via this detour, a saddle point is converted into a minimum. In practice we deploy a chain of minimization algorithms. We start with a preconditioning based on the LBFGS quasi-Newton algorithm, which features cost efficient iteration steps, when far away from the true critical point. It is followed by further iterations based on the full Newton method, which exhibits a faster convergence rate than the LBFGS algorithm when close to the critical point. Once the critical point has been approached to at least floating point precision we switch to the interior point optimization, which showed reliable performance in identifying the critical point to any desired tolerance. 
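The gradient-norm trick is independent of the concrete functional and is easily illustrated on a low-dimensional stand-in. In the sketch below (scipy; the function E is a hand-picked two-dimensional example with a saddle point, not the actual E_IVP), minimizing |∇E|^2 converts the saddle into a minimum of the auxiliary objective, so that standard descent-type algorithms locate the critical point.

import numpy as np
from scipy.optimize import minimize

def E(d):                      # stand-in functional whose critical point is a saddle
    a, b = d
    return a**2 - b**2 + a * b + b

def gradE(d):
    a, b = d
    return np.array([2 * a + b, a - 2 * b + 1])

res = minimize(lambda d: np.sum(gradE(d)**2), x0=np.array([1.0, 1.0]), method='BFGS')
print(res.x, gradE(res.x))     # critical point (-0.2, 0.4) with vanishing gradient

In the actual computation the same idea is applied to the full set of unknowns { t_1,2, x_1,2, λ_1-8}, with the gradient of the discretized action and the chain of optimizers described above.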
For our numerical tests in Mathematica, we used of 40 and of 40. The figures shown in the following are based on results from the operator and include the outcomes from the operators when indicated in the text. §.§ Linear potential case We discretize the continuous action functional E^ lin_ IVP= ∫_γ_i^γ_fdγ1/2{( 1+2α x_1(γ) ) (d t_1/dγ)^2-(d x_1/dγ)^2} - ∫_γ_i^γ_fdγ1/2{( 1+2α x_2(γ) ) (d t_2/dγ)^2-(d x_2/dγ)^2} + λ_1(t(γ_i)-t_i)+λ_2(ṫ_1(γ_i)-ṫ_i)+λ_3(x_1(γ_i)-x_i)+λ_4(ẋ_1(γ_i)-ẋ_i) + λ_5(t_1(γ_f)-t_2(γ_2)) +λ_6(ṫ_1(γ_f)-ṫ_2(γ_2)) + λ_7(x_1(γ_f)-x_2(γ_2)) +λ_8(ẋ_1(γ_f)-ẋ_2(γ_2)) along the world-line of the particle motion between γ_i=0 and γ_f=1 with N_γ=32 points. Without loss of generality, we arbitrarily set the starting time to t_i=0 and the starting position to x_i=1. To obtain an initial velocity v_i=1/10 we choose ṫ=1 and ẋ=v_i. Note that we do not fix the value of t_f but only the initial velocity of time with respect to γ. The choice of ṫ=1 will lead to dynamics, such that t_f will be of the order of one. (In the next subsection we will also provide results for different choices of ṫ_i.) As strength for the linear potential we choose α=1/4. The corresponding discrete action functional reads explicitly E^ lin_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[1+2 α x_1]H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)} - 1/2{ (D̅^ R_t t_2)^ T𝕕[1+2 α x_2] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)} + λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i) + λ_3( x_1[1]-x_i)+λ_4((D x_1)[1]-ẋ_i) + λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ]) + λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ]). Let us take a look in <ref> at the raw results for the forward and backward time and spatial coordinates, as obtained from the critical point of E^ lin_ IVP with V(x)=α x. In the top panel, we show t_1(γ_i) as red circles and t_2(γ_i) as blue crosses, while in the bottom panel these symbols denote the spatial coordinate of the point particle trajectory x_1(γ_i) and x_2(γ_i) respectively. As required by the physical limit (discussed in <ref>), we find that the values of the doubled degrees of freedom coincide at the critical point. The solution of the corresponding continuum geodesic equations, obtained via the algorithm of Mathematica's command is shown as gray solid line and excellent agreement is observed. Note that due to our choice of ṫ_i=1 the maximum time traversed by the simulation is close to one. At first sight it appears that an equidistant discretization of time in γ emerges, but an inspection of the velocity of time with respect to γ in <ref> reveals that the time spacing dynamically adapts to the behavior observed in the spatial coordinate x. Close to the maximum of x(γ) at around γ=0.4 the temporal spacing e.g. has a minimum. This dynamically emerging time discretization constitutes an automatically generated non-trivial mesh for the time coordinate and arises naturally in our formalism. In fact an automatic AMR procedure results. Let us plot next in <ref>, the results from our geometrized formalism as physical trajectory, i.e. as x_1,2(t_1,2) (red circles and blue crosses). This allows us to compare the outcome to the solution one would obtain by following the conventional approach in the literature (see e.g. chapter 7.9 in <cit.>). There one considers time as independent variable and simply adds a potential term to the free relativistic action <ref> before deriving the corresponding Euler-Lagrange equation, which for the linear potential reads d^2x/dt^2 = -(α)(1-(dx/dt)^2)^(3/2). 
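The comparison curve is simple to reproduce independently; the following scipy sketch (illustration only, the paper performs this comparison in Mathematica) integrates the conventional equation of motion above for the parameters of the linear-potential example.

import numpy as np
from scipy.integrate import solve_ivp

alpha, x_i, v_i = 0.25, 1.0, 0.1

def rhs(t, y):                                   # x'' = -alpha * (1 - x'^2)^(3/2)
    x, v = y
    return [v, -alpha * (1.0 - v**2)**1.5]

sol = solve_ivp(rhs, (0.0, 1.0), [x_i, v_i], rtol=1e-12, atol=1e-12, dense_output=True)
x_ref = sol.sol(np.linspace(0.0, 1.0, 200))[0]   # reference trajectory x(t)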
Using the algorithm of Mathematica's command, we compute the solution of this equation of motion and plot it as gray solid line. Excellent agreement with the solution from our variational approach is observed, indicating that the geometrization strategy indeed reproduces the solution of the physical problem at hand. Note that the change in the velocity of the time coordinate manifests itself here as a slightly denser time grid around the maximum of the trajectory. After this qualitative visual inspection, let us take a closer look at the properties of the obtained solution. The first question we may ask is how well quantitatively the solution follows the naively discretized geodesic equations for time <ref> and space <ref> respectively. The continuum geodesic equations for the system at hand read d/dγ(g_00dt/dγ)=d/dγ( (1+2 α x) dt/dγ) =0, d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=d^2x/dγ^2 + α( dt/dγ)^2=0. When deriving these equations of motion from the continuum action functional <ref> we have only used integration by parts. This motivates us to proceed, considering them naively discretized by replacing the derivatives with SBP finite difference operators D( (1+2α x )∘(D t) )=Δ G^t, DD x + α (D t)∘ (D t)=Δ G^x. Here element-wise multiplication of entries of vector quantities is explicitly denoted by the symbol ∘, which implements e.g. x_1∘ x_2= ( x_1(0)x_2(0), x_1(Δγ)x_2(Δγ),…, x_1(Δγ(N_γ-1))x_2(Δγ(N_γ-1)))^ T. Note that we have introduced on the right of the above equations two quantities Δ G^x and Δ G^t, which denote the deviation from the value zero, to which the equations of motion evaluate in the continuum. By inspecting Δ G^x and Δ G^t for the trajectories x_1,2 and t_1,2 obtained from the critical point of the discretized action functional E^ lin_ IVP, we can obtain first quantitative insight into the performance of our variational approach. We plot the values of both quantities Δ G^x and Δ G^t in the top panel of <ref>. At first sight we find that deviations from the naively discretized geodesic equations are minute, except for the two last points. Note that the plot is given in logarithmic scale. Since we use a minimizer in Mathematica with set to 40, the values of <10^-30 reflect a true zero. It is apparent that both the naively discretized geodesic equation for x and t are fulfilled down to machine precision. Let us proceed to the central quantity of interest in this study Q_t, defined in <ref>, which in the continuum represents the conserved quantity associated with the time-translation symmetry of the system. We again consider its naively discretized form in the following Q_t=(D t)∘( 1 + 2α x). With the discrete action functional E^ lin_ IVP retaining manifest invariance under shifts in the time coordinates t_1,2 we wish to investigate whether also the discretized Q_t retains its role as conserved Noether charge. To this end let us focus here on the deviation Δ E of Q_t from its continuum value Δ E = Q_t -Q_t = (D t)∘( 1 + 2α x) - ṫ_i (1+2α x_i). Note that Q_t takes on the continuum value by construction at the first point in γ, as there it is defined by the initial conditions. The values obtained for Δ E from the critical point of E^ lin_ IVP using either the (red circles) or operator (blue crosses) are shown in the bottom panel of <ref>. There are two important observations to be made. First, the discretized quantity Q_t is exactly conserved in the discrete setting in the interior of the simulated time domain and only at the final point γ_f it deviates from that constant. 
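The diagnostics ΔG^t, ΔG^x and ΔE are easy to set up. As an orientation, the sketch below (numpy/scipy, illustration only) evaluates them for a continuum geodesic sampled on the equidistant γ grid; for such a sampled solution they are satisfied only up to truncation error, whereas the variational solution discussed here fulfills them to machine precision in the interior and deviates only at the final point(s).

import numpy as np
from scipy.integrate import solve_ivp

alpha, Ngam = 0.25, 32
gam = np.linspace(0.0, 1.0, Ngam)
dg = gam[1] - gam[0]

D = np.zeros((Ngam, Ngam))                       # lowest-order SBP derivative
D[0, :2], D[-1, -2:] = [-1.0, 1.0], [-1.0, 1.0]
for k in range(1, Ngam - 1):
    D[k, [k - 1, k + 1]] = [-0.5, 0.5]
D /= dg

def rhs(g, y):                                   # continuum geodesic equations, V(x) = alpha*x
    t, x, td, xd = y
    return [td, xd, -2 * alpha * xd * td / (1 + 2 * alpha * x), -alpha * td**2]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0, 1.0, 0.1], rtol=1e-12, atol=1e-12, dense_output=True)
t, x = sol.sol(gam)[0], sol.sol(gam)[1]

dG_t = D @ ((1 + 2 * alpha * x) * (D @ t))       # residual of the naively discretized t equation
dG_x = D @ (D @ x) + alpha * (D @ t)**2          # residual of the naively discretized x equation
Q_t = (D @ t) * (1 + 2 * alpha * x)
dE = Q_t - 1.0 * (1 + 2 * alpha * 1.0)           # deviation from tdot_i * (1 + 2*alpha*x_i)
print(np.max(np.abs(dG_t)), np.max(np.abs(dG_x)), np.max(np.abs(dE)))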
While the deviation Δ E(γ_f) in case of the operator is already smaller than two permille, it reduces even further to a value of 10^-6 when deploying the operator. We have investigated various potential reasons for the slight difference at the final point, such as a potential over-constraint from the connecting conditions in <ref>, but we have not identified the source as of yet. One venue to explore in the future is whether the exact enforcement of the connecting conditions plays a role, which however requires the development of a genuinely weak formulation of our approach without the use of Lagrange multipliers. It is important to point out that, as we will show explicitly below, the presence of this final differing point does not spoil the convergence to the correct continuum limit. Secondly, the value of Q_t that remains conserved in the interior agrees with the true continuum value, prescribed by the initial conditions, within machine precision. This is a highly non-trivial result, as even in energy preserving schemes, such as the leap-frog, the conserved quantities do not necessarily agree with the continuum ones. We surmise that it is the interplay of a manifest time-translation invariant formulation of the action functional, together with the resulting dynamically emerging time discretization, which achieves the conservation of the discrete Q_t at its continuum value in the interior of the simulation domain. The presence of two points that deviate from the naively discretized continuum geodesic equations may appear troublesome. However as we show in <ref> these points do not spoil the convergence to the correct continuum limit under grid refinement. In the top panel of <ref>, we select the apparently most disadvantageous points for our convergence study, i.e. we compare the deviation from the continuum geodesic equations ϵ(γ_f)_x=| x[N_γ]-x_ true(1)| and ϵ(γ_f)_t=| t[N_γ]-t_ true(1)| at γ_f, exactly where the deviation from the continuum result was maximal in the top panel of <ref>. Grid refinement is carried out and we provide the results for both the lowest order operator and the next higher operator. Even in this disadvantaged scenario, we find that under grid refinement, the discrete solution approaches the true continuum values as expected from a scheme that is second order in the interior. Taking the results, the best fit to ϵ_x reveals a scaling with Δγ^2.08, while for ϵ_t an virtually identical Δγ^2.07 ensues. Going over to the results we find that the convergence is in line with expectations for an SBP operator of 4th order in the interior with ϵ_x exhibiting a scaling of Δγ^3.07 and ϵ_t a somewhat better value of Δγ^3.48. In the bottom panel of <ref> we instead investigate the global convergence of our approach using the L_2 norm ϵ(L_2)_x=√(( x- x_ true)^ T.H.( x- x_ true)) and ϵ(L_2)_t=√(( t- t_ true)^ T.H.( t- t_ true)), where x_ true and t_ true are taken from the numerical solution of the geodesic equations, used for comparison in <ref>. We find that similar convergence rates ensue, where shows scaling Δγ^β with exponent β≥2 and shows scaling with exponent β≥ 3. These convergence result agrees with the findings of our previous study <cit.>, where the standard action functional was discretized with time as independent parameter. 
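The quoted convergence orders follow from a least-squares fit of log(error) against log(Δγ). A minimal sketch of that fit is given below; the error values are synthetic, following an exact Δγ^2 law purely to keep the snippet self-contained, and in practice they are replaced by the measured ϵ(γ_f) or ϵ(L_2) values at successive resolutions.

import numpy as np

Ngam = np.array([16, 32, 64, 128])
dgamma = 1.0 / (Ngam - 1)
err = 0.3 * dgamma**2                     # placeholder errors; substitute measured values here

beta, logC = np.polyfit(np.log(dgamma), np.log(err), 1)
print(f"fitted convergence order beta = {beta:.2f}")    # recovers beta = 2.00 for this input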
§.§ Quartic potential After considering the simplest possible non-trivial scenario with a linear potential, we now turn to a system with a quartic potential and the following continuum action functional E^ qrt_ IVP= ∫_γ_i^γ_fdγ1/2{( 1+2κ x_1^4(γ) ) (d t_1/dγ)^2-(d x_1/dγ)^2} - ∫_γ_i^γ_fdγ1/2{( 1+2α x_2^4(γ) ) (d t_2/dγ)^2-(d x_2/dγ)^2} + λ_1(t(γ_i)-t_i)+λ_2(ṫ_1(γ_i)-ṫ_i)+λ_3(x_1(γ_i)-x_i)+λ_4(ẋ_1(γ_i)-ẋ_i) + λ_5(t_1(γ_f)-t_2(γ_2)) +λ_6(ṫ_1(γ_f)-ṫ_2(γ_2)) + λ_7(x_1(γ_f)-x_2(γ_2)) +λ_8(ẋ_1(γ_f)-ẋ_2(γ_2)). Again we discretize along N_γ=32 in the world-line parameter γ. Using κ=1/2 in the potential V(x)=κ x^4 leads to dynamics that already in the small time regime considered here are distinctly anharmonic. As in the previous subsection we discretize the world-line of the particle motion between γ_i=0 and γ_f=1, set the starting time to t_i=0 and the starting position to x_i=1. For our choice of v_i=1/10 we again decide on ṫ=1 and ẋ=v_i. The discretized action functional thus reads E^ qrt_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[1+2 κ x^4_1] H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)} - 1/2{ (D̅^ R_t t_2)^ T𝕕[1+2 κ x^4_2] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)} + λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i) + λ_3( x_1[1]-x_i)+λ_4((D x_1)[1]-ẋ_i) + λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ]) + λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ]) and taking the fourth power of the x_1,2 vector is to be understood in an element wise fashion. While for the linear potential, the time geodesic appeared to depend almost linearly on γ, we find that here a distinct curvature along γ emerges, as shown in the top panel of <ref>. We plot the values of t_1(γ_i) as red circles and t_2(γ_i) as blue crosses and show as gray solid line the solution of the corresponding geodesic equation, obtained from the algorithm of Mathematica's command. Again the physical limit of equal values t_1(γ)= t_2(γ) is realized. The values of the spatial coordinate x_1(γ_i) and x_2(γ_i) as obtained from the critical point of E^ qrt_ IVP with V(x)=κ x^4 are plotted in the bottom panel of <ref> with the direct numerical solution of the geodesic equation added as gray solid line. Note that even though we have provided an initial velocity of the time along γ again with value ṫ_i=1, the final time reached by the simulation now lies at t[N_γ]=1.47. Similarly one finds that that a dynamical discretization in t emerges, which, as shown in <ref>, varies from the initial values ṫ_i=1 to (D t)[N_γ]=2.06. This behavior can be understood when realizing that the trajectory x(t) in the non-linear case shows a stronger curvature close to t=0 than at later times. I.e. we find again that the automatically generated non-trivial mesh (through automatic AMR) for the time coordinate adapts to the dynamics, by exhibiting a finer spacing at initial times. Let us take a look at the results from our geometrized formalism as physical trajectory in <ref>, i.e. plotted as x_1,2(t_1,2) (red circles and blue crosses). They are compared to the solution of the conventional equation of motion, obtained from treating time as independent variable d^2x/dt^2 = -(4κ x^3 )(1-(dx/dt)^2)^(3/2), computed via the algorithm of Mathematica's command (gray solid line) in the range t∈[0,1]. We find that within this range the solution from our geometrized discrete approach shows excellent agreement. Note that due to the non-equidistant emergent time discretization, the physical trajectory x(t), shown in <ref> extends beyond the point t=1. 
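The emergent, non-equidistant spacing of the time coordinate can also be seen directly at the continuum level. The short scipy sketch below (illustration only) integrates the quartic-potential geodesic equations for the parameters quoted above on an equidistant γ grid and prints the resulting time steps; their growth from roughly Δγ at the start to roughly 2Δγ at the end mirrors the behavior of ṫ discussed above.

import numpy as np
from scipy.integrate import solve_ivp

kappa = 0.5
g00 = lambda x: 1.0 + 2.0 * kappa * x**4

def rhs(g, y):      # d/dgamma(g_00*tdot) = 0  and  xddot + 4*kappa*x**3*tdot**2 = 0
    t, x, td, xd = y
    return [td, xd, -8.0 * kappa * x**3 * xd * td / g00(x), -4.0 * kappa * x**3 * td**2]

gam = np.linspace(0.0, 1.0, 32)
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0, 1.0, 0.1], t_eval=gam, rtol=1e-12, atol=1e-12)
t = sol.y[0]
print(t[-1])                              # final time, close to the ~1.47 quoted in the text
print(np.diff(t)[:3], np.diff(t)[-3:])    # finer time spacing at early gamma, coarser later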
As for the linear potential, let us investigate quantitatively the properties of the trajectories t(γ_i) and x(γ_i) by inserting them into the naively discretized geodesic equations. For the quartic potential, the continuum geodesic equations for the temporal and spatial coordinate read d/dγ(g_00dt/dγ)=d/dγ( (1+2 κ x^4) dt/dγ) =0, d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=d^2x/dγ^2 + 4κ x^3( dt/dγ)^2=0. Naively discretizing these equations by replacing derivatives with SBP operators leads to the following discrete geodesic equations D( (1+2κ x^4 )∘D t)=Δ G^t, DD x + (4κ x^3) ∘ (D t)∘ (D t)=Δ G^x. where again taking a power of the x_1,2 vector is to be understood in an element wise fashion. To evaluate how well the solution obtained from the critical point of E^ qrt_ IVP fulfills the naive discretized geodesic equations we have again introduced the quantities Δ G^t and Δ G^x above. As shown in <ref> also here in the highly non-linear scenario, we find that the values of both x (red circles) and t (blue crosses) follow the discretized geodesic equations excellently, except for the last two points. The most important question however remains whether in the non-linear discretized system, the continuum quantity Q_t from <ref> also remains conserved. Its naively discretized counterpart here reads Q_t=(D t)∘( 1 + 2κ x^4), and we define its deviation from the continuum result via the difference Δ E = Q_t -Q_t = (D t)∘( 1 + 2 κ x^4) - ṫ_i (1+2κ x_i^4), which we plot in the bottom panel of <ref> using the operator (red circles) and the operator (blue crosses). We find also in the case of a non-linear potential that Q_t is preserved exactly in the interior of the simulation time domain. Up to machine precision its values in the interior also take on the correct continuum value. Similar to what we saw in the linear case, the last point deviates from the continuum value. It is reassuring to see that the absolute deviation at γ_f reduces already by an order of magnitude when going from a to an operator. One may now ask whether the deviation of Δ E from its continuum value at γ_f is in some way related to the fact that we use N_γ=32 points to discretize the world-line parameter. The answer is negative, as demonstrated in <ref>. Three different datasets are shown in <ref>, where for fixed ṫ_i the grid spacing in γ is changed. The green triangles denote the results for Δ E when using N_γ=16, the red circles N_γ=32 and the blue crosses N_γ=64. We have confirmed explicitly that in all cases the values of Q_t are preserved up to machine precision in the interior of the simulated time domain. It is indeed only the last point that shows a deviation and we see that the absolute magnitude of the deviation reduces as the grid is refined. For the next test, we instead increase N_γ together with ṫ_i to let the simulation proceed to larger values of time t. In the top panel of <ref> we plot the deviation of Q_t from its continuum value for three choices ṫ_i=1, N_γ=16 (green triangles), ṫ_i=4, N_γ=32 (red circles) and ṫ_i=8,N_γ=64 (blue crosses). As seen before in the interior of the simulated time domain, the values of Q_t remain exactly preserved and only the last point deviates. We find that the magnitude of the deviation in the last point changes only marginally with the length of the simulated trajectory. For completeness the corresponding trajectories x(t) are plotted in the bottom panel of <ref>. 
Again let us emphasize that, as we will show below, the presence of this single deviating point does not spoil the convergence to the correct solution under grid refinement. The exact conservation of the quantity Q_t in the interior is remarkable, as e.g. the trajectory in the bottom panel of <ref> for ṫ_i=8,N_γ=64 shows sizable discretization artifacts (which disappear under grid refinement). We believe that it is due to the manifest time-translation invariance of the underlying action functional that the combined dynamics of x(γ) and t(γ), including the automatically generated non-equidistant time mesh, achieve conservation of the continuum quantity. The fact that the solutions we obtain fulfill the naively discretized geodesic equations and provide exact conservation of the continuum conserved charge in the interior of the simulated domain (see <ref>) bodes well for establishing its stability. Since in the IVP setting t(γ_f) is not given but emerges dynamically we cannot directly apply <ref> as proof of stability. However, as long as we can assume that the simulated time range (given a certain ṫ(γ_i) is finite, the linear bound of <ref> on the norm H_ BVP holds in the discrete setting. In turn we deduce that the solution cannot exhibit stronger than linear rise of the derivatives of either t(γ) or x(γ), implying stability of the approach. Let us now quantify the convergence properties of our variational approach using the results from the lowest order operator and those coming from the operator in <ref>. As in the linear potential case, in the top panel of <ref>, we select the most disadvantageous points for our convergence study, i.e. we compare the deviation from the continuum geodesic equations ϵ(γ_f)_x=| x[N_γ]-x_ true(1)| and ϵ(γ_f)_t=| t[N_γ]-t_ true(1)| at γ_f, exactly where the deviation from the continuum result was maximal in the top panel of <ref>. Also in the non-linear scenario we find that under refinement of the γ grid, the discrete solution monotonously approaches the true continuum values. Taking the results, the best fit to ϵ(γ_f)_x reveals a scaling with Δγ^2.08, while for ϵ(γ_f)_t an virtually identical Δγ^2.06 is obtained. For , we find that the convergence is slightly worse than in the linear potential case. As seen in the green circles plotted in <ref>, the asymptotic convergence regime is reached for 32 <N_γ <64. Once we are in that regime, we find that ϵ(γ_f)_x exhibits a scaling of Δγ^2.84, close to the expected value of three. On the other hand ϵ(γ_f)_t shows a consistent performance with a scaling of Δγ^3.13 already at N_γ=32. Let us now investigate the global convergence in the bottom panel of <ref> using the L_2 norm ϵ(L_2)_x=√(( x- x_ true)^ T.H.( x- x_ true)) and correspondingly ϵ(L_2)_t=√(( t- t_ true)^ T.H.( t- t_ true)), where x_ true and t_ true are taken from the numerical solution of the geodesic equations, used for comparison in <ref>. Reassuringly we find that the global convergence properties of our approach are better than indicated by those of the most disadvantaged point in the top panel of <ref>. Indeed we find that for the operators, the global scaling regime is reached already at N_γ=32, similarly to the case. In addition, the global convergence rate Δγ^β for operators lies consistently above β≥3 for both the x and t degrees of freedom. Again, these convergence result are in good agreement with those of our previous study <cit.>, where the standard action functional was discretized with time as independent parameter. 
§ SUMMARY AND OUTLOOK In this study we have put forward a novel geometric variational approach for solving a large class of initial value problems, associated with the dynamics of point particles evolving under a generic x dependent potential V(x). Taking inspiration from the general theory of relativity, we consider both time and spatial coordinates of the point particle as dependent variables of a world-line parameter γ. We select a continuum action functional, which in the non-relativistic limit reduces to the standard action of point mechanics and whose critical point encodes a set of geodesic equations for x(γ) and t(γ). After doubling the degrees of freedom t_1,2 and x_1,2 we can relate the critical point of the corresponding doubled d.o.f. action with the classical trajectory. Using the concept of Killing vectors we identify conserved quantities, e.g. related to the continuum time translation invariance of the action. Deploying the regularized SBP operators originally introduced in <cit.>, we discretize the continuum action and add Lagrange multipliers to enforce the initial and connecting conditions between the doubled t_1,2 and x_1,2. The main novelty of our approach is that the discretized action retains the continuum symmetries, in particular the invariance under time translations. Exactly mimicking integration by part through the use of SBP finite difference operators entails that the derivation of the conserved charges associated with the Killing vectors of the system is also exactly mimicked in the discrete setting. I.e. the continuum conserved quantities Q_K retain their role even after discretization. The numerical results we obtain for both a linear and highly non-linear potential show that a discretization of time t now indeed emerges dynamically, adapting to the behavior of the spatial coordinate x. This is a concrete realization of an automatically generated non-equidistant mesh for the time coordinate, guided by our action functional with manifest continuum translation symmetry, i.e. an automatic AMR procedure. We have shown that except for the last two points along the discrete γ, the solution we obtain follows the naively discretized geodesic equations excellently. Even more importantly, the naively discretized counterpart Q_t of the continuum conserved quantity Q_t remains exactly preserved in the interior of the simulated time domain, where it even retains its continuum value exactly within machine precision. A small deviation from the values in the interior for Q_t is observed at the last step γ_f. This deviation however decreases both under grid refinement, as well as when increasing the order of the SBP operator. Point-wise, as well as global scaling analyses under grid refinement show that even in the presence of two points deviating from the naively discretized geodesic equations at the last two γ steps, the solution monotonously improves and manages to approach the true solution. When deploying the operator, we achieve consistent scaling in Δγ^β with β≳ 2 for both the linear and non-linear potential. For in case of a linear potential the dependence on the grid spacing follows the expected power law Δγ^β with β≳3 for all values of N_γ we inspected. For the non-linear potential, the scaling regime for point-wise convergence at the last point γ_f is reached with for 32<N_γ<64 with a slightly worse scaling of 2.84 ≤β≤ 3.13. 
Global convergence on the other hand shows consistent scaling at all N_γ we considered, with exponents β≥3, in agreement with the findings in our previous paper <cit.>, where the standard action functional was discretized with time as independent variable. This study presents a proof of principle that initial value problems can be discretized, while retaining continuum symmetries. Three future directions will be explored: we may ask how we can capture systems of ordinary differential equations that e.g. contain a term that is proportional to a first derivative in x with respect to time? To this end we must exploit the versatility of the doubled d.o.f. approach more thoroughly. Furthermore we will explore how the reparametrization invariant formulation can be applied to partial differential equations in higher dimensions, taking insight from how the non-relativistic action emerges from our relativistic starting point in <ref>. In addition, to better understand the origin of the single deviating value in the otherwise exactly preserved Q_t, we will develop a genuinely weak formulation of our approach, devoid of Langrange multipliers for enforcing initial and connecting conditions. We believe that the quest for retention of defining continuum properties in discretized systems is both conceptually and practically valuable. Not only does the preservation of symmetries place powerful physical constraints on the solution but in addition offers a mechanism for the automatic generation of optimal discrete spacetime grids to ensure conservation of the Noether charges associated with these symmetries. We hope that this study provides the community with a novel impulse in this direction. § ACKNOWLEDGEMENTS A. R. thanks Will Horowitz for inspiring and insightful discussions and Alex Nielsen for valuable insight on the general theory of relativity. A. R.  gladly acknowledges support by the Research Council of Norway under the FRIPRO Young Research Talent grant 286883. J. N. was supported by the Swedish Research Council grant nr. 2021-05484. The study has benefited from computing resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway under project NN9578K-QCDrtX "Real-time dynamics of nuclear matter under extreme conditions" § REGULARIZED SBP OPERATORS IN AFFINE COORDINATES We here briefly review the idea and some technical aspects of constructing null-space consistent regularized SBP operators using affine coordinates, developed in our study <cit.>. The goal of regularizing conventional SBP operators D, such as those defined e.g. in <ref> and <ref>, lies in removing their unphysical zero modes. These may appear as highly oscillatory eigenfunctions to D^ T with zero eigenvalue. To this end we take inspiration from regularization techniques developed for partial differential equations. There the concept of null-space consistent SBP operators has been discussed in detail (see e.g. <cit.>). For a differential equation, the boundary conditions may be enforced in the weak sense by adding a simultaneous approximation penalty term (SAT) <cit.>, which can be partially absorbed into the finite difference operator, lifting its zero modes. Take for example a simple discretized first order differential equation D u = λ u + σ_0 H^-1E_0( u - g), where the SAT penalty term has been added to the right-hand side. 
It features the matrix E_0= diag[1,0,…,0] that makes reference only to the first entry in the discretized functions u and g, the latter of which contains the initial value in its first entry g=(u_0,0,…,0). The SAT term also contains H^-1, i.e. Δ t^-1, which increases the strength of the penalty as Δ t→0. The parameter σ_0 in the SBP-SAT approach is tuned to satisfy stability properties and its optimal value is found to be σ_0=-1 (see e.g. ref. <cit.>), a choice we adopt in the following. In the differential equation context one conventionally absorbs the term proportional to u into a new D̃=D - σ_0 H^-1 E_0. This new operator is devoid of zero modes <cit.> and may be inverted to obtain the solution u. In the context of an action functional, such as <ref>, we do not have an equal sign around which we can move the SAT term. Instead we must incorporate the whole of the penalty term directly in a modified SBP operator. Since the penalty term in our example <ref> contains both a contribution that is proportional to the function u and a constant shift g it amounts to an affine transformation on u, which can be captured efficiently using affine coordinates. To this end let us write A̅[ b] x̅ = A x+ b, where A̅[ b] refers to a matrix A extended by an additional row and column with the value 1 placed in the lower right corner. The new column available in A̅[ b] is populated with the entries of b. The vector x̅ is nothing but x extended by one more entry with value unity. We will use this construction principle to define a regularized D̅ from our conventional SBP operator D. Since we have both x and t as independent degrees of freedom each with independent initial conditions x_i and t_i, we must define different shifts b^x and b^t respectively and thus end up with two different regularized SBP operators D̅_t and D̅_x. The shift terms are nothing but the constant part of the corresponding SAT term, absorbed into the SBP operator b^x= σ_0 H^-1 E_0 g^x, b^t= σ_0 H^-1 E_0 g^t. Here g^x= diag[x_i,0,⋯,0] and g^t= diag[t_i,0,⋯,0] encode the initial values for x and t respectively. As mentioned before, we choose the parameter σ_0=-1, whenever a penalty term is incorporated in D̅, motivated by the fact that in the conventional treatment of IVPs using the SBP-SAT approach, this value leads to a minimal discretization error (see e.g. ref. <cit.>). The resulting regularized SBP operators to be deployed on t_1,2 or x_1,2, are given explicitly in <ref> and <ref> respectively. Consistent with the affine coordinates used in the newly defined D̅_t and D̅_x, we also amend the discretized trajectories t_1,2 and x_1,2 by one more entry that is given the value one. In order to compute inner products in the space of discretized functions, we also have to modify the quadrature matrix H→H̅ by amending it by one row and column filled with zeros. We do not include the value one in the lower right corner in order to correctly account for the fact that the vectors appearing as arguments to the inner product contain an auxiliary final entry, which does not contribute to the value of the inner product and only facilitates the efficient implementation of shift operations. For more details on the affine coordinate regularization technique see <cit.>. § COMPETING INTERESTS The authors declare that they have no competing interests. § AUTHOR'S CONTRIBUTIONS * A. Rothkopf: formulation of the geometric variational approach, literature review, numerical experiments, writing, editing * J. 
Nordström: guidance on the formulation and implementation of SBP-based discretization schemes, literature review, editing